I like reputation systems in general because they mimic the way social norms were created in the real world (provided having a bad reputation had consequences but was still redeemable).
However I'm not sure about:
> Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores.
Would it be possible to automate "good behavior" with some very low-stakes actions (basically, sharing funny cat pictures?) with one actor, in order to earn tokens that can raise the reputation of other actors (controlled by the same human being in the end), which are then able to send spam in proportion?
> The Reputation Score S of any Actor automatically decreases over time
I get that this is a feature, to prevent the emergence of "gurus", but wouldn't it incentivize authors to produce poor-quality content regularly over producing high-quality content? You don't want to be Don Knuth in such a system...
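For what it's worth, decay rules like this are usually exponential. A minimal sketch, assuming an exponential form and a made-up half-life (neither is specified in the article):

```python
# Hypothetical sketch of the time-decay mechanic. The article only
# says the Reputation Score S decreases over time; the exponential
# form and the half-life of 10 periods here are invented assumptions.

def decayed_score(score: float, epochs: float, half_life: float = 10.0) -> float:
    """Reputation remaining after `epochs` periods of inactivity."""
    return score * 0.5 ** (epochs / half_life)

print(decayed_score(100.0, 10))  # one half-life -> 50.0
print(decayed_score(100.0, 30))  # three half-lives -> 12.5
```

Under any rule of this shape, an author who publishes one great thing and then goes quiet bleeds score continuously, which is exactly the perverse incentive toward frequent mediocre output.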
If I read this right:
step 1: create an actor; let's say ACTOR1
step 2: create multiple actors (bots etc.), let's say ACTOR[A-Z], to raise the reputation of ACTOR1
step 3: use all the Rating Token Balance (R) of ACTOR[A-Z] to give reputation (S) to ACTOR1
step 4: ACTOR1 now has a good reputation on the blockchain
step 5: ???
step 6: back to nickserv on IRC with known nicknames to find someone with good reputation
EDIT: I missed the "Note they can only acquire reputation by being good Actors in the eyes of already-good Actors", but I am curious to see how this could not be rigged by an algorithm.
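The steps above can be sketched as a toy simulation. All of the mechanics here (the per-actor token grant, the 1:1 token-to-reputation transfer) are my guesses for illustration, not the paper's actual formulas:

```python
# Toy model of the ACTOR[A-Z] attack described above.
# Token/score mechanics are invented assumptions, not from the paper.
import string

reputation = {}  # actor -> Reputation Score S
tokens = {}      # actor -> Rating Token Balance R

# step 1: create the beneficiary
reputation["ACTOR1"] = 0.0

# step 2: create 26 sock puppets, each with a fresh token allowance
for letter in string.ascii_uppercase:
    bot = f"ACTOR{letter}"
    reputation[bot] = 0.0
    tokens[bot] = 10.0  # assumed periodic token grant

# step 3: every bot spends its whole balance on ACTOR1
for bot in tokens:
    reputation["ACTOR1"] += tokens[bot]
    tokens[bot] = 0.0

# step 4: ACTOR1 now "has a good reputation"
print(reputation["ACTOR1"])  # 260.0
```

The paper's counter-measure is presumably to weight each rating by the rater's own reputation, so fresh zero-reputation bots would contribute nothing; whether that weighting survives an attacker who first earns some genuine reputation is the open question.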
You didn't miss it, you're right. Actors get "S" reputation they can award others periodically.
The max amount they can accumulate/award is based on their own reputation.
Makes no difference: 1 bot awarding 50 points or 50 bots awarding 1 point each.
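A quick sanity check of that equivalence (the token amounts are arbitrary): if awarded reputation is just the sum of tokens spent on you, the split across bots is irrelevant.

```python
# Under a naive "reputation gained = sum of tokens received" rule,
# only the attacker's total token supply matters, not the bot count.

def total_awarded(awards: list[float]) -> float:
    return sum(awards)

one_big_bot = total_awarded([50.0])      # 1 bot, 50 tokens
many_small_bots = total_awarded([1.0] * 50)  # 50 bots, 1 token each
assert one_big_bot == many_small_bots == 50.0
```

The split only starts to matter if ratings are weighted by the rater's own reputation, which pushes the attacker toward concentrating reputation in a few "credible" accounts instead.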
The analysis is completely shallow and ignores decades of papers on the topic in economics and CS.
Care to elaborate and share some interesting links? I'm honestly interested and I agree the article is shallow.
Drexler (yes, that one) and Miller's Agoric Papers (1988)
http://papers.agoric.com/papers/
http://papers.agoric.com/papers/incentive-engineering-for-co...
Fun fact: These papers played a role in the inspiration for me to work on my own similar project when I was still in my early 20s (over a decade ago). It is not an entirely original idea. I had a face to face chat about it with Scribd's CEO at one of their happy hours back in the day.
I'd guess that Quora had some component of this in their early vision before dropping it.
Some more recent work has been done by Anton Kolonin. There are probably others, but I'm too jaded to care enough to keep up anymore.
https://aigents.com/
https://aigents.com/papers/2017/Assessment-of-personal-envir...
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3906383
What’s Clyde Drexler got to do with anything? :)
This shouldn't even be evaluated until it has a section on Sybil attacks.
Literally nothing proposed here is new, and it is manifestly uninformed by the experience of others.
They do note "Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores. Note they can only acquire reputation by being good Actors in the eyes of already-good Actors. "
However, that "mitigation" already suggests a potential attack on the proposed algorithm: once a bad actor does acquire some reputation capital (e.g. by investing some money/effort in genuinely good content), they gain the effective ability to invest that reputation into spawning a self-reinforcing farm of reputation bots, using the mechanism of periodic replenishment of rating tokens. Those bots need no approval from any good actors once they are initially boosted/sponsored by the bad actor, and they can then indeed take over the whole system by maximizing the effect of the rating process in a way real users simply won't.
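To make the concern concrete, here is a toy round-based model. The replenishment rule (tokens proportional to current reputation, spent entirely inside the farm, converting 1:1 into reputation) and the 10% rate are invented assumptions, not the paper's numbers:

```python
# Toy model of a self-reinforcing reputation farm.
# Assumed mechanics (not from the paper): each round the farm receives
# rating tokens proportional to its current aggregate reputation, and
# spends all of them on its own members.

def run_farm(seed_reputation: float, rounds: int,
             token_rate: float = 0.1) -> float:
    """Farm's total reputation after `rounds` of purely internal rating."""
    farm_total = seed_reputation  # the bad actor's honestly earned capital
    for _ in range(rounds):
        farm_total += token_rate * farm_total  # replenish, then self-spend
    return farm_total

print(run_farm(seed_reputation=100.0, rounds=30))  # ~1745: 100 * 1.1**30
```

Under these assumptions the farm compounds exponentially with zero input from good actors; only a reputation decay rate that outpaces the token replenishment rate (the decay feature discussed above) would cap it.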
Yes, that's what I was looking for too. I think the following quote is meant to address that:
> To bootstrap the system, an initial set of Actors who share the same core values about the to-be-created reputation each gets allocated a bootstrap Reputation Score. This gives them the ability to receive Rating Tokens with which they can rate each other and newly entering Actors.
So basically it assumes there is an initial set of trusted Actors.
But even with this assumption there are potential issues (e.g. good Actors may be bribed).
I can't tell if this is the result of "I thought of a blog post topic and spent two hours on it" or the distillation of years of thinking. It looks very simplistic, but maybe it's "elegant". Who knows.
Intuitively I am thinking it might work for small communities (where it is not even needed) and there's no way it will make its way into something like a Facebook. So I do not really see the point of it.
I'm leaning towards the former as this topic has been covered in so much more depth over the last 2 decades.
The problem is not the lack of a well-defined ruleset, but that any open or clearly stated ruleset can be optimized against. One possible solution is to obfuscate the rules and enforce them in an irregular, inconsistent, and unfair manner, under the oversight of a well-intentioned dictatorship or aristocracy, which has proven unworkable long term.
So someone gets negative reputation, this is visible, moralists then join in adding more negative reputation, and, possibly without much or even any good reason, they suddenly have a massive negative reputation from which they cannot escape. This sounds like an amplifier for gang stalking and other bad behavior.
The suggestion sounds almost exactly like Slashdot's system. Personally, I'm partial to web of trust.
I can just say I hope I'd be able to opt out of this
We're already living in a post-truth world with broken incentives. Something like this will, at worst, add a transparency layer to it.
So everyone gets a global identifier attached to social (media) credits? I'm quite sure that's someone's pipe dream: Putin, Xi, ...
Another technocrat who wants to improve human life by literally de-humanizing it. Hey let’s upload and outsource how we feel about each other!
It never, ever works.
It reminds me of the trope of monsters that ravenously pursue human flesh in order to feel human, and fail fail fail (see Sandman’s Corinthian for the latest incarnation of that).
Reputation is a complex social phenomenon. What this post recommends is some sort of online reputation-related game that over time everyone would be forced to play. No thank you.
This sounds ripe for abuse.
- who assigns IDs to each actor?
- who removes stolen actor IDs?
- even with the anti-abuse provisions for bad actors, what is stopping me from making a botnet to artificially increase my reputation?
- this system, no matter how decentralised and "hands off", will inevitably be corrupted under the guise of good will and justice.
- who is stopping people from downvoting my opinion if it is perfectly valid but does not align with the community groupthink? Will this then affect my usage of other services?
This is uncomfortable for me, similar to social credit systems, a la Black Mirror.