phtrivier 2 years ago

I like reputation systems in general because they mimic the way social norms were created in the real world (provided having a bad reputation had consequences, but was still redeemable)

However I'm not sure about:

> Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores.

Would it be possible to automate "good behavior" with some very low-stakes actions (basically, sharing funny cat pictures?) with one actor, in order to get tokens that raise the reputation of other actors (controlled by the same human being in the end), which are then able to send spam in proportion?

> The Reputation Score S of any Actor automatically decreases over time

I get that this is a feature, to prevent the emergence of "gurus", but wouldn't it incentivize authors to produce poor-quality content regularly over producing high-quality content? You don't want to be Don Knuth in such a system...
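
To make the incentive concrete, here's a toy calculation (the exponential decay curve and all the numbers are my assumptions; the post doesn't specify how S decreases):

```python
import math

def reputation(events, decay_rate, at_time):
    """Sum of reputation awards, each decayed exponentially since it was earned.
    Assumes S decays as S * exp(-decay_rate * dt); the post doesn't give the curve."""
    return sum(s * math.exp(-decay_rate * (at_time - t))
               for t, s in events if t <= at_time)

DECAY = 0.1  # per week (assumed)

# "Don Knuth": one landmark contribution worth 100 points at week 0
knuth = [(0, 100)]
# a prolific poster: a small post worth 5 points every week
prolific = [(w, 5) for w in range(52)]

print(reputation(knuth, DECAY, 52))     # ≈ 0.55: the landmark has all but evaporated
print(reputation(prolific, DECAY, 52))  # ≈ 47.3: the steady drip holds its value
```

Under any decay rule of this shape, a stream of cat pictures beats TAOCP after a year.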

peppermint_tea 2 years ago

If I read this right:

step 1: create an actor; let's say ACTOR1

step 2: create multiple actors (bots etc.), let's say ACTOR[A-Z], to raise the reputation of ACTOR1

step 3: use all the Rating Token Balance (R) of ACTOR[A-Z] to give reputation (S) to ACTOR1

step 4: ACTOR1 now has a good reputation on the blockchain

step 5: ???

step 6: back to nickserv on IRC with known nicknames to find someone with good reputation

EDIT: I missed the "Note they can only acquire reputation by being good Actors in the eyes of already-good Actors", but I am curious to see how this could not be rigged by an algorithm
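
The steps above are basically a loop. A toy sketch, assuming (as in my naive reading) that tokens replenish every period, convert 1:1 into S, and that the "already-good Actors" gate isn't enforced:

```python
class Actor:
    def __init__(self, name):
        self.name = name
        self.S = 0   # Reputation Score
        self.R = 0   # Rating Token Balance

    def rate(self, target, tokens):
        tokens = min(tokens, self.R)
        self.R -= tokens
        target.S += tokens  # assumed: 1 token -> 1 point of S

def replenish(actors, tokens_per_period=1):
    for a in actors:
        a.R += tokens_per_period

actor1 = Actor("ACTOR1")                                   # step 1
bots = [Actor(f"ACTOR{c}") for c in "ABCDEFGHIJKLMNOPQRSTUVWXYZ"]  # step 2

for _period in range(10):                                  # steps 2-3, repeated
    replenish(bots)
    for bot in bots:
        bot.rate(actor1, bot.R)  # dump every token on ACTOR1

print(actor1.S)  # 260 = 26 bots * 10 periods * 1 token
```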

  • TruthWillHurt 2 years ago

    You didn't miss it, you're right. Actors get "S" reputation they can award others periodically.

    The max amount they can accumulate/award is based on their own reputation.

    Makes no difference. 1 bot awarding 50 points or 50 bots awarding 1 point each.

kwyjibo123456 2 years ago

The analysis is completely shallow and ignores decades of papers on the topic in economics and CS.

samatman 2 years ago

This shouldn't even be evaluated until it has a section on Sybil attacks.

Literally nothing proposed here is new, and it is manifestly uninformed by the experience of others.

  • PeterisP 2 years ago

    They do note "Bad Actors cannot take over the system until they have played by the rules long enough to have accumulated sufficiently high reputation scores. Note they can only acquire reputation by being good Actors in the eyes of already-good Actors. "

    However, the "mitigation" already demonstrates a potential attack on the proposed algorithm. Once a bad actor does acquire some reputation-score capital (e.g. by investing some money/effort in genuinely good content), they gain the effective ability to invest that reputation into spawning a self-reinforcing farm with a large quantity of reputation bots, using the mechanism of periodic replenishment of rating tokens. Those bots need no approval from any good actors once they have been initially boosted/sponsored by the bad actor, and they can then indeed take over the whole system by maximizing the return on the rating process/investment in a way real users simply won't.
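
    A toy sketch of that attack. The "already-good" threshold, the replenishment rate, and the 1-token-per-point conversion are all my guesses, since the post doesn't pin them down:

```python
GOOD_THRESHOLD = 10  # assumed: ratings only count if the rater's S is at least this

class Actor:
    def __init__(self, S=0, R=0):
        self.S, self.R = S, R  # Reputation Score, Rating Token Balance

    def rate(self, target, tokens):
        tokens = min(tokens, self.R)
        self.R -= tokens
        if self.S >= GOOD_THRESHOLD:  # the "already-good Actors" gate
            target.S += tokens

# seed capital earned from genuinely good content
ringleader = Actor(S=100, R=100)

# phase 1: sponsor bots past the goodness threshold
bots = [Actor() for _ in range(10)]
for bot in bots:
    ringleader.rate(bot, 10)  # each bot is now "good" in the system's eyes

# phase 2: the farm replenishes and reinforces itself, no outside approval needed
for _period in range(5):
    for bot in bots:
        bot.R += 5                   # assumed periodic token replenishment
        bot.rate(ringleader, bot.R)  # every token flows back to the ringleader

print(ringleader.S)  # 350 = 100 seed + 10 bots * 5 periods * 5 tokens
```

    The gate only delays the attack until phase 1 is paid for; after that the farm compounds for free.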

  • hddqsb 2 years ago

    Yes, that's what I was looking for too. I think the following quote is meant to address that:

    > To bootstrap the system, an initial set of Actors who share the same core values about the to-be-created reputation each gets allocated a bootstrap Reputation Score. This gives them the ability to receive Rating Tokens with which they can rate each other and newly entering Actors.

    So basically it assumes there is an initial set of trusted Actors.

    But even with this assumption there are potential issues (e.g. good Actors may be bribed).

dizhn 2 years ago

I can't tell if this is the result of "I thought of a blog post topic and spent two hours on it" or the distillation of years of thinking. It looks very simplistic, but maybe it's "elegant". Who knows.

Intuitively I am thinking it might work for small communities (where it is not even needed) and there's no way it will make its way into something like a Facebook. So I do not really see the point of it.

  • PeterStuer 2 years ago

    I'm leaning towards the former as this topic has been covered in so much more depth over the last 2 decades.

numpad0 2 years ago

The problem is not the lack of a well-defined ruleset, but that any open or clearly intended ruleset can be optimized for. One possible solution is to obfuscate the rules and enforce them in some irregular, inconsistent and unfair manner, under the oversight of a well-intentioned dictatorship or aristocracy, which is proven unworkable long term.

m0llusk 2 years ago

So someone gets negative reputation, this is visible, moralists then join in adding more negative reputation, and possibly without much or even any good reason that person suddenly has a massive negative reputation from which they cannot escape. This sounds like an amplifier for gang stalking and other bad behavior.

im3w1l 2 years ago

The suggestion sounds almost exactly like slashdot's system. Personally I'm partial to web of trust.

user_named 2 years ago

I can just say I hope I'd be able to opt out of this

  • Nuzzerino 2 years ago

    We're already living in a post-truth world with broken incentives. Something like this will, at worst, add a transparency layer to it.

tgv 2 years ago

So everyone gets a global identifier attached to social (media) credits? I'm quite sure that's someone's pipe dream: Putin, Xi, ...

satisfice 2 years ago

Another technocrat who wants to improve human life by literally de-humanizing it. Hey let’s upload and outsource how we feel about each other!

It never, ever works.

It reminds me of the trope of monsters that ravenously pursue human flesh in order to feel human, and fail fail fail (see Sandman’s Corinthian for the latest incarnation of that).

Reputation is a complex social phenomenon. What this post recommends is some sort of online reputation-related game that over time everyone would be forced to play. No thank you.

___null___ 2 years ago

This sounds ripe for abuse.

- who assigns IDs to each actor?

- who removes stolen actor IDs?

- even with the anti-abuse provisions for bad actors, what is stopping me from building a botnet to artificially increase my reputation?

- this system, no matter how decentralised and "hands off", will inevitably be corrupted under the guise of good will and justice.

- who is stopping people from downvoting my reputation if it is perfectly valid but does not align with the community groupthink? Will this then affect my usage of other services?

This is uncomfortable for me, similar to social credit systems, a la Black Mirror.

  • ___null___ 2 years ago

    Wrote this over lunch and made a typo.

    "...downvoting my opinion."