alabastervlog 2 days ago

This comes alongside lots of people (elsewhere) posting screenshots of posts they made on Reddit about plans for explicitly non-violent, organized protests, along with the ban notices they got for those posts for “inciting violence” (apparently those notices include the “offending” post; idk, I’ve never been banned on Reddit, probably because I barely use it)

They could be lying, photoshopping the screenshots, or whatever, but a lot of them tell similar stories.

So, if these reports are true, FYI: you may get warned for upvoting posts about upcoming protests that plan to follow the ACLU’s guide to legal, peaceful protest, even when that fact is stated in the post itself.

antifa a day ago

And of course, later we're going to find out that "violent content" included a surprising amount of self-defense and non-violent free speech content...

  • nosioptar a day ago

    That happened to me last year. Someone asked if anyone knew the law in my state regarding defending yourself against an aggressive dog. I linked the relevant law. I got a 72-hour ban for advocating violence, because the linked law says you can kill an aggressive dog to defend people/pets/livestock.

duxup 2 days ago

This seems like a poor choice.

People voting aren’t and won’t be considering the rules.

Maybe they should but that’s not how it works.

  • defrost 2 days ago

       In addition, while this is currently “warn only,” we will consider adding additional actions down the road.
    
    The back end can exert all manner of downward pressure on accounts that routinely upvote violence... textual warnings are just the tip.
  • swatcoder 2 days ago

    That's exactly the attitude they're trying to engineer by providing a new feedback mechanism.

    Receiving warnings will help users who mean to be positive participants understand when their votes may have had a different effect, and will provide citable evidence against users who are being stubbornly negative/anti-community.

    There are fair critiques to be made about how this might suppress freedom of expression, or how it might affect anxious people who will panic on receipt of a warning, but it otherwise sounds like a practical idea for what it intends to achieve.

    • duxup 2 days ago

      I don’t think this discourages anyone who wants to upvote that kind of thing.

      I think they misunderstand how people vote.

      • defrost 2 days ago

        It will modify the behaviour of some of the people who vote.

        Which is the point.

        The interesting part is whether they can group users who are more predisposed to upvote violence, develop a measure of that tendency, and observe a change for some subgroup.

        If the admins want to downplay violent content they have a range of options, e.g. given suitable measures, users who do not modify their behaviour can have their upvotes on violent content diluted or negated, with further actions up to shadowbanning or beyond(?) (see the sketch below).

        Reddit literally has a decade+ database on how people vote.
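
        To make that concrete, here is a minimal sketch of what vote dilution could look like. Nothing here reflects Reddit's actual systems; every name and number (VoterRecord, vote_weight, the 0.5-per-offence decay) is invented purely for illustration.

          # Hypothetical sketch of "diluted or negated" upvotes; all names
          # and thresholds are invented, nothing here is Reddit's real system.
          from dataclasses import dataclass

          @dataclass
          class VoterRecord:
              warnings_received: int = 0              # warnings for upvoting violent content
              flagged_upvotes_after_warning: int = 0  # repeat upvotes on flagged content

          def vote_weight(voter: VoterRecord, content_is_flagged: bool) -> float:
              """Weight an upvote; dilute it for users who keep upvoting flagged content."""
              if not content_is_flagged or voter.warnings_received == 0:
                  return 1.0  # normal, full-strength vote
              # Each repeat offence after a warning cuts the vote's weight by 0.5,
              # bottoming out at zero (a fully negated vote).
              return max(0.0, 1.0 - 0.5 * voter.flagged_upvotes_after_warning)

          # A twice-warned user upvoting flagged content for the third time:
          voter = VoterRecord(warnings_received=2, flagged_upvotes_after_warning=3)
          print(vote_weight(voter, content_is_flagged=True))  # 0.0 -- vote negated

        The specifics don't matter; the point is that once per-account measures exist, enforcement becomes a continuous dial rather than a ban/no-ban switch.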

        • duxup 2 days ago

          I honestly don’t think it will. People vote on instinct; they aren’t thinking about the rules, and they won’t be.

          This is a hall-monitor type of solution, where the only thing they can think of is to give the hall monitor permission to use the stick more.

          • defrost 2 days ago

            Imagine a thousand people who upvote a specific bit of imagery and all get warnings.

            Are you saying they are all sheep who act in unison, and that not a single one would react differently from the others?

            How does that notion compare to, say, results from marketing tests that trial various strategies on groups and look at the spectrum of reactions in order to tune a campaign going forward?
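
            Taking that analogy literally: the standard readout of such a trial is a difference-in-differences between a warned cohort and a matched control. The sketch below uses invented counts and a made-up flagged_upvote_rate helper, purely to show the shape of the measurement.

              # Illustrative A/B-style readout; every figure here is made up.
              def flagged_upvote_rate(flagged: int, total: int) -> float:
                  """Share of a cohort's upvotes that landed on flagged content."""
                  return flagged / total if total else 0.0

              # Hypothetical cohorts: users who received warnings vs. matched controls
              warned_before  = flagged_upvote_rate(800, 10_000)  # 8.0% before warnings
              warned_after   = flagged_upvote_rate(550, 10_000)  # 5.5% after warnings
              control_before = flagged_upvote_rate(790, 10_000)  # 7.9%
              control_after  = flagged_upvote_rate(780, 10_000)  # 7.8%

              # Difference-in-differences: the warned group's change beyond the
              # background drift seen in the control group.
              effect = (warned_after - warned_before) - (control_after - control_before)
              print(f"estimated effect of warnings: {effect:+.3f}")  # -0.024, a 2.4-point drop

            A subgroup whose rate doesn't move at all would be exactly the "do not modify behaviour" cohort described upthread.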