antiterra 5 years ago

Product Person #1: Hey, here’s a writeup that says if we show too many positive posts to people, it creates bad feelings. Should we show fewer positive posts?

Product Person #2: Not sure, wouldn’t negative posts make people feel worse?

Product Person #1: Wow, I dunno. Maybe we should try adjusting the mix of posts people see one way a bit and see what happens?

Product Person #2: Not a bad idea, how about we try both? You know, an A/B test.

Product Person #1: Hmmm. Ok, but— when we’re done, let’s have some scientific review of our data just so that we can correct the record and push along the science around this stuff.

Journalist: This company deliberately made people sad

  • chatmasta 5 years ago

    I admit I didn’t read the article because I thought I knew the event it was referring to, but didn’t Facebook actually publish this as a psychology study? I.e., they didn’t just use it for A/B testing features, but actually thought they were doing something “good”, to the point of publishing a study about it. It’s laughable now lol.

    • cryptoz 5 years ago

      Yes, they published the paper and publicly bragged about how successful they were in making hundreds of thousands of people feel sad.

      https://www.pnas.org/content/pnas/111/24/8788.full.pdf

      Edit: And in their public defense of this paper, they claim we all gave 'informed consent' to this kind of psychological manipulation when we signed up for Facebook, because it's in the TOS.

      They literally define 'informed consent' as the legal text in their TOS. I mean, that's insane; they know their users don't read that before signing up. It's not f'ing informed or consent. Ugh, this makes me so mad.

      Also what kind of sick company puts into their TOS that they can emotionally manipulate you on purpose whenever they want? Fucked up.

      • wpietri 5 years ago

        Agreed. If people end up surprised about what they nominally consented to, it's not informed consent. Another example of Facebook's "move fast and break people" ethos.

  • thatcat 5 years ago

    That's the definition of an unethical experiment design, and it would be considered academic misconduct if it were academic work. Regardless of intent, FB probably shouldn't be letting front-end devs design psychological experiments at mass scale.

    https://psychology.wikia.org/wiki/Experimental_ethics

    • plaidfuji 5 years ago

      To be fair, the article states that the study was designed by a data scientist. Not that that implies that any additional ethical considerations were taken into account.

      • seattledev 5 years ago

        Data scientist means someone who's had classes in stats and knows how to use software to analyze data. Those classes don't cover how to run experiments, and they don't cover ethics.

        • urthor 5 years ago

          To be fair, the profile of a data scientist at a F500 non-tech company and at a company like FB is slightly different. Facebook's data science crew is known for being heavier on the stats and lighter on the software (relatively speaking), and probably can design a good experiment.

          That said, anyone who signs up for a job at Facebook doesn't give a shit about ethical design of experiments.

          • mgraczyk 5 years ago

            Hi, I used to work on ML and ran experiments like this one at FB. Just want to point out that the effect of experiments like this on an individual is smaller than the effect of making the app start 0.1 seconds faster, or retraining the network used to filter hate speech. The only reason the effect is significant is because N>650k, so tiny effect sizes can be detected.
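
            For intuition, here's a rough back-of-envelope sketch (my own illustration with a made-up 5% baseline rate of negative words and illustrative group sizes, not the study's actual Poisson model) of how the smallest detectable effect shrinks as N grows:

              from math import sqrt

              # Smallest detectable difference in the rate of "negative words"
              # between two equal groups of size n, at the ~95% significance cutoff.
              # Assumes a made-up 5% baseline rate; not the study's actual model.
              def min_detectable_diff(p, n, z=1.96):
                  return z * sqrt(2 * p * (1 - p) / n)  # z * std. error of the difference

              for n in (1_000, 10_000, 650_000):
                  print(f"n={n:>7,}: {min_detectable_diff(0.05, n):.4%}")
              # n=1,000 needs a ~1.9 percentage-point shift to register;
              # n=650,000 flags a shift of ~0.07 points.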

            • thatcat 5 years ago

              What's the variance on that effect? Doesn't a large data set like that just lead to type II errors?

    • graeme 5 years ago

      Doesn’t that suggest professional ethics are immoral? They block a temporary, one-off experiment which could suggest how to avoid sadness.

      The alternative is relying on intuition only and possibly making people permanently sadder. Or is there a way to achieve this knowledge consistent with the field of research ethics?

      We saw this with challenge trials in the pandemic where ethicists fretted people would reject them and concluded challenge trials are immoral. Meanwhile opinion polls suggest people view challenge trials as virtuous and the current method of waiting for infections as immoral.

      • seattledev 5 years ago

        Yes, there is a way. You let them specifically opt in to it, you provide them disclosure documents explaining what's being tested, and you provide free access to mental health resources.

        • ARehmat 5 years ago

          It would be interesting to find out if these experiments led to any suicides, etc. Although I suppose that Facebook is not interested in releasing that type of information.

          • zwkrt 5 years ago

            Which drop of rain caused the dam to break? In the book Good Omens there is a demon who considered himself the best not because he was the best deceiver or torturer, but because of his ability to cause mass malcontent.

            The real evil isn’t in the experiment, it’s that the researchers found that Facebook makes people’s lives worse. How many suicides are caused by Facebook’s investors? Whether they can mitigate it this way or that is splitting hairs.

      • thatcat 5 years ago

        Isn't every experiment a temporary one-off? Ethics are strict because of the dark history of human experimentation.

      • TomSwirly 5 years ago

        > Doesn’t that suggest professional ethics are immoral? They block a temporary, one off experiment which could suggest how to avoid sadness.

        NO, companies should not perform secret psychological experiments on people without their consent because they "could suggest how to avoid sadness."

        > challenge trials in the pandemic

        You mean the ones people consented to? Why is consent not an issue for you?

    • unethical_ban 5 years ago

      How would it have been ethical?

      If their current process was already suspected to be the worse of the two, and their goal was to improve people's moods per se, what is the ethical thing to do? Keep people feeling more sad?

      Or, perhaps, stop being algorithmic in what people see, or else put people in full control of simple sorting (prioritize my "starred" category and sort by date)?

    • shadowgovt 5 years ago

      Believe it or not, they didn't have the front-end devs design it.

      There was a specific task force at Facebook intended to better understand customer psychology. It came from them.

    • mgraczyk 5 years ago

      Which part of this conception of experimental ethics does this violate? Facebook's experiment did not directly cause the emotions; the posted content did that. Even if it did, this sort of thing is well within the bounds of normal experimentation with large systems.

      This is similar to claiming that traffic experiments which change the flow of cars, in turn causing some people to experience more traffic and become unhappy, are unethical. Do you think traffic experiments are unethical?

      • seattledev 5 years ago

        I disagree. If Facebook is testing content like this, then they explicitly know which content makes people sad. Manipulating someone's feed to push them content they know makes them sad, for the purpose of experimenting with their emotions, is highly unethical to me.

        There is a massive difference: studying and directing traffic is meant to make driving better for everyone. Facebook's goal is to intentionally make people unhappy for the sole purpose of making more money.

      • xg15 5 years ago

        > Facebook's experiment did not directly cause the emotions, the posted content for that.

        Facebook definitely caused the emotions - by deliberately choosing to show particular content.

        > This is similar to claiming that traffic experiments which change the flow of cars, in term causing some people to experience more traffic and become unhappy, are unethical. Do you think traffic experiments are unethical?

        If the objective of the traffic experiment had been to deliberately get a certain group of cars stuck in a traffic jam, this would absolutely be unethical.

        • vickychijwani 5 years ago

          > If the objective of the traffic experiment had been to deliberately get a certain group of cars stuck in a traffic jam, this would absolutely be unethical.

          A closer analogue would be a traffic experiment designed to gauge the emotional effects of a particular route. That's an important difference.

          The article itself says the experiment was designed to look for evidence of emotional contagion, which is quite different from "it was designed to make people sad".

          Also, in another thread it's pointed out that the effect sizes from this study were extremely small - something like 0.3% more negative words were used by ~150k people. The effect is said to be on the same scale as any minor UI change, like a size/color change of the "like" button. So it's hard to see this as anything other than folks looking for a reason to get outraged.

          • xg15 5 years ago

            > The article itself says the experiment was designed to look for evidence of emotional contagion, which is quite different from "it was designed to make people sad".

            That was their research goal, yes. Their methodology was this:

            > For some, that meant 90% of all "positive" posts were removed from their newsfeed for a week, rendering the social network a pit of despair.

            Research can be unethical even if the intention of the research is not.

            For the traffic analogy: if part of that traffic experiment involved a routing change that researchers knew would likely cause a traffic jam, then the experiment would be unethical, no matter how warranted the research might be.

            That's the reason you cannot use placebos to test life-saving medication (without prior informed consent), even though it would certainly be beneficial for science if you could.

            And in Facebook's case, their research goal wasn't even that ethical in the first place. Maybe we could have this talk if they were trying to find a cure for depression through emotional manipulation, but this was literally just about preventing people from leaving Facebook:

            > "At the same time, we were concerned that exposure to friends' negativity might lead people to avoid visiting Facebook."

      • thatcat 5 years ago

        - Requirement for a clear research protocol statement from the outset, including a clear timetable and detailed procedures
        - Protection of clinical trial subjects
        - Clinical trials on minors
        - Clinical trials on incapacitated adults not able to give informed legal consent
        - Establishment of mandatory Ethics Committees

    • slver 5 years ago

      Maybe it's unethical in an academic context, but in a public service context, you have an existing service, and you're in charge of improving it.

      When you have an improved algorithm, would you change the algorithm all at once for everybody, or would you test your change on a small % of users before you go full-scale? You'd do the latter. There's no better option, because if you're wrong, the impact is much more devastating.

      In other words, there's always an algorithm filtering your feed. If that were unethical, it would make Facebook unethical by merely existing.

      • TomSwirly 5 years ago

        > in public service context

        This is not a "public service" - it's a for-profit company.

        > When you have an improved algorithm,

        This is not an "improved algorithm". They deliberately removed almost all positive news from some people's feeds for a week.

        • slver 5 years ago

          I meant public service as in it's a service, and it's providing itself to the public. Not as in government service.

  • mhh__ 5 years ago

    > This company deliberately made people sad

    Which would be correct, surely?

    • antiterra 5 years ago

      Would you consider ‘this company deliberately made people happy’ to also be correct, then?

    • Aunche 5 years ago

      It's technically correct, but it implies malicious intent. It's like saying "Pfizer deliberately injected people with a substance that killed them" to describe a failed drug trial.

      • dheera 5 years ago

        "it creates bad feelings"

        If this is the hypothesis they were testing for, the problem is that they subjected people to a psychological experiment without their consent, for the ultimate purpose of their own financial gain.

        How is that not malicious intent?

        • eptcyka 5 years ago

          If you dislike the _b_ version of an A/B-tested site to the point that you feel deeply disappointed by it, were the people who designed the experiment operating with malicious intent?

          I don't think Facebook engineers or data scientists care what people feel when they're adjusting their models; they only care about time spent on their platform or some other overly reductionist metric. One could argue that chasing super-efficiency will always result in abominations, but I think it's hard to say that someone is doing something immoral by just doing the human equivalent of gradient descent on the optimization problem of driving engagement. That's not to say that we shouldn't have a conversation about this and figure out ways to make these super-efficient and powerful companies find ways to be satisfied with organic engagement.

          • dheera 5 years ago

            There's a difference between disappointment about a product (which is fine) and running an experiment that you know might create bad feelings in the lives of one cohort of your subjects.

            What if an almost-suicidal person got the "b" version?

            A/B testing should be used when you think, in good faith, that BOTH "a" and "b" versions could be good and want to know which one works better, not to confirm a hypothesis that "b" has a negative impact on users' personal lives.

        • Aunche 5 years ago

          Surely, people understand that Facebook makes changes to their website, and that these changes are based on data. That is all an experiment is. If Facebook wasn't allowed to experiment, they would never be allowed to change their interface or recommendation engine at all.

          • dheera 5 years ago

            There's a difference between experimenting on the product, and conducting an experiment that deliberately puts one cohort of users in an environment hypothesized to create negative emotions.

            A/B tests are for the case where you honestly, in good faith, think both A and B could be good designs, but you don't know which one is better.

            A/B testing is for product design changes, not psychology experiments. It is NOT for running a psychology experiment on a group of subjects who didn't consent to a study, where A or B is suspected to have a negative impact on emotions and you're using the test to confirm or deny that hypothesis.

            • Aunche 5 years ago

              It's still a product decision though, especially the way the top comment frames it. It's just one that happens to be centered around emotional language. I suspect that an experiment that deals with more or less political content would have a similar effect on people's mental state.

      • moron4hire 5 years ago

        Except people have to give consent for drug trials

        • kennywinker 5 years ago

          Not just consent! Informed consent.

      • detaro 5 years ago

        Guess why you need to opt in to a drug trial.

  • tshaddox 5 years ago

    It seems like you’re implying that, by explaining in slightly more detail how this might have happened, you somehow show that the journalist was oversimplifying or distorting things. But, no, your last sentence is definitely still correct.

  • typon 5 years ago

    The constant derision of journalists in tech circles (especially on HN) is kind of shocking to see. Did you read the article, or did you miss that the journalist is relaying fellow researchers' apprehension about Facebook being allowed to conduct an unethical study? This article [1] is linked in the second paragraph.

    [1] https://www.theguardian.com/technology/2014/jun/30/facebook-...

    • hn_throwaway_99 5 years ago

      It's nothing personal against journalists, but the economics of the business mean that they have huge incentives to make things seem more outrageous or nefarious than they actually are.

      I did read the whole article, and that doesn't excuse the BS clickbait title of "Facebook deliberately made people sad."

      I find the fact that the article is complaining about emotional manipulation by Facebook, while using a deliberately manipulative title, to be more than a tad ironic.

      • threatofrain 5 years ago

        Do journalists even write the titles to their own articles? I'm sure every newsroom has their own distinct culture, but this sounds like an editor's job.

    • rendall 5 years ago

      Questioning whether someone read the article is against the guidelines:

      > "Please don't comment on whether someone read an article. "Did you even read the article? It mentions that" can be shortened to "The article mentions that.""

      https://news.ycombinator.com/newsguidelines.html

    • antiterra 5 years ago

      The commentary is more on the clickbait title, which very likely could have been written by someone other than the article’s author. (NB: journalists do A/B testing to pick the title that incites the most clicks from outrage[1].)

      Further, the article you link appears to have commentary by a single researcher.

      Here’s a writeup by a researcher who thinks it wasn’t unethical at all and that it would have passed an IRB as performed. It, of course, does not end the discussion, but it does demonstrate that the issue is more complex than a company deliberately making people sad:

      https://hbr.org/2014/07/were-okcupids-and-facebooks-experime...

      [1] https://www.washingtonpost.com/lifestyle/media/media-values-...

    • smsm42 5 years ago

      I don't know why it's shocking. The quality of work in the press has been going downhill for years, and journalists routinely misunderstand and misinterpret things they report about - sometimes innocently, through ignorance and incuriosity, sometimes maliciously, to push an agenda. It is, of course, not limited to tech - but tech is also not excluded from this global trend. Is it reasonable to expect tech circles to ignore it?

      And "the journalist is relaying" is a poor excuse - the journalist has a choice of what to relay and how to frame it, and this choice is routinely used to frame things in a specific way. There's no "just reporting" anymore, if there ever was. There's always a reason why something is reported (or not reported, as it happens) and there's always a reason why something is reported in a specific way. The choice and control is always on the press's side.

  • Graffur 5 years ago

    Why did you make this comment? Is it what you believe happened?

hackinthebochs 5 years ago

I still don't see why I should be outraged over this. A user of Facebook already consents to Facebook displaying whatever content on their feed Facebook chooses. Why should Facebook require extra consent to gather scientific evidence on the effects of the content being displayed, given they are only analyzing data they already gather?

  • cryptoz 5 years ago

    This is unethical psychological research against non-consenting adults and children that likely caused real harm. From a paper cited below,

    > This Facebook study was conducted without consent and without appropriate oversight, and may have harmed both participants and non-participants. Kramer’s apology also puts the vast number of participants in context; a full 0.04% of Facebook’s users, or 1 in 2500, were unwitting subjects in this unethical research. Many of these people were almost certainly children, and many of the participants were probably suffering from depression. It is surprising and worrying that one of the world’s most prominent companies should treat both the emotions of its users and research ethics so carelessly. Steps must be taken to ensure that international psychological and medical studies involving social network users are regulated to the same standard as other human subjects research.

    https://journals.sagepub.com/doi/10.1177/1747016115579535

    Edit: Also you said something that confused me,

    > given they are only analyzing data they already gather?

    I'm not sure what this means. But they did not only analyze existing data, they created new data specifically intended to make people feel sad, then analyzed that data.

    • antiterra 5 years ago

      Again, that means if they just accepted the wisdom of the time, which was the ‘common worry that seeing friends post positive content leads to people feeling negative or left out’ and unilaterally limited positive content, they’d still possibly be creating more negative feelings and causing this ‘harm.’ But, you know, it’d be all ethical, since they didn’t look at the data or run an A/B test.

  • philplckthun 5 years ago

    To be fair, since this happened a while ago, it's hard to tell what to do about it now; I find it hard to draw any conclusions just due to the time that has passed. I don't use Facebook, so maybe it's just my distance from it. But it happened, and it's worth stating that this was basically a psychological experiment and not a simple A/B test, run at a company that most likely didn't have an ethics board to review it at the time.

    Other sources list a couple of principles behind the ethics of psychological research. The relevant ones being:

    - Minimise the risk of harm
    - Obtain informed consent

    Some of them do state that the latter isn't always exactly possible, since informing participants may influence the outcome.

    But the fact of the matter is that Facebook ran an A/B test that could inflict serious harm on the quality of life of the participants, who weren't aware of any research being conducted. Informed consent sounds like it'd be at least the minimum here.

    So, I'm not a psychologist, but this does sound like it shouldn't have happened in this way. There were definitely more ethical ways to run this experiment that wouldn't have involved 700K unknowing and potentially unwilling participants.

    • emodendroket 5 years ago

      Let's imagine hypothetically that sad, negative posts get more engagement by whatever metric Facebook uses, and that Facebook was paying no attention to sentiment at all and ended up putting more sad posts on feeds. Would that have been unethical? I can't really see what would be so different.

      • alfl 5 years ago

        Is it unethical to create an automated system that maximizes global unhappiness for profit?

        • erik_seaberg 5 years ago

          When a movie makes the audience sad, it wins Oscars, we don't censor it. Why should the rules be different for Facebook?

          • pizza 5 years ago

            It's hard to get a lot of positive reinforcement by interacting with like-minded others at scale through a movie. Facebook's original stated intent was to study contagion of emotion, which seems to me to suggest a multiplayer, interactive effect.

          • tobr 5 years ago

            That is a banal comparison. When a film makes you sad, you are aware of what is going on. If you are unusually sensitive to these types of emotions, you can read about the film ahead of time to see if you might want to avoid it.

            • emodendroket 5 years ago

              Do you typically go read a synopsis of the entire plot of a film, including any surprise developments, before watching it?

              • tobr 5 years ago

                No?

                • emodendroket 5 years ago

                  Ok. Then what you’re saying doesn’t make sense.

                  • tobr 5 years ago

                    Why?

                    • emodendroket 5 years ago

                      Because films’ promotions may deliberately conceal information about tragic events in the story to achieve maximum impact and nobody thinks that is unethical.

                      • tobr 5 years ago

                        This feels a lot like talking to Eliza. Your replies very vaguely connect to what’s being discussed in this thread, but there’s just no substance or coherency to the argument.

                        • emodendroket 5 years ago

                          "When a film makes you sad you are aware of what's going on" is your claim, but I don't see how that applies to something like, say, Terminator 2, whose entire goal, according to Cameron, was "making the audience cry for the Terminator," yet was not promoted as a sad film. It's hard to come up with a principled difference here.

                          • tobr 5 years ago

                            The Terminator 2 audience knows the film is an authored fictional story. It can make someone cry when they didn’t expect it, but they understand that the filmmakers are intentionally trying to provoke certain emotions. If you can’t see the myriad principled differences between that situation, and logging onto Facebook expecting to see an unfiltered selection of posts, I really can’t believe you are trying hard.

                            • emodendroket 5 years ago

                              If you'll scroll up a bit you'll see this subthread begins when I propose a thought experiment where the "natural" order yields the same results and ask if it's unethical, and people tell me that yes, they think it is. But now you're specifically calling attention to an unmet expectation of "unfiltered" posts (I'd question whether anyone has such an expectation, although the specifics of the curation are not advertised). I think this gets away from what I was talking about in the first place.

        • emodendroket 5 years ago

          Well, if so the problem goes a bit deeper than Facebook.

      • tobr 5 years ago

        Yes, that would be deeply unethical. And to make matters worse, I believe that’s a fairly accurate description of how Facebook works.

        • emodendroket 5 years ago

          So how could someone ethically run social media of any stripe?

          • tobr 5 years ago

            I don’t understand how this question can follow. Are you suggesting that social media simply must optimize for engagement and not pay attention to negative consequences?

            • emodendroket 5 years ago

              What does that mean? Like, we want the Facebook mods to delete anything that's too depressing? Sounds more dystopian rather than less... and I thought we were supposed to be worried about "duck syndrome" where everyone appears to be having great lives, making you feel bad, because you don't see the negatives (like a duck paddling underwater, see?).

          • bryan_w 5 years ago

            Exactly, we have no idea if HN is suppressing positive stories in an experiment or not. Twitter, Reddit, FB, TikTok all sort content by magic and could be trying to make you sad.

  • xg15 5 years ago

    Because "consent" is worthless if it is not informed consent. And it's neither expected by, nor in the interest of the user to be emotionally manipulated.

    By the way, the whole "intelligent" newsfeed is likely not expected either. My guess is, most people sign up to Facebook with the expectation of seeing the updates of all their friends, not of being subject to Facebook's AI games and product research.

    • hackinthebochs 5 years ago

      But "emotional manipulation" may already be happening by using the website. If the user gave consent to that, I don't see what difference the act of gathering scientific data makes. Your argument seems to be against the current bar of consent that facebook and similar sites are required to get.

      • xg15 5 years ago

        The user never consented to "emotional manipulation". If anything, the user consented to usage of an algorithmic newsfeed that ranks posts based on unknown criteria, but always with "relevance" as the overall goal. There is nothing about emotional manipulation in this.

        Emotional manipulation is also clearly not in the user's self interest, so there is no reason why they ever should give consent. (As opposed to an AI newsfeed, which could be a useful feature if it worked in the user's interest)

        • hackinthebochs 5 years ago

          "Emotional manipulation" was scare-quoted for a reason. Consent to "rank posts based on unknown criteria to maximize relevance" will incidentally, at least on some occasions, cause "emotional manipulation" of the consumers of the feed. So again, consent is already provided for the possibility of emotional manipulation. What's the difference between accidental and scientifically valid emotional manipulation such that extra consent is needed?

          • xg15 5 years ago

            > Consent to "rank posts based on unknown criteria to maximize relevance" will incidentally, at least on some occasions, cause "emotional manipulation" in the consumers of the feed.

            Why is that automatically the case and how would the user know that?

            > So again, consent is already provided for the possibility of emotional manipulation.

            Giving consent for one specific form of emotional manipulation does not give blanket consent to all forms - especially not to deliberate acts.

            Using the same logic, a surgeon could argue:

            This patient asks me to fix his broken leg. To do this, I have to operate on him and create an incision. An incision is an instance of causing bodily harm. Therefore, the patient has given consent for me to cause him bodily harm. Therefore, I now have consent to punch the patient's face and steal his kidneys.

            See the problem?

            > What's the difference between accidental and scientifically valid emotional manipulation such that extra consent is needed?

            One is accidental, the other is deliberate.

            Also, that's not just my opinion. Just read the other guardian article linked from this one and the statements linked there:

            https://www.theguardian.com/technology/2014/jun/30/facebook-...

            http://laboratorium.net/archive/2014/06/28/as_flies_to_wanto...

            http://www.maxmasnick.com/2014/06/28/facebook/

            • hackinthebochs 5 years ago

              >Why is that automatically the case and how would the user know that?

              Of course it is automatically the case. If a post can have emotional valence, but emotional valence isn't controlled for, then occasionally some sequence of posts will be more negative than average just by chance. This is accidental emotional manipulation.
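
              A quick simulation of that point (made-up numbers: assume 30% of the candidate pool is negative and a session shows 50 posts):

                import random

                # Even a content-blind, purely random feed will sometimes come out
                # noticeably more negative than the pool it draws from.
                random.seed(0)
                P_NEGATIVE = 0.3   # assumed share of negative posts in the pool
                FEED_SIZE = 50     # assumed posts seen in one session
                TRIALS = 10_000

                skewed = sum(
                    sum(random.random() < P_NEGATIVE for _ in range(FEED_SIZE)) / FEED_SIZE > 0.4
                    for _ in range(TRIALS)
                )
                # Prints roughly 4-5%: a nontrivial share of sessions skew 10+ points
                # more negative than the pool, purely by chance.
                print(f"{skewed / TRIALS:.1%}")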

              >Giving consent for one specific form of emotional manipulation does not give blanket consent to all forms - especially not to deliberate acts.

              But it's not all forms, it's a specific type of manipulation: posts in your feed with more or less than average positive or negative emotional valence. This is just a consequence of having a feed at all.

              >One is accidental, the other is deliberate.

              You have a knack for restating the facts of the case and then (figuratively) going QED. It's a little frustrating. Again, why does intentional manipulation require more consent than accidental manipulation when the process by which it happens (alterations to the feed) already has blanket consent?

              >Also, that's not just my opinion. Just read the other guardian article linked from this one and the statements linked there:

              I'm not interested in more opinions, I'm interested in arguments.

              EDIT: One of your own links references a post that details the circumstances in which the "informed consent" requirement can be waived:

              "According to Prof. Fiske’s now-uncertain report of her conversation with the authors, by contrast, the local IRB approved the study “on the grounds that Facebook apparently manipulates people’s News Feeds all the time.” This fact actually is relevant to a proper application of the Common Rule to the study.

              Here’s how. Section 46.116(d) of the regulations provides:

              An IRB may approve a consent procedure which does not include, or which alters, some or all of the elements of informed consent set forth in this section, or waive the requirements to obtain informed consent provided the IRB finds and documents that:

              The research involves no more than minimal risk to the subjects;

              The waiver or alteration will not adversely affect the rights and welfare of the subjects;

              The research could not practicably be carried out without the waiver or alteration; and

              Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

              The Common Rule defines “minimal risk” to mean “that the probability and magnitude of harm or discomfort anticipated in the research are not greater in and of themselves than those ordinarily encountered in daily life . . . .” The IRB might plausibly have decided that since the subjects’ environments, like those of all Facebook users, are constantly being manipulated by Facebook, the study’s risks were no greater than what the subjects experience in daily life as regular Facebook users, and so the study posed no more than “minimal risk” to them."

              More discussion on: https://www.thefacultylounge.org/2014/06/how-an-irb-could-ha...

fogof 5 years ago

How sad did this experiment really make people? The chart in the study says that people who saw fewer positive posts used one percent fewer positive words and about 0.3% more negative words. But on the other hand, they were seeing fewer positive posts, so maybe they were just replying to the posts they saw in a way that was natural, without their inner emotional state being very affected.

  • seattle_spring 5 years ago

    Excuse me, but this is the HN bi-hourly "Facebook is evil" post (tm). Questioning the content is against the rules.

    • pindab0ter 5 years ago

      There’s no need for this.

      • bryan_w 5 years ago

        I mean, it is a bit odd that this is from 2012 and was reposted today. It's almost like it was posted because there hadn't been a negative story in a while.

mgraczyk 5 years ago

As a former FB employee who worked on ranking, I just want to point out how absolutely tiny the effect size is here when we think about normal human emotional experience. On a per person basis, this experiment had an effect that is comparable to other potential changes like slightly altering the padding on the "like" button, or showing 1 extra news post per day, or sending an extra notification once per month.

Facebook didn't make people sad. It made a population post things with slightly more negative words, only significant when measured across hundreds of thousands of users.

yeuxardents 5 years ago

This was the last straw for me; that's when I permabanned Facebook from my life.

This was an unauthorized, unguided, unethical, mass psychological experiment on human beings. Anyone involved should have gone to jail for crimes against humanity.

  • jbverschoor 5 years ago

    Luckily we don't have the dependency on FB anymore, since nobody forces "login with Facebook" these days.

  • neonological 5 years ago

    No, it's not a crime against humanity; let's not go that far, and let's not unreasonably promote cancel culture by burning the witch at the stake before giving the issue some deep thought.

    It's a much more complicated issue than you describe. Many corporate organizations have been selectively distributing information to deliberately influence sentiment for decades. Facebook is not the first, nor will they be the last.

    Restricting any entity from doing this is a complicated issue that has to do with restricting freedom of speech.

    Fox News comes to mind when I think of another organization that does this deliberately. To add more complication to the issue, one should consider that the success of Fox News can also be attributed to its customers: people. People choose what they want to hear, and many people prefer news viewed through a biased, right-leaning lens.

    Just like how the above poster wants to paint this issue as a crime against humanity, I think part of the problem is that on some level we all want to lie to ourselves. We want to view the world in a very specific and certain black and white way.

    • xg15 5 years ago

      Ok, so clandestine manipulation is freedom of speech. Is that what the founding fathers intended?

      • neonological 5 years ago

        No, but when you restrict clandestine manipulation, you restrict freedom of speech. Giving the government such power gives them the opening to use the "clandestine manipulation" excuse on other things as well.

        Like I said, not so clear cut. It's a complex world we live in. But I get why someone would vote me down. It's easier to see everything in terms of black and white. The truth, however, is grey.

        Take freedom of religion for example. Freedom of religion allows cults like Scientology to form. If we allow the government to have the power to eliminate Scientology what kind of power does it give the government over Christianity or Buddhism?

        Personally my view is that the cost is worth the benefit, because the alternative is much worse.

        • xg15 5 years ago

          > Giving the government such power gives them the oversight to use the "clandestine manipulation" excuse on other things as well.

          Like for example?

          > Personally my view is that the cost is worth the benefit, because the alternative is much worse.

          A comfortable view if the cost is mainly paid by others.

          • neonological 5 years ago

            Like for example you’re clandestinely manipulating me right now. Everything you say is manipulation of me towards your opinion.

            And you say it’s a cost paid by others? Can you give me an example of how I don’t pay the cost? Right, you can’t, because you didn’t know that I’m a user of Facebook. Logically it is impossible for you to know whether I “pay” these costs or not.

            Basically, without knowing who I am, you made up a lie out of thin air. There is no other logic that can explain what you just said, as you know nothing about me. Is lying in itself not clandestine manipulation? Are you not lying in an attempt to clandestinely manipulate me and others reading this towards your opinion?

            You obviously have no skin in the game; you don’t even use Facebook. You’re just harping on about it because you don’t give a shit about freedom of speech at all.

            Is the paragraph above true, or a manipulative lie? Did lying like that make you angry? What about that garbage lie you made up about me? Is this the same brainless fury with which everyone is reacting to Facebook “experimenting” on us, without deeper consideration?

            Does me or you being angry about manipulative bullshit give either of us the right to clamp down on our right to say it? Should I have the right to call the police on you for making up shit and lying?

            No. And honestly, outside of the context of “giving you an example”, I suggest you check yourself. The drivel coming out of your mouth is deliberately offensive. “A cost paid by others” is an outright lie made up for the express purpose of manipulation and inciting anger. Emotional manipulation is no less evil than experimental manipulation.

            Careful what you do to your own freedom of speech rights... If you succeed, one day your ideas will be muzzling your own manipulative vitriol like a dog.

            Perhaps I could call out to dang and have him come over to moderate this thread. No, let’s make it so that the government can moderate Facebook, which in turn means they can moderate you.

            What do you think is the justification the CCP gives themselves when they moderate the entire internet in China? It’s the same justification you’ve given me, they seek to moderate clandestine manipulation.

  • mgraczyk 5 years ago

    The outcome of this experiment is that for a few days, around 150k people posted ~0.3% more negative words on Facebook.

    Is that something engineers and data scientists should be imprisoned over? I think most US politicians cause bigger effects every time they tweet.

    • yeuxardents 5 years ago

      Yes. Is it known whether anyone affected committed suicide? Was that a part of the test? Did they take that into account? Did they ensure there were signals so that, if that could happen, they would lay off the negativity for those people? Any university psychology course on ethics shows this is mass experimentation without proper oversight.

      I, for one, do not believe that a bunch of twats in Silicon Valley give a crap about ethics. Do you?

  • Google234 5 years ago

    Would you be happier if FB censored every negative post on their site? I’m sure there would be less unhappiness but would it be “better”?

  • hackinthebochs 5 years ago

    People keep using the words "psychological experiment" as if they prove the point. No, it's just a more emotional description of what happened. Why, exactly, does controlled manipulation to gather scientific data require extra consent compared to the random manipulation they already have consent for?

    • yeuxardents 5 years ago

      Because this is clearly altering emotional states. Is it known whether any individuals became more sad than others and committed suicide as a result? No? Well, that's because studies like this are required to go through IRB and review processes to ensure outcomes like that DO NOT HAPPEN. That did not happen here.

ACow_Adonis 5 years ago

I admit confusion. Isn't unsolicited experimental psychological manipulation without consent just another word for most modern marketing?

I mean, don't get me wrong, I don't like it and try to exclude it from my life and my family's life, but there's pretty wide acceptance socially for this type of behaviour.

I would have thought the beauty industry would be an old perpetrator that should generally be investigated. Going to shut that down?

And instilling fear and distrust in a population without their consent, for personal gain, is about as old as politics itself?

Isn't this practically mainstream media behaviour?

FOMO? Status anxiety? Conspicuous consumption? I don't see why Facebook should be singled out for society-wide, mandated and culturally supported practices.

niyikiza 5 years ago

Perhaps some A/B testing at The Guardian showed that headlines condemning tech giants are more clickable.

wruza 5 years ago

Honestly, that sounds like “this drug dealer is bad because they make people sad while they wait for a dose; I quit, clean for 2d 6h 19min”. No doubt Facebook is evil, a corporate monster, etc., but hey, what about stopping being a junkie? The problem is not someone experimenting with your unhealthy addiction; it is the unhealthy addiction itself.

At the early stages of the internet, most of the content was hidden, waiting for you to actively discover and bookmark it. Like you do with good places in your town - you find one, add it to your address book and visit occasionally. It was a slow process, full of findings, enjoyment and variety. Now everyone seems to sit at their mailbox, desperately waiting for another pack of junk mail to arrive. Facebook is just that - a postman who chooses from a variety of crap to push into your inbox. It doesn’t change lives unless people are too lazy to live their own lives.

  • xg15 5 years ago

    > The problem is not someone experimenting with your unhealthy addiction, it is your unhealthy addiction.

    Why is it my unhealthy addiction if a multi-billion dollar company uses all tools at their disposal to push me into said addiction?

    • wruza 5 years ago

      I mean, it’s not their addiction and not their problem. Everyone tries to push/trick/force you into it, from drug cartels to Facebook. You can’t do much about either of them, for a variety of reasons, thus it sounds reasonable to perceive it as something that you should refrain from by yourself.

      And you can do that even without Quitting Facebook: just bookmark a URL that has no feed in it. Personally I’m not using Facebook, but e.g. YouTube always tries to push me into viewing something. But I simply do not scroll the feed, and go to subs or search instead when I want something new. Only users’ laziness allows these companies to take control, and only a user is in control of their laziness. All of those who complain about the modern internet forget that the old internet required you to “surf” it very actively. FAANG exists as a multi-trillion-dollar industry only because people are inert content junkies with a thumb.

      • xg15 5 years ago

        > FAANG exists as a multi-trillion-dollar industry only because people are inert content junkies with a thumb.

        Yes, but that's basic biology, not anything that has to do with strength of character or freedom of choice.

        "People" are also into heroine. Should we allow to build a multi-trillion dollar industry on that?

        • wruza 5 years ago

          My first impulse was that it’s not the same effect, but thinking about it some more, I see what you mean. The main problem is convincing everyone that FB actually sells a drug and not entertainment/news.

          Otoh, if you find out that you’re selling drugs in your cookie shop, should you stop? Because even non-targeting news sites do exactly that in this analogy. They post articles that “everyone” wants to see. Where is the borderline between “free coffee with breakfast” and “heroin store Black Friday”?

jancsika 5 years ago

Is it against the rules for me to use HN as a dating site? I'm going to pen-test it:

I enjoy cuddling, long walks on the beach, and services that do not run social experiments on me like something out of a cheap movie plot about a mad-scientist.

to contact this user please dial "jancsika" on your rotary phone now

  • neonological 5 years ago

    This is a psychological experiment on me and all the users on this site and I am now outraged.

  • alfl 5 years ago

    You asked our permission so unfortunately this experiment is not comparable with Facebook’s.

    • jancsika 5 years ago

      Fortunately and wittingly it is not comparable!

xg15 5 years ago

> But the issue of consent also doesn't quite explain why we're comfortable with some types of uninformed research on us, but not others. Like almost every major tech firm, Facebook practices A/B testing

Are "we" actually comfortable with that practice? (And who is "we" in the first place? The general public? Tech people? Journalists?)

It seems to me, this is simply something the industry does because it can get away with it - and most users don't object because they don't even know it's happening.

Why tech journalists see the Facebook thing as objectionable but A/B tests without consent as perfectly fine might be a question worth discussing.

sidcool 5 years ago

Facebook has indeed done much harm to individuals and to society as a whole. But at the same time, their tenacity in continuing to make money is impressive. Villains too have some quaint evil power.

jimbob45 5 years ago

I don’t know anyone that uses Facebook anymore that likes it. Everyone I know who uses it says, “I’m thinking I should delete it soon”. Universally, the number one criticism is, “All I ever wanted to see are my friends’ posts and every update shows me less and less of those”.

Does anyone actually know people who avidly use and love Facebook? It seems like Facebook is like the Christian church: everyone says they go every Sunday, but it’s really more like once a year at this point.

  • alistairSH 5 years ago

    Same boat here. Wife deleted her account. I keep mine only to use Marketplace (currently the best place to sell bicycle parts locally). I'll probably delete my account once I do my next round of parts-bin purging.

dhosek 5 years ago

9 years later and still a whole lot of straw left.

luckylion 5 years ago

"News media deliberately make people outraged. This ought to be the final straw".

I don't have issues with the study in general. You do want to know whether and how you can influence people in a positive or negative way, especially if you want to avoid it. There's really no other way to find out than to study it. They should've gotten clear consent for participation in that study, but that's about it from my point of view.

  • seoaeu 5 years ago

    "There was nothing wrong except for the lack of consent" is really rather missing the point... In other contexts that's the difference between acceptable behavior and a felony.

    • luckylion 5 years ago

      I don't understand the article (and most of the comments) to be about the lack of consent, but about the act itself: deliberately making people sad.

      In most medical studies, that's not being done. You might give someone placebos and watch what happens when you don't give them the medication you want to test, but you're not giving people cancer to see what helps best.

      I'm not sure consent would've made a fundamental difference to that reaction to it.

vmception 5 years ago

Do we know what specific week that was? Would like to see if it affected trading patterns or the VIX.

imwillofficial 5 years ago

What if somebody had killed themselves? This type of nonconsensual experimentation is criminal.

alfl 5 years ago

When they announced that they had this capability a couple of years ago I deleted my account.

cryptoz 5 years ago

Article published in 2014, not 2012, I think, btw. The event happened in 2012.

chris_wot 5 years ago

That was when I started to realise just how bad they were.

jb775 5 years ago

I noticed during the election that whenever I glanced at the fb "watch" video section, it was videos of guys getting into fist fights, or videos containing disturbing violence. I don't ever watch or search for videos like this, so it's not like it was selected based on my watch history or something.

At the time, I figured it was a psychological trick to suck me into viewing more ads since violence has that "can't look away" nature to it...but now that I think about it, they could have been intentionally stirring up angry emotions within the general population during the election cycle. Anyone else notice this?

  • bob33212 5 years ago

    I rarely use Facebook, but when I do they show me some woodworking projects. I have never said anything about woodworking or joined any woodworking groups on Facebook. They are just making a guess that someone like me would find those interesting.

    • beforeolives 5 years ago

      You're breaking some data scientist's heart by calling their model "just making a guess".

  • tittenfick 5 years ago

    Social media algorithms are designed to show you things which make you upset because that type of content is highly likely to "engage."

    • dbtc 5 years ago

      Aren’t they designed to optimize for the most engagement, in whichever way they can (which turns out to be with upsetting content)?

      Or are people actually selecting specifically upsetting stuff?

      • tittenfick 5 years ago

        Yes, they are designed to select for most engagement, regardless of the content.
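
        To make "regardless of the content" concrete, here's a toy sketch (my own illustration, not Facebook's code): the ranker below never inspects sentiment at all; it just sorts by a predicted-engagement score, so upsetting posts rise only insofar as they happen to score high.

          from dataclasses import dataclass

          @dataclass
          class Post:
              text: str
              predicted_engagement: float  # e.g. output of a click/dwell-time model
              sentiment: float             # present in the data, never consulted below

          def rank_feed(posts):
              # Pure engagement maximization: content-blind by construction.
              return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

          feed = rank_feed([
              Post("cute puppy", 0.2, +0.9),
              Post("street fight", 0.8, -0.7),  # surfaces first on engagement alone
              Post("wedding photos", 0.4, +0.8),
          ])
          print([p.text for p in feed])  # ['street fight', 'wedding photos', 'cute puppy']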

    • secondcoming 5 years ago

      It's not just algorithms, MSM do this too. 'Fear sells'

      • smsm42 5 years ago

        If it bleeds, it leads. That was invented long before social media and ML social manipulation algorithms.

  • shadowgovt 5 years ago

    Much of that content is based on other people's watch histories.

    It's similar to the process of trending topics on Twitter (and one of the reasons that space is such a garbage fire).

  • mgraczyk 5 years ago

    I worked at Facebook on ranking systems at that time. No, facebook was not trying to intentionally provoke any emotional response. They were trying to increase product metrics like number of videos watched, subject to constraints on the % of watches that include things like violence or bullying. Ultimately the goal is to make the Facebook product good so they can continue selling ads.

    Facebook would make all its users happy and positive if that were possible to do. Nobody at Facebook is trying to manipulate you, except by trying to show you content that keeps you coming back to Facebook.

    • ajdude 5 years ago

      > Nobody at Facebook is trying to manipulate you

      Except when they’re running psychological experiments to purposely manipulate you into being sad

  • fenderbluesjr 5 years ago

    I watch fight videos all the time. I enjoy them, and I know many guys who do. It doesn't surprise me at all that it might suggest them for you if you are a young man, and I don't think it has anything to do with the election.

bserge 5 years ago

Yes, indeed, we should all learn not to use our emotions in online discussions. People will say and do things that they would never say in real life, even under their real name, because of the disconnection from the actual person hearing/reading/seeing that.

Funnily enough, this is how I quit Imgur for good. It's amazing. I posted something that I believed was right and got downvoted to hell.

So I started asking people "why, why do you downvote?" and got mostly laughs, memes and people calling me stupid. Except one person, who said "you care way too much about this". Indeed, I did. Thank you random person!

Not sure why but it affected me more on Imgur. Maybe it's the length of the comments? The memes that encompass a thousand words, as they say? Regardless, I just deleted my account and never went back. It's great.

Still trying to quit Reddit and HN, but they're good resources if you ignore all the stupidity. Imgur is bottom of the barrel social media, but it was fun.

And of course, this is used by various media outlets, has been for a long time.

It's all about eliciting emotions, which come from the primitive part of the brain, bypassing any advanced conscious analysis and engaging the impulse to do either what you're told (good for sales), or the opposite of it (good for spreading a message), or something in between. Either way, it's a response that you will remember and will most likely act on.

I forgot what I was trying to say. Somehow Facebook never got me, it's just a useless platform aside from contacting people.

  • towergratis 5 years ago

    It does affect you. Because you get the feeling you are "part" of the community.

    I used to be very active on HN until I got into a heated discussion with someone, and since then my HN "flag" privileges have been stripped away.

    I know I shouldn't care, but I couldn't help it. Now I am just lurking on HN, rarely replying with this new account over Tor, and I care as little about "karma" as I do about "producing value" with my replies to HN.

    • doublerabbit 5 years ago

      Feel the same.

      HN is exactly the same as Reddit, just a different crowd and less edge. It's nice to think that you can have a civil discussion on the HN platform, but you can't. The whole internet karma/points thing is a flawed system.

      For HN: you should have to give a reason for downvotes. To give someone downvote permissions and then allow them to downvote something out of bias isn't fair.

      • NateEag 5 years ago

        > For HN: you should have to give a reason for downvotes. To give someone downvote permissions and then allow them to downvote something out of bias isn't fair.

        Because no one would ever lie or put noise in the "why I downvoted" field.

        • doublerabbit 5 years ago

          And in that case they don't get to downvote. Problem solved.

          See, downvoted already; point proven.

          • anoncake 5 years ago

            You can't detect lies.

        • shadowgovt 5 years ago

          I believe Slashdot experimented with meta-moderation for that issue, but I never actually found out what came of that system.

stadium 5 years ago

The ultimate propaganda machine.

chatmasta 5 years ago

Pre-2013 was a wild time on the internet. It seems like that’s when a lot of its nasty underbelly went mainstream.

The Snowden leaks were a turning point, I think, when people realized “the NSA and corporations are spying on you” wasn’t just a tinfoil hat conspiracy theory.

It’s mind-blowing to think that most major sites on the internet (including Amazon) were not using HTTPS at that time. It’s possible Amazon used it on its payment pages, but it certainly didn’t for much of the site. Tools like Firesheep existed for years before anyone started to care that everyone from your coffee shop to your ISP could read your plaintext traffic.

Now 9 years later we’re finally about to fix DNS (albeit with protocols hostile to reverse engineering). Then hopefully up next is fixing BGP before the bad guys realize how absurdly vulnerable it is.

All this is to say, Facebook could maybe be excused for this experiment, because our standards for “that’s messed up” were already so much lower in 2012 than they are now.

  • cryptoz 5 years ago

    There is no excuse for this. This was a morally bankrupt and exceedingly awful thing to do. I was absolutely furious when this happened and made it a life mission to help other people know this was going on. 99% of people don't know this event happened, still.

    I don't trust Facebook one bit. I assume they do this kind of illegal 'experiment' all the time now, and the only change is they stopped bragging about it.

    So gross, this whole thing, it makes me mad there were no real repercussions from this. Worse yet is how many people actually support this unethical and sick experiment from Facebook.

    Lots of people don't see anything wrong with it, even technical people like those on HN, and that is just as disturbing to me as Facebook actually carrying it out.

    ---

    Edit: I'm now blocked from posting on HN (for a while I guess?), so here is my response to a poster below.

    @antiterra says,

    > That you’d prefer they either stayed ignorant or unconcerned to what changes they made and their effect is exceedingly awful and morally bankrupt.

    I have no idea how you reached the conclusion that I would prefer either of those things. There is a massive chasm between 'perform illegal, unethical mass psychological experiments' and 'I don't care about stuff'.

    ---

    @chatmasta says I'm not blocked, but I am. I cannot reply to you either. It says I'm posting too fast and won't let me submit any content.

    • chatmasta 5 years ago

      > I assume they do this kind of illegal 'experiment' all the time now

      Oh 100% they do. But hey at least they've contributed Presto back to the community!

      > Lots of people don't see anything wrong with it

      Surely that's not true. In 2021, Facebook is largely hated by people on all sides of all political spectrums. [Remember when people thought Zuck would run for president? Lol!]

      (Edit:

      > I'm now blocked from posting on HN

      You're not blocked; you just can't reply to someone within a minute of their post past a certain nesting level.

      (Edit edit: Oh guess you're right! That's annoying. :/ But I am enjoying our conversation via edit nonetheless!)

      )

    • antiterra 5 years ago

      That you’d prefer they either stayed ignorant or unconcerned to what changes they made and their effect is of questionable morality.

      Edit re your edit response:

      It is absurd to me that deciding whether to show more or less ‘I got a new car’ posts based on the resulting behavior is some kind of sinister event. Even the author of this article struggles to articulate why this isn’t just standard A/B testing.

  • jwilber 5 years ago

    It sounds like you’re trying to equate the ever-changing adoption and development of web standards with longstanding ethical expectations in research.

    Never mind the fact that the former aren’t direct actions on customers (Amazon wasn’t withholding HTTPS from only certain users), while the latter is a direct, concentrated, understood action on customers.

    The two aren’t similar at all.

    • chatmasta 5 years ago

      No, I'm comparing the complete disregard for users' privacy / well-being with the status quo at the time, which was "major payment portals don't even implement HTTPS."

      If you want to document Facebook's ethical lapses, you could go all the way back to 2004. But it's only in the past few years that people started actually caring.