This article is written with compassion, and my heart goes out to men and women who want to both thrive financially and enjoy a loving family. The economy has morphed in ways that aren't good for many men. Society is not providing the nudge it used to towards marital fidelity and having children.
A lot of those "nudges" were incredibly bad for women. Removing them isn't advantaging women; it's removing disadvantages. Restoring them is going to look more like "force" than "nudge".
The article doesn't mention adoption. Perhaps that would be the first step.
I concur that it's much better to have two parents, but they can double the size of their potential partner pool by seeking a male companion. If they're straight, they don't have to have sex; they can seek female sex partners elsewhere. And that way we can solve the problem for two men at once.
The article doesn't mention adoption. Perhaps that would be the first step.
Unfortunately, adoption is an exhausting process that would suck the life and will to live out of you. Not saying they should be giving kids away easily to whomever, but I know several families personally that gave up during the process …
[flagged]
Can you please stop posting religious and/or ideological battle comments to HN? We're trying to avoid getting stuck on those topics here.
Also, you're linking to your site too much in comments. That will eventually cause HN readers to classify you as a spammer (we're already getting complaints about this), and it will also cause HN's software to classify your account as "primarily promotional" (see https://news.ycombinator.com/newsguidelines.html) and start filtering your site.
You're welcome to participate on HN but it needs to be within the site guidelines, which means using the site for intellectual curiosity, not self-promotion, and mostly not using it for hot-button flamewar topics, even though those topics may be of great social importance.
Btw, from reading your posts I don't have the sense that you're intending to use this site for self-promotion, but it's going to land that way with the community here because you're breaking the conventions about how to participate.
(p.s. I'm an admin here, in case that wasn't clear)
Dang, I'm not OP, but in fairness, wouldn't it be more appropriate to just remove the controversial article/posts driving this altogether, rather than police one side of a discussion's comments, which will inevitably come up in any rational discussion of this subject matter?
I've seen a few of these moderation posts, and I don't mean to be critical, but you end up contradicting yourself over time, in what seem like value-based judgments.
The HN rules aren't written or applied consistently with proper definition, and some guidelines contradict others. At their core they lack unique definition, and so become moving goalposts that no reader or commenter can follow consistently while continuing to participate.
To follow the guidelines consistently would only be possible by never participating. They fail the basic requirements for a consistent rational basis, which is needed to actually do as you ask in the first place.
It's circular without unambiguous definition.
It's hard for me to see this as ideological when there is literature and evidence (though not specifically referenced here) showing that psychology impacts decisions, albeit indirectly. Indoctrination is a real thing, a real problem even, and is most common in totalitarian states.
I can appreciate wanting to keep a community clean, and the large amount of work that necessarily goes into that. Having personally moderated other communities, I've found the rules need to be consistent and non-contradictory if you want them followed. Otherwise you just create an endless pile of extra work for yourself, mistakes in moderation happen, and you get a bad reputation when they happen often.
When the article misinforms, or is biased and thus gets things wrong through omission, isn't it the community's job to provide counterbalancing discussion so that false or misleading information is discarded by the thoughtful individual? Without such counterbalance you get a positive feedback system rather than a negative feedback system, which lends itself more readily towards chaos and delusion.
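The positive-vs-negative feedback distinction can be made concrete with a toy iteration (purely illustrative; the starting point and gain are arbitrary): a negative feedback loop corrects deviations back toward a reference, while a positive loop amplifies them.

```python
def step(x, target, gain, positive):
    """One update of a toy feedback loop around `target`."""
    error = x - target
    # Negative feedback pushes the error down; positive feedback amplifies it.
    return x + gain * error if positive else x - gain * error

x_neg = x_pos = 1.5  # both loops start slightly off the reference of 1.0
for _ in range(10):
    x_neg = step(x_neg, 1.0, 0.5, positive=False)
    x_pos = step(x_pos, 1.0, 0.5, positive=True)

print(f"negative feedback settles near the reference: {x_neg:.4f}")
print(f"positive feedback runs away:                  {x_pos:.2f}")
```

After ten steps the corrected loop sits near 1.0 while the amplified one has drifted far away, which is the sense in which uncorrected discussion "lends itself towards chaos".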
As an example, I seem to recall reading a comment where some people promoted ingesting known toxic chemicals as healthy, which you said did not merit moderation.
If these conversations are a problem, removing the linked article in its entirety seems a better approach than squelching only part of the conversation (which in my opinion has validity).
Moderators really don't have the time to research claim correctness, and targeted actions outside egregious violations run the risk of promoting the censored topic.
There can be no basis for arguments of value-based bias, or for the distorted reflected appraisal that comes with sentiment manipulation, if the article is removed wholesale.
There is a body of literature showing that birth rate is directly impacted by economics, which is not just monetary but includes the personal costs that go into decisions. Psychology and indoctrination affect this as well, and many of these subtle aspects go unrecognized, or get misclassified as ideological even when they have a basis in objective external measure, i.e. in reality.
While I wish that many of these things were fanciful, unfounded opinion, a number of experts have written in-depth case studies on indoctrination, torture, cults, and brainwashing from first-hand accounts. These credibly back the indoctrination arguments and are what prompted the WHO to change its definition of torture to include psychological harm through maladaptive behavior induction.
Robert Lifton and Joost Meerloo wrote many books on the subject.
If you cut off communication that would normally raise the community to a higher understanding, all conversation degrades and falls instead to the stagnant lowest common denominator, creating more work in moderation.
That's what I've seen over the years.
I can't get into Robert Lifton or any of that, but I can answer this:
> wouldn't it be more appropriate to just remove the controversial article/posts driving this
No, for two reasons: (1) HN commenters are expected to post thoughtful, curious comments and learn from each other, not fight ideological causes; and (2) it wouldn't work anyway, because there's no agreement about what the "controversial articles" actually are.
> rather than police one side of a discussion's comments
We don't moderate just one side, and if you think we do, I suggest you track my moderation comments more closely. It shouldn't take long to see that we're moderating commenters who break the site guidelines, regardless of which side of an argument they're on.
That doesn't mean we moderate everyone who breaks the rules, but that's not because we're only moderating one side, it's because we don't see everything that gets posted—there's far too much for us to read it all.
I can see your point about defining controversial topics in detail, but guidelines to set expectations are possible. No politics, no hateful or abusive posts, etc. It's been done before.
I've seen moderator actions on here where people were warned for what amounted to correcting false information under the eschew flamebait rule, like they should have known and let false information stand uncontested. Not just warned verbally, but downvoted too, which imposes a cost on keeping any registered account active. Too many downvotes prevents karma buildup for extended periods or can get your account banned.
You could post guidelines on topics that are unacceptable. Other places have: you set the expectation and remove the people who don't follow it.
> We don't moderate just one side, and if you think we do, I suggest you track my moderation comments more closely.
I can see your point if you only consider the direct effects of moderation acts, but your moderation system doesn't primarily use direct moderation; it's primarily indirect by structure. Direct moderation requires guidelines that are consistent, uniquely specific, and follow rational principles and method. Indirect moderation works by inspiring repression and fear.
You seem to neglect the indirect but prominent chilling effects caused by the overall use of an "anaconda in the chandelier" strategy, where the lack of a source of truth creates an echo-chamber effect: anything your actions signal against, even unintentionally, gets silenced, and those discussing it punished.
This is a common structure under repressive regimes, used by design for its network and fear effects; it touches on subject matter from Robert Lifton's and Joost Meerloo's work on the totalitarian state and mind. Don't worry, I won't go further into this subject than that.
A sound rational community is largely defined by its capability to analyse the merits of a discussion containing disagreement, without threats of retribution or costs imposed for expressing dissenting opinion or unfavorable discussion (so long as it's not abusive or destructive).
In fact disagreement is expected before agreement in such valuable discussions, and when allowed, the process elevates both parties' understanding even when an impasse is reached.
It is not the same in collectivist, repressive communities: any disagreement is an attack on the group, and the person expressing it is an enemy to coerce and impose costs on, be they personal or more tangible.
In such systems a privileged agent signals, and thus controls, and the mob takes care of the rest. What's acceptable always changes in ways unknowable ahead of time, and it's a crazy-making spiral of guessing and reading tea leaves.
The rules in such cases are designed with the intent that they be broken, so they can be applied arbitrarily, and it turns into a race to the bottom, a spiral of diminishment.
As a moderator, when you signal ambiguously to the community that some things or views break the rules, without being uniquely specific, any point of view along similar lines, regardless of its rational soundness or validity, can no longer be discussed without fear of retribution.
The guy had religious overtones, but indoctrination is a real thing and worthy of discussion with regard to childlessness, especially when the alternative of no discussion in this context is the self-extinction of humanity. The replacement birthrate is 2.1; we are at 1.44 if I'm up to date on the metrics. Within two generations (40 years) we'd lose roughly half of the current population, and it's still decreasing.
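A quick sanity check on that arithmetic, under a deliberately crude cohort model (each generation scales the population by TFR divided by the 2.1 replacement rate, ignoring age structure, longevity, and migration):

```python
REPLACEMENT_TFR = 2.1  # total fertility rate needed to hold population steady
CURRENT_TFR = 1.44     # the figure quoted above

# Rough model: each generation's size is the previous one's,
# scaled by how far fertility falls short of replacement.
per_generation = CURRENT_TFR / REPLACEMENT_TFR   # about 0.69

remaining = per_generation ** 2                  # after two generations
print(f"remaining after two generations: {remaining:.0%}")
print(f"lost after two generations:      {1 - remaining:.0%}")
```

Under this toy model the two-generation decline comes out near 53%, closer to "roughly half" than to two thirds; real projections depend heavily on age structure and migration.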
How many people are talking or commenting about indoctrination or anything in that article now? No one. How many commented after you posted? No one.
That's how these repressive structures work: they operate indirectly, so the person signaling can plausibly say "we don't do this" while the mechanics of the structures in place do it for them. It's misleading, and could be construed as deceitful if one discarded good faith.
It's an inherent part of the insidiousness of such structures.
No one commented afterwards because of the fear of retribution, not even to correct omissions made by that article.
It may not be you doing the extended punishing on the moderation side, but the people you've signaled to are, and they punish like any thuggish collective.
Case in point: even me drawing your attention to this, to have an honest conversation with you and point out these observations in a non-combative way, has already cost my account. According to the karma score shown on my client, I'm down 5 points, or 8% of what would be needed to ban my account, for letting you know about a problem that you seem to have overlooked.
How can you ever get objective feedback to know about a problem from anyone, if it costs those same people dearly to give it to you?
Anyone who doesn't agree, even marginally, gets downvoted or squelched based on your signaling, which is a debit on their karma bank. How can you talk about anything in such environments?
When this uses up the karma bank and goes too far, your systems exclude those people from the community by banning them. It's all indirect, but the outcomes reflect the insidious design. Few upvote even thoughtful posts unless there is universal agreement.
These are problems that destroy community, and while it may seem like things are working fine right now, these types of systems tend to have a hollowing-out effect: you don't notice the pitfalls eating away underneath, undermining the structural integrity, until it's too late to do anything.
A system is what a system actually does, not what people claim or intend it to do.
> guidelines to set expectations are possible. No politics, no hateful or abusive posts, etc.

"No politics" is not possible. We experimented with that once, and found that it made the site more political. I'll try to dig up a link or two about this. (Edit: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)
As for "no hateful or abusive posts", hateful is certainly prohibited by the guidelines. Abusive also, but that's perhaps a vaguer term.
> warned for what amounted to correcting false information under the eschew flamebait rule, like they should have known and let false information stand uncontested
I'd like to see some specific links where we allegedly warned or banned someone just for correcting false information. That does not sound like us at all, and is certainly not what we intend. In most cases of this nature, it turns out that the commenter was breaking the site guidelines by, e.g., adding snark or name-calling to their comment. It's things like that which cause us to post warnings. Simply correcting false information in a neutral way is fine.
The rest of your comment gets harder for me to follow, and when you talk about us using insidious repressive anaconda chilling-effect tactics, I find it harder, because there's nothing in my experience (either outer or inner) that I can easily relate this to. It's not that I don't want to engage or reply, but I need something more specific than that.
I'll see if I can't find a few of them so I can link them directly then. I certainly remember seeing a few.
As for the chilling effects, the problem with these things has always been low visibility, since you have no direct reference to compare against.
Their nature is still an active area of research; the going understanding seems to be that it's a distortion of conformity to norms, where views are distorted around the actions you signal under certain types of structures (uncertainty and ambiguity).
All you may see locally is skewed activity, such as lower activity or less or no discussion using the words (regardless of meaning or context) of posts signaled negatively in this way, and very little disagreement.
You'd see more collective downvoting of neutral subjects and anything related to or touching on the subjects signaled. Not enough for you to notice anything, though.
In many respects it follows a structure similar to education and reeducation under Mao, called the "hot potato"; Lifton covers the details.
Objectively, you'd see higher downvotes for anyone who did try to discuss these things, and higher squelch marks; even tangentially related neutral positions may lose speaking power/karma.
Unless you are running statistical analysis on a basket of subjects with a solid baseline, you likely wouldn't notice a thing, and that's by structure, which is why it's insidious.
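To make "a basket of subjects with a solid baseline" concrete, here is a minimal sketch of the kind of analysis meant; all topic names and counts are hypothetical, and a real study would need many more topics, longer windows, and controls for news cycles:

```python
from statistics import mean, stdev

# Hypothetical daily comment counts per topic, before and after a
# moderation signal.
before = {
    "fertility":  [42, 39, 45, 41, 44, 40, 43],
    "databases":  [30, 33, 29, 31, 32, 30, 34],
    "compilers":  [18, 20, 17, 19, 21, 18, 20],
}
after = {
    "fertility":  [22, 19, 24, 20, 21, 23, 18],   # sharp drop
    "databases":  [31, 30, 33, 29, 32, 31, 30],   # unchanged
    "compilers":  [19, 18, 20, 17, 21, 19, 18],   # unchanged
}

def drop_score(pre, post):
    """Relative drop in mean activity, in units of pre-signal noise."""
    return (mean(pre) - mean(post)) / stdev(pre)

for topic in before:
    score = drop_score(before[topic], after[topic])
    flag = "  <- possible chilling effect" if score > 3 else ""
    print(f"{topic:10s} drop score: {score:5.1f}{flag}")
```

The point is simply that a drop is only meaningful relative to each topic's own baseline noise, which is why casual observation misses it.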
Certain structures of elements short-circuit perception and take advantage of psychological blind spots, while reducing our ability to recognize and react to the problem.
I don't want to get too theory-laden, but here's an example you should be able to observe at any retail store.
If you go to a retail store, there are indicators of what the store owner values or doesn't value. If you see cameras everywhere gratuitously watching everyone, and things are locked up, even the cheap items, you get a sense that the store owner or operator doesn't value or trust their customers. This observation uses reflected appraisal, and once it's made, we may choose to ignore it, but it still impacts our reasoning in our choices, in order to remain consistent beings.
From the owner's perspective, they may need it to collect data for insurance, or prevent theft, but what matters is what's reflected.
These judgments largely happen beneath critical thought and perception. We can examine parts of them critically afterwards, but not in any way that will really change our judgment, because of a consistency blind spot.
Internally our psychology morphs to remain consistent, and that's a bias we all have. When you see car salesmen seeking agreement through carefully constructed phrasing, they are using this blind spot to influence you to buy a car. It's predictable: unless you answer in one of a few specific ways, they know you'll buy if there are no other external factors.
Even if what we see was distorted intentionally, we won't notice.
An action within the structure described creates a distorted reflected appraisal centered on that action, keeping readers and commenters guessing. It's not something people will consciously notice unless they've been trained to, but it shows up in statistically significant ways.
It's impossible to determine its exact measure, because it's impossible to determine a reference measure of what people would have said or done if it didn't exist. It was significant enough to show drops in views of certain Wikipedia pages (those related to the NSA/PRISM disclosures) during a time of high news coverage.
I've linked a few relevant articles that are fairly well respected in the academic literature, if you are inclined; hopefully that is sufficient to demonstrate it's a real thing. It's abstract and largely unnoticed, but the effects are not.
As for how it can become destructive, it usually comes down to the evaporative-cooling effect in social networks, which follows a much older understanding from volunteerism: volunteers stop volunteering when they are coerced (or have costs imposed on them).
This is quite common with entities like HOA boards of directors, where directors who volunteered may resign under the social pressure and harassment of neighbors. There were some studies done in the '80s, iirc, that confirmed this, but none stand out in my mind enough to link here.
This article is written with compassion, and my heart goes out to men and women who want to both thrive financially and enjoy a loving family. The economy has morphed in ways that aren't good for many men. Society is not providing the nudge it used to towards marital fidelity and having children.
A lot of those "nudges" were incredibly bad for women. Removing them isn't advantaging women; it's removing disadvantages. Restoring them is going to look more like "force" than "nudge".
The article doesn't mention adoption. Perhaps that would be the first step.
I concur that it's much better to have two parents, but they can double the size of their potential partner pool by seeking a male companion. If they're straight, they don't have to have sex; they can seek female sex partners elsewhere. And that way we can solve the problem for two men at once.
The article doesn't mention adoption. Perhaps that would be the first step.
unfortunately, adoption is an exhausting process that would suck the life and will to live out of you. not saying they should be giving kids away easily to whomever but I know several families personally that gave up during the process …
[flagged]
Can you please stop posting religious and/or ideological battle comments to HN? We're trying to avoid getting stuck on those topics here.
Also, you're linking to your site too much in comments. That will eventually cause HN readers to classify you as a spammer (we're already getting complaints about this), and it will also cause HN's software to classify your account as "primarily promotional" (see https://news.ycombinator.com/newsguidelines.html) and start filtering your site.
You're welcome to participate on HN but it needs to be within the site guidelines, which means using the site for intellectual curiosity, not self-promotion, and mostly not using it for hot-button flamewar topics, even though those topics may be of great social importance.
Btw, from reading your posts I don't have the sense that you're intending to use this site for self-promotion, but it's going to land that way with the community here because you're breaking the conventions about how to participate.
(p.s. I'm an admin here, in case that wasn't clear)
Dang, I'm not OP, but in fairness, wouldn't it be more appropriate to just remove the controversial article/posts driving this altogether, rather than police one side of a discussion's comments which will inevitably come up in any rational discussion (on this subject matter).
I've seen a few of these moderation posts, and I don't mean to be critical but you end up contradicting yourself over time, in what seems like value-based judgments.
The HN rules aren't applied or written consistently with proper definition, and some guidelines contradict other rules. At its core, they are without proper unique definition, and so become moving goalposts that any reader or commenter will never be able to follow consistently while continuing to participate.
To follow the guidelines consistently would only be possible by never participating. It fails basic rules needed to have a consistent rational basis, which is needed to be able to actually do as you ask in the first place.
Its circular without unambiguous definition.
Its hard for me to see this as ideological, when you can objectively see this isn't valid when there is literature and evidence (though not specifically referenced) that shows psychology impacts decisions, albeit indirectly. Indoctrination is a real thing, a real problem even, and is most common to totalitarian states.
I can appreciate wanting to keep a community clean, and the large amount of work that necessarily goes into that, and having personally moderated other communities, I've found the rules need to be consistent and non-contradictory if you want them followed. Otherwise you just create an endless pile of extra work for yourself, where mistakes in moderation will happen, and you get a bad reputation when it happens often.
When the article misinforms or is biased and thus gets things wrong through omission, isn't it the communities job to provide such counterbalance discussion so false or incorrect and misleading information is discarded by the thoughtful individual. Without such counter-balance you get a positive feedback system, rather than a negative feedback system, which lends itself more readily towards chaos and delusion.
As an example, I seem to recall reading a comment where some people promoted ingesting known toxic chemicals, to falsely promote health, which you said did not merit moderation.
If these conversations are a problem, removing the linked article in its entirety, seems to be a better approach rather than squelching only part of the conversation (which in my opinion has validity).
Moderators really don't have the time to research claim correctness, and targeted actions outside egregious violations run the risk of promoting the censored topic.
There can be no basis for arguments of value-based bias, and the distorted reflected appraisal that comes with sentiment manipulation, if the article is removed wholesale in its entirety.
There is a body of literature showing birth rate is directly impacted by economics, which is not necessarily just monetary but includes the personal costs that may go into decisions, psychology and indoctrination affect this as well, and many of these subtle aspects are often unrecognized or may be misclassified as ideological when they do have basis in objective external measure, ie. in reality.
While I wish that many of these things were fanciful unbased opinion, there are a number of experts who have written in-depth case studies on indoctrination, torture, cults, and brainwashing from first-hand accounts that credibly back indoctrination arguments and are what prompted the WHO to change their definition of torture to include psychological harm through maladaptive behavior induction.
Robert Lifton, and Joost Meerloo wrote many books on the subject matter.
If you cut off communication that would normally seek to raise the community to a higher understanding, all conversation degrades and falls instead to the stagnant lowest common denominator creating more work in moderation.
That's what I've seen over the years.
I can't get into Robert Lifton or any of that, but I can answer this:
> wouldn't it be more appropriate to just remove the controversial article/posts driving this
No, for two reasons: (1) HN commenters are expected to post thoughtful, curious comments and learn from each other, not fight ideological causes; and (2) it wouldn't work anyway, because there's no agreement about what the "controversial articles" actually are.
> rather than police one side of a discussion's comments
We don't moderate just one side, and if you think we do, I suggest you track my moderation comments more closely. It shouldn't take long to see that we're moderating commenters who break the site guidelines, regardless of which side of an argument they're on.
That doesn't mean we moderate everyone who breaks the rules, but that's not because we're only moderating one side, it's because we don't see everything that gets posted—there's far too much for us to read it all.
I can see your point about detailed specific defining of controversial topics, but guidelines to set expectations are possible. No politics, no hateful or abusive posts, etc. Its been done before.
I've seen moderator actions on here where people were warned for what amounted to correcting false information under the eschew flamebait rule, like they should have known and let false information stand uncontested. Not just warned verbally, but downvoted too, which has a cost on keeping any registered account active. Too many downvotes prevents karma buildup for extended periods or gets your account banned.
You could post guidelines on topics that are unacceptable. Other places have, you set the expectation and remove the people who don't follow it.
> We don't moderate just one side, and if you think we do, I suggest you track my moderation comments more closely.
I can see your point if you only consider the direct effects of moderation acts, but your moderation system doesn't primarily use strategies of direct moderation. Its primarily indirect by structure. Direct moderation requires consistent guidelines, uniquely specific, and follow rational principles and method. Indirect is about inspiring repression and fear.
You seem to neglect the indirect but prominent chilling effects caused by the overall use of an anaconda in the chandelier strategy, where the lack of a source of truth creates an echo chamber effect from any action you signal even if its unintentional, gets silenced and those discussing it punished.
This is a common structure used under repressive regimes by design for its network and fear effects, which touch on subject matter from Robert Lifton's and Joost Meerloo's work on the totalitarian state and mind, don't worry I won't be going further into this subject matter than this.
A sound rational community is largely defined by its capability to analyse the merits of a discussion containing disagreement, without threats of retribution or cost being imposed for expressing dissenting opinion or discussion that may be unfavorable (so long as its not abusive or destructive).
In fact disagreement is expected before agreement in such valuable discussions, and when allowed the process elevates both parties understanding even when an impasse is reached.
This is not the same in collectivist repressive communities, any disagreement is an attack on the group, where the person expressing such is an enemy to coerce, impose cost, be it personal or in more tangible terms.
In such systems a privileged agent signals, and thus controls, and the mob takes care of the rest. It always changes in unknowable ways ahead of time, and its a crazymaking spiral of guessing and reading tea leaves.
The rules in such cases are designed with the intent that they be broken, so they can be applied arbitrarily, and it turns into a race to the bottom with a spiral of diminishment.
As a moderator, when you signal to the community ambiguously that some things or views are breaking the rules without being uniquely specific, any point of view following similar lines regardless of rational soundness or validity can no longer be discussed without fear of retribution.
The guy had religious overtones, but indoctrination, is a real thing and worth of discussion with regards to childlessness, especially when the alternative of no discussion in this context is self-extinction of humanity. Birthrate replacement is 2.1, we are at 1.44 if I'm up to date on the metrics. Within two generations (40 years) we'd lose roughly 66% of the current population, and its still decreasing.
How many people are talking or commenting about indoctration or anything about that article now? No one. How many commented after you posted? No one.
That's how these repressive structures work, they operate indirectly so the person signaling can plausibly say we don't do this, when the mechanics of the structures in-place do this for them. Its misleading and could be construed as deceitful if one discarded goodfaith.
Its an inherent part of the insidiousness of such structures.
No one commented after because of the fear of retribution, not even to correct omissions made by that article.
You may not be you doing the extended punishing on the moderation side, but the people who you've signaled to are, and they do punish like any thuggish collective.
Point in case, even me drawing your attention to this to have an honest conversation with you and pointing out these observations in a non-combative way has already cost my account negatively. According to the karma score shown on my client, I'm down 5 points, or 8% of what would be needed to ban my account, for letting you know about a problem that you seem to have overlooked.
How can you ever get objective feedback to know about a problem from anyone, if it costs those same people dearly to give it to you.
Anyone who doesn't agree, even expressed marginally based on your signaling gets downvoted or squelched which is a debit on their karma bank. How can you talk about anything in such environments?
When this uses up the karma bank and goes too far, your systems exclude those people from the community by banning them, its all indirect, but the outcomes reflect the insidious design. Few upvote even thoughtful posts unless their is universal agreement.
These are problems that destroy community, and while it may seem like things are working fine right now. These type of systems tend to have a hallowing out effect where you don't notice the pitfalls eating away underneath undermining the structural integrity until after its too late to do anything.
A system is what a system actually does, not what people claim or intend it to do.
> guidelines to set expectations are possible. No politics, no hateful or abusive posts, etc.
"No politics" is not possible. We experimented with that once, and found that it made the site more political. I'll try to dig up a link or two about this. (Edit: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...)
As for "no hateful or abusive posts", hateful is certainly prohibited by the guidelines. Abusive also, but that's perhaps a vaguer term.
> warned for what amounted to correcting false information under the eschew flamebait rule, like they should have known and let false information stand uncontested
I'd like to see some specific links where we allegedly warned or banned someone just for correcting false information. That does not sound like us at all, and is certainly not what we intend. In most cases of this nature, it turns out that the commenter was breaking the site guidelines by, e.g., adding snark or name-calling to their comment. It's things like that which cause us to post warnings. Simply correcting false information in a neutral way is fine.
The rest of your comment gets harder for me to follow, and when you talk about us using insidious repressive anaconda chilling-effect tactics, I find it harder, because there's nothing in my experience (either outer or inner) that I can easily relate this to. It's not that I don't want to engage or reply, but I need something more specific than that.
I'll see if I can't find a few of them so I can link them directly then. I certainly remember seeing a few.
As for the chilling effects, the difficulty with these things has always been low visibility, since you have no direct reference to compare against.
The exact nature is still an active area of research; the going understanding seems to be that it's a conformity-to-norms distortion, where expressed views are warped around whatever actions get signaled against, under certain types of structures (uncertainty and ambiguity).
All you may see locally is skewed activity: lower volume, less or no discussion using the words (regardless of meaning or context) of posts signaled against in this way, and very little disagreement.
You'd see more collective downvoting of neutral posts and of any subjects related to or touching on the ones signaled against, though not enough for you to notice anything.
In many respects it follows a structure similar to education and reeducation under Mao, the so-called hot potato; Lifton covers the details.
Objectively, you'd see higher downvote counts for anyone who did try to discuss these things, more squelching, and even neutral positions that are only tangentially related may lose speaking power/karma.
Unless you are running statistical analysis on a basket of subjects against a solid baseline, you likely wouldn't notice a thing, and that's by structure, which is why it's insidious.
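The kind of baseline comparison described above can be sketched as a simple two-proportion z-test: compare how often a topic appears in comments during a baseline window versus a window after some visible moderation action. This is only a minimal sketch; the counts below are hypothetical, not real HN data, and `two_proportion_z` is an illustrative helper, not an existing library function.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """One-sided two-proportion z-test: is rate x2/n2 significantly below x1/n1?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error under H0
    return (p1 - p2) / se                            # z > 1.645 ~ one-sided p < 0.05

# Hypothetical counts: comments mentioning a topic, out of all comments,
# in a baseline window vs. a window after a visible moderation action.
z = two_proportion_z(x1=300, n1=10_000,   # baseline: 3.0% of comments
                     x2=220, n2=10_000)   # after:    2.2% of comments
print(f"z = {z:.2f}")  # a large positive z flags a drop worth investigating
```

With these made-up numbers the drop from 3.0% to 2.2% yields z ≈ 3.55, well past the usual significance threshold, even though a 0.8-point shift in topic frequency would be invisible to any individual reader, which is the point being made above.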
Certain structures of elements short-circuit perception and take advantage of psychological blindspots, while reducing our ability to recognize and react to the problem.
I don't want to get too theory-laden, but here's an example you should be able to try at any retail store.
If you go to a retail store, there are indicators of what the store owner values or doesn't value. If you see cameras everywhere watching everyone gratuitously, and things locked up, even the cheap items, you get a sense that the owner or operator doesn't value or trust their customers. This observation uses reflected appraisal, and once it's made, we may choose to ignore it, but it still impacts our reasoning in our choices, so that we remain consistent beings.
From the owner's perspective, they may need the cameras to collect data for insurance, or to prevent theft, but what matters is what's reflected.
These judgments largely happen beneath critical thought and perception; we can examine parts of them critically afterwards, but not in any way that will really change our judgment, because of a consistency blindspot.
Internally our psychology morphs to remain consistent, and that's a bias we all have. When you see car salesmen seeking agreement through carefully constructed phrasing, they are using this blindspot to influence you to buy a car. It's predictable: if you answer in anything but a few specific ways, they know you'll buy, barring other external factors.
Even if what we see was distorted intentionally, we won't notice.
The action, within the structure described, creates a distorted reflected appraisal centered around each action, keeping readers and commenters guessing. It's not something people will consciously notice unless they've been trained to, but it shows up in statistically significant ways.
It's impossible to determine the exact measure of the effect, because it's impossible to determine a reference measure of what people would have said or done if it didn't exist. It is significant enough to show up as drops in views of certain Wikipedia pages (those related to the NSA/PRISM disclosures) during a time of high news coverage.
I've linked a few articles relevant to this, fairly well respected in the academic literature, if you are inclined to read them; hopefully that is sufficient to demonstrate it's a real thing. It's abstract and largely unnoticed, but the effects are not.
https://digitalcommons.osgoode.yorku.ca/scholarly_works/3080... https://btlj.org/data/articles2016/vol31/31_1/0117_0182_Penn...
As for how it can become destructive, it usually comes down to the evaporative cooling effect in social networks, which follows a much older understanding from volunteerism: volunteers stop volunteering when they are coerced (or have costs imposed on them).
This holds in many places, quite commonly with entities like HOA boards of directors, where directors who volunteered may resign under social pressure and harassment from neighbors. There were some studies done in the '80s, IIRC, that confirmed this, but none stand out in my mind enough to link here.
https://blogs.cornell.edu/info2040/2015/10/14/the-evaporativ...