I think this is a really thoughtful post. I'd encourage us (readers in general) not to trip over the specific illustrations that are being used to show a general point—namely, that when humans attempt to identify cause-effect relationships we're nearly always performing some kind of selection on top of the "raw" data available to us (which is, itself, nearly always a mere sub-set of all the "raw" data that could exist on that subject).
This observation, if we take it seriously, will encourage us (humans in general trying to make sense of the world) to be humble about the conclusions we reach and mildly skeptical about the conclusions others reach, even when those conclusions feel really incontrovertible.
I agree it's a thoughtful post and it makes a good point, but at the same time, this illustrates an issue with the way a lot of people take Scott Alexander's posts. He is clearly not putting this out there as absolute truth, just saying: here is an explanation, it sounds plausible, and I can't find much evidence immediately to disprove it. But he also says there are other equally plausible explanations he can't disprove. He often, if not always, puts an "epistemic status" disclaimer at the top of his posts, and the wider web community doesn't seem to know how to deal with those. They are used to interpreting bloggers and the opinion sphere as 100% believing and promoting anything they write about. But that is very much not what Scott is doing. He understands quite well that causality is way too complicated to ever conclusively answer a question like "why did history turn out the way it did?"
Cont'd: It's one thing to observe "Humans make mistakes all the time, therefore I should be humble about my conclusions and mildly skeptical toward others'."
It's another thing to have a FUNCTIONAL MODEL of HOW AND WHY humans make certain categories of mistake.
Such a thing can help us sort through proposed answers to complex questions with better discernment than just the basic realization that "this was arrived at by a fallible human, therefore it could be wrong, so I won't hold it too dogmatically."
All models are wrong, some are useful
It's not causal explanations you need to avoid, it's junk! And while I'm glad that the article criticises one terrible theory promoted by Scott Alexander, I'm not impressed by their alternative LSD theory, or their many bold, unsubstantiated claims about what affected what (we do not, for instance, know that political assassinations affected civil rights law in both directions).
Causal graphs are only circular if you are sloppy with time! We do not know if LSD in 1968 caused hippies in 1969, but we can rule out that hippies in 1969 caused LSD in 1968. And it may seem trivial, but it's not. That things don't happen before their causes is the main tool we have to break endless debates about what caused what, but both the article writer, Scott A. and the author he reviews seem to thrive a little too well in the world of muddy explanations.
> Causal graphs are only circular if you are sloppy with time! We do not know if LSD in 1968 caused hippies in 1969, but we can rule out that hippies in 1969 caused LSD in 1968. And it may seem trivial, but it's not.
I think you're missing most of the point of those diagrams, and the concept of feedback loops in general.
Yes, the cause cannot happen before the effect. But there was no singular "LSD" event in 1968 and no singular "hippies" event in 1969 - the two phenomena were feeding on each other over time. 1968 and 1969 are just years in which people now agree the phenomena crossed some notability thresholds.
Both here and in the general case, in such feedback loops you often won't know what the original cause was - the starting point has been lost to history, and the earliest records you can find are from after the feedback loop had already started. Did some random person ingest some LSD and, because of the experience, become the very first OG hippie? Or did the OG hippie do some LSD with their friends, and the shared experience made those friends more susceptible to hippie ideas? Who knows?
Then there's an issue of boundaries. Even if you can point at some concrete events in time, like e.g. date of founding a company, it's probably not what you're looking for anyway. That decision itself is a discrete outcome of a different feedback system that's been running for some time, and that feedback system - not the singular signing of a document - provided the influences you're really looking for.
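The "which came first" problem in the feedback loop described above can be illustrated with a toy simulation (all parameters invented, nothing here is real data): two mutually reinforcing quantities whose later trajectories look the same regardless of which one got the initial nudge.

```python
# Toy model (all parameters invented) of two mutually reinforcing phenomena.
# Whichever one gets the initial nudge, the trajectories soon look alike,
# so the "original cause" cannot be recovered from later observations.

def run(lsd0, hippies0, steps=30):
    lsd, hippies = lsd0, hippies0
    history = []
    for _ in range(steps):
        # each phenomenon grows in proportion to the other (positive feedback)
        lsd, hippies = lsd + 0.1 * hippies, hippies + 0.1 * lsd
        history.append((lsd, hippies))
    return history

a = run(lsd0=1.0, hippies0=0.0)  # the LSD-first history
b = run(lsd0=0.0, hippies0=1.0)  # the hippies-first history

# by the end, both histories settle into (nearly) the same 1:1 ratio
ra = a[-1][0] / a[-1][1]
rb = b[-1][0] / b[-1][1]
print(round(ra, 3), round(rb, 3))
```

Both runs converge toward the same balanced ratio, so an observer arriving late sees two indistinguishable feedback loops and has no way to tell which variable was seeded first.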
Just want to say that this is the type of comment we want less of. Well over 75% of it is name calling, and the rest can barely be called an argument, let alone an effective one.
That's funny, because that's what I say about them. As I see it, I'm not calling them names, I'm merely saying I think their arguments are shoddy and not at all persuasive.
> Causal graphs are only circular if you are sloppy with time! We do not know if LSD in 1968 caused hippies in 1969, but we can rule out that hippies in 1969 caused LSD in 1968.
One hippie in 1968 could have ~dreams of a massive population of newly minted hippies in 1969 (in this case: virtual hippies), and that dream could motivate them to manufacture and distribute LSD in pursuit of that goal.
Lots of causality is due to false beliefs, and at the cognitive level distortion of time may be just another distortion. What time "is" is to some degree a bit hand wavy in the first place, isn't it?
Causal relationships can go in reverse time. For instance, I know I will be hungry tomorrow so today I go to the grocery store. Tomorrow's breakfast causes today's shopping.
Perhaps better not to think too much on this though.
As David Hume points out, you don't actually know you'll be hungry tomorrow. You assume you will be based on induction from similar past events, but this is not certain. (He does concede that it's reasonable to plan as if it were a certainty in such cases, in the absence of any falsifying evidence.)
The famous counterexample metaphor is farmyard turkeys. For months, for their entire lives, the turkeys observe the diligent farmer bringing food and water, providing for their every need so that they don't have to do anything. On the day before Thanksgiving, a turkey philosopher confidently declares that, based on this prior evidence, we can conclude that the farmer, their great benefactor, exists only to serve and further the interests of turkeydom.
Perfect. The philosopher turkey views the past as causing the future while in fact the farmer’s future meal caused the turkey’s past experiences. A great example of how serious an error rejecting reverse causality can be.
The turkey is simply wrong in its inductive reasoning. Your hunger tomorrow will be a consequence of the forward causal process of metabolism. That the turkey's demise occurs specifically on Thanksgiving Day is partly determined by a cultural attitude having its origin in past events.
Aristotle, of course, used a broader ontology of causes than is commonly used today, but there should be no confusion if we are careful to define what we mean. In particular, final causes are more commonly regarded as psychological attitudes, whereby if someone acts now to bring about a certain circumstance in the future, the circumstance is caused by what they were thinking prior to the action.
Hume uses the example of the sun rising: though the sun has (so far) always risen in the east, there is no logical reason to assume that it will _necessarily_ do the same tomorrow. Though we can reasonably plan for this as the most likely scenario based on available evidence, it is not _necessarily_ going to happen.
There is, likewise, no logically necessary reason to assume that the causal process of metabolism will continue to work tomorrow as it always has in the past, though it is indeed reasonable to prepare for that outcome as the most likely one.
That is true, but here I am discussing the different issue (raised by simple-thoughts) of whether causality can work backwards in time.
If someone buys something to eat at breakfast tomorrow, but dies before then, there was no future event to cause the purchase.
By that logic if for whatever reason you skip breakfast tomorrow, a non-existent thing will have been a cause of your actions.
The future is possibilities that are not actualized yet. So, whether or not I actually eat breakfast is irrelevant, as the future potentiality is the cause of my behavior today. Once tomorrow is today, it is no longer the future.
Your present prediction is the cause. You could even be predicting something that is logically impossible, i.e. there is no potentiality, still the prediction (in the present) would cause you to do something about it.
Thoughts aren't non-existent though, they have a different type of existence.
I think time may be similar, in that while we seem to have an innate conscious experience/perception of ~absolute time and absolute reality, the way the subconscious mind implements the portion of reality it is the source of may be substantially different.
It's the idea of tomorrow's breakfast that causes today's shopping. You're probably right that tomorrow you'll need to eat breakfast, but you might not be.
>Tomorrow's breakfast causes todays shopping.
This is a logical inconsistency. Tomorrow's breakfast does not yet exist, only your "knowing" about it. Your thought has already happened today, therefore "knowing" you will be hungry is the true cause, not the potential for breakfast itself.
Also Vietnam vets and LSD-fueled hippies exist in the same set, and each can have effects that go in different directions.
On top of that, merging their effects (in policy setting, for example) is something of a barrier to attributing causality. It forces a two-dimensional signal down to one dimension. Neural networks can backpropagate attribution through that, but only with the convention that this attribution is the marginal effect. And that's a complex problem: who would be responsible for a policy change when one population has contributed 48% over the long term, but a short-term population adds 4%? Which population is the real cause: the 48% or the 4%? Is there a right answer to this question?
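One principled answer to the 48%-vs-4% question (not the only one, and not from the article) is a Shapley-style split: average each group's marginal effect over every ordering in which it could have joined. The 50% threshold below is an invented assumption purely for illustration.

```python
from itertools import permutations
from math import factorial

# Invented numbers: two populations' support levels, and an assumed rule
# that the policy flips only once combined support passes 50%.
support = {"long_term": 48, "short_term": 4}

def outcome(coalition):
    # 1 if this coalition alone is enough to flip the policy, else 0
    return 1 if sum(support[p] for p in coalition) > 50 else 0

def shapley(player):
    # average the player's marginal contribution over all join orders
    players = list(support)
    total = 0.0
    for order in permutations(players):
        i = order.index(player)
        before = set(order[:i])
        total += outcome(before | {player}) - outcome(before)
    return total / factorial(len(players))

print(shapley("long_term"), shapley("short_term"))  # prints: 0.5 0.5
```

Under the threshold assumption, neither group flips the policy alone and both flip it together, so each receives exactly half the credit - one concrete way of saying "there is a defensible answer, but it depends on how you model the outcome."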
Your past experiences have caused you to expect you'll be hungry tomorrow, but shy of someone building a time machine there's no way for causal relationships to actually go backwards. If you wake up tomorrow craving bacon there's no mechanism for you to cause your past self to pick some up.
> Scott A. and the author he reviews seem to thrive a little too well in the world of muddy explanations.
The problem of this causal explanation perhaps lies with the source material Scott Alexander cited without adequate critical analysis, namely David Brooks’ _BoBos in Paradise_.
“If Books Could Kill”, co-hosted by Michael Hobbes and Peter Shamshiri, analyzes and humorously takes down book-length transgressions against critical thinking. A recent episode adroitly and conclusively disassembles Brooks’ _BoBos in Paradise_ as quite harmful and comically wrongheaded. [0]
[0] https://podcasts.apple.com/us/podcast/if-books-could-kill/id...
I liked the article - I'm reading a management book that uses system diagrams to describe business processes and it separates resources from rates and qualities. The article muddies that with its diagrams but still gets the point across.
An example from the book is the resources of recurring customers and staff. If you add marketing, you'll gain customers, and those customers will feed back and bring in more customers by word of mouth if your service is high quality (positive feedback); but you'll also lose customers with poor-quality service, and that can be negative feedback. These feedbacks aren't always flowing in only one way. If marketing leads to a surge in customers and you don't hire more staff, quality will sink, and staff might leave due to being overworked. This feeds back into having less staff. The book grabs hold of the causality: you have to understand how your actions will affect the entire system and plan for it.
In terms of determining causality, it's true that major events aren't ever simply driven by a single cause, but philosophy has studied this in depth, and that's where I think the article leaves the discussion too early. The arrow of time is not discussed, and neither are necessary, sufficient, or contributory causes. LSD and Harvard admissions may be contributory to the events of the 1960s, but not sufficient or necessary to explain assassinations.
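The customers-and-staff loops described above can be sketched as a tiny simulation. Every rate below is invented for illustration; the point is only the shape of the dynamics, where hiring alongside marketing keeps the positive loop compounding while skipping hiring lets the negative loop win.

```python
# Minimal sketch (all rates invented) of the loops described above:
# marketing adds customers, word of mouth compounds them while quality
# holds, and overloaded staff quit - which sinks quality and churns customers.

def simulate(months=12, marketing=20.0, hire_rate=0.0):
    customers, staff = 100.0, 10.0
    for _ in range(months):
        quality = min(1.0, staff * 15 / customers)            # capacity vs demand
        customers += marketing + 0.05 * quality * customers   # ads + word of mouth
        customers -= 0.10 * (1 - quality) * customers         # churn on poor service
        staff += hire_rate - 0.5 * (1 - quality) * staff      # overworked staff leave
    return customers, staff

no_hiring = simulate(hire_rate=0.0)  # the surge outruns staff; quality collapses
hiring = simulate(hire_rate=2.0)     # hiring keeps quality (and growth) up
```

After a year, the hiring scenario ends with both more customers and more staff than the no-hiring one, which is the book's point restated: you plan for how an action propagates through the whole system, not just its first-order effect.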
What is the difference between causal projection and just not having enough data to determine the structure? Is it just that someone has picked sides and thus tests a slightly different graph?
Also I've noticed that many substack articles are doubled up. I read the article, and then it repeats itself. What's up with that?
I think most of the cases of "causal projection" happen organically - someone trying to explain B checks out various lines of reasoning until, at some point, they find a linear sequence[0] linking A and B that seems particularly strong in effect[1], and they focus on developing and promoting it.
To do the reverse - to start with a full causal graph and trim it down to a DAG - requires you to explicitly, intentionally make your argument flawed. Doing this would be malicious. I don't think this happens often, because developing a full causal graph takes a lot of work, which you don't need if your goal is to just lie to people.
--
[0] - Or a DAG, but you rarely see people capable of conceptualizing DAGs on the Internet. Even the software industry is still mostly stuck glorifying trees, and UX experts keep telling us most people aren't capable of working with trees either!
[1] - Or interesting for other reasons. This is how you get motivated arguments which aren't technically wrong, just don't tell the whole story, but they happen to align with someone's interests.
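The "find the strongest chain and promote it" move described above can be sketched on a toy weighted graph. The graph, the weights, and the idea of multiplying "strength of influence" along a path are all invented for illustration:

```python
# Hedged sketch of "causal projection" (graph and weights invented): from a
# full weighted causal graph, keep only the single strongest chain from A
# to B - roughly what a tidy one-line historical explanation does.

graph = {  # edge weight = invented "strength of influence" in (0, 1]
    "A": {"X": 0.9, "Y": 0.4},
    "X": {"B": 0.8},
    "Y": {"X": 0.5, "B": 0.7},
    "B": {},
}

def strongest_chain(src, dst, strength=1.0, path=()):
    path = path + (src,)
    if src == dst:
        return strength, path
    best = (0.0, path)  # fallback when no route to dst exists
    for nxt, w in graph[src].items():
        if nxt not in path:  # simple paths only
            best = max(best, strongest_chain(nxt, dst, strength * w, path))
    return best

strength, path = strongest_chain("A", "B")
print(path)  # prints: ('A', 'X', 'B')
```

Everything not on the winning chain (here, the whole Y branch) silently disappears from the story, which is exactly the projection the parent comment describes.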
How much of our propensity to make causal statements depends on the language we use?
In English, we structure sentences as either a question, answer, or command of causality. We have "independent clauses" but the moment a clause gains utility is when dependence is added to it.
The only methods we have to express noncausal ideas are additive. Like an expansion pack for the English language, we just slap on some extra grammar and jargon, then call it good. It's a lot of messy boilerplate that we have to use every time we need abstraction. It's no wonder that we have ended up with doctor-speak and programmer jargon.
This has me wondering: what if we did have first-class noncausal language? What would that even look like? How would we structure clauses? Is there even a way to do it, or is what I am proposing simply what one finds searching the antonym of causal itself: redundant, frivolous, irrelevant, or meaningless?
Made me think of Markov chains, and the PageRank algorithm. We should construct causal graphs, then give each node 1/n weight and iterate, splitting the causal weight out along the edges, until the Markov chain reaches a steady state. Then you know what is important causally and what isn't.
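That proposal can be sketched as a plain power iteration. The example graph, the damping factor, and the sink-node handling below are all invented assumptions in the usual PageRank style:

```python
# A rough sketch of the proposal above (graph and damping invented):
# start every node at 1/n, repeatedly pass weight along causal edges
# PageRank-style, and read the steady state as "causal importance".

edges = {  # "X": [...] means "X is claimed to cause each listed node"
    "LSD": ["hippies"],
    "hippies": ["counterculture"],
    "vets": ["counterculture"],
    "counterculture": ["policy"],
    "policy": [],
}
nodes = list(edges)
n = len(nodes)
damping = 0.85

rank = {v: 1.0 / n for v in nodes}
for _ in range(50):  # plenty of iterations for a graph this small to converge
    new = {v: (1 - damping) / n for v in nodes}
    for v, outs in edges.items():
        targets = outs if outs else nodes  # a sink spreads its weight evenly
        for t in targets:
            new[t] += damping * rank[v] / len(targets)
    rank = new

print(max(rank, key=rank.get))  # prints: policy
```

Note the caveat this makes visible: the steady state rewards nodes with many causal ancestors (the most "caused" outcomes), which is one possible reading of "important causally" but not the only one.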
"Causal Explanations Considered Harmful" seems to clearly want to invoke Dijkstra's "Go To Statement Considered Harmful", which played an important role in changing programming style. But do we really want to see causal explanations go the way of the "goto"? Good luck getting your car fixed, and god help you if you are ill. I'm more comfortable with the essay "superficial analyses considered harmful".
I apologize this may sound a little strange, but let me ask:
What do you think of this quote? What about the person who says "it worked yesterday" when yesterday they were using somebody else's computer? Just asking for your thoughts...
"Considered Harmful" Essays Considered Harmful
https://news.ycombinator.com/item?id=9744916
I view the article as aspirational, and a lot of the comments here seem to fetishize a trinity of time and certitude and causality.
I appreciate attempts to explore diagramming, and so I appreciate the article. I am also informed by past exposure to Markov chains.
I am aware of Prigogine, and so I accept that reasoning about a dynamic system relies on its dynamic state. A simple example would be:
opening the gates causes water to be released from the dam
But that's predicated on there being water captured behind the dam. Which requires rainfall. It also requires a dam.
A different example would be:
adding additional propellant makes it go faster
This is reasonably straightforward construction for a projectile, less so for a jet engine or a piston engine which dynamically reaches the functional state which we wish to reason about.
As for LSD, how did the author miss Tim Leary? Referring to a Leary personality diagnosis radar chart I can't find an obvious mapping in the dynamic of post vs comments. Best WAG is that neither is expressing dominance (both are seeking order and guidance), with the article more in the region of acceptance and the comments more in the range of suspicion and hostility.
About David Brooks' explanations:
https://statmodeling.stat.columbia.edu/2015/06/16/the-david-...
This is why we do randomized experiments, just FYI.
Obviously, there are many situations where we can't do them (e.g., randomizing people to smoking / not smoking). In those cases, the challenges the article discusses become relevant.
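A minimal sketch of why randomization helps, with entirely invented numbers: a hidden trait drives both "treatment" uptake and the outcome, so the naive observational contrast overstates a true effect of +1.0, while random assignment breaks the link to the hidden trait and recovers it.

```python
import random

random.seed(0)

TRUE_EFFECT = 1.0  # the effect we would like to estimate (invented)

def draw(randomized):
    trait = random.gauss(0, 1)  # hidden confounder
    if randomized:
        treated = random.random() < 0.5
    else:
        # the same trait also drives who "chooses" treatment
        treated = random.random() < (0.8 if trait > 0 else 0.2)
    outcome = TRUE_EFFECT * treated + 2.0 * trait + random.gauss(0, 1)
    return treated, outcome

def estimate(randomized, n=100_000):
    # difference in mean outcome between treated and untreated groups
    groups = {True: [], False: []}
    for _ in range(n):
        t, y = draw(randomized)
        groups[t].append(y)
    return sum(groups[True]) / len(groups[True]) - sum(groups[False]) / len(groups[False])

obs = estimate(randomized=False)  # inflated by the confounder (around 2.9 here)
rct = estimate(randomized=True)   # close to the true effect of 1.0
print(round(obs, 2), round(rct, 2))
```

The observational estimate nearly triples the true effect because trait-driven selection does the heavy lifting; the randomized one lands near 1.0, which is the whole argument for experiments in one toy example.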
The reason why we know having no father figure leads to a higher chance of committing violent crime is not some kind of guesswork about causality; it is that it's the best predictor.
Dishonesty is considered harmful.
I'll add that causal relationships should be multi-dimensional: the Vietnam vet and LSD-fueled hippy having their own truth can be modeled as subgroups that add arrows in their dimensions to the graph.
The rabbit hole goes deeper when you introduce the concepts of blame and responsibility. People all too often fully assign blame to one piece of the giant network of causes that result in an outcome.
Another "considered harmful" post. Please stop.
Idea: a "Considered Harmful Posts Considered Harmful" post.
From 2015: https://news.ycombinator.com/item?id=9744916
Recursive harm
Not so original
Ok, sure, when David Brooks, or the Freakonomics guys, or the Guns, Germs, Steel guy, claim they've nailed the causality of something, that's often facile gibberish.
All this does nothing to undercut the validity of causality itself. I hope that's obvious, despite the clickbait title of the OP.
Aren't "Considered Harmful" statements themselves a causal explanation?