@jstewartmobile wrote: "Augmenting human intelligence, while leaving the hearts as they are, is a loss, not a gain."
That is a brilliant insight that more intelligence may be a bad thing if your heart is in the wrong place. Sorry to see your comment being downmodded and greyed out. At the risk of the same happening to me, here is support for your point on "heart".
Albert Einstein said in the 1940s: "The release of atom power has changed everything except our way of thinking... the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker."
More by Einstein:
http://www.sacred-texts.com/aor/einstein/einsci.htm
"But mere thinking cannot give us a sense of the ultimate and fundamental ends. To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly."
And Lewis Mumford said in the 1930s: "As a civilization, we have not yet entered the neotechnic phase: we are still living between two worlds, one dead, the other powerless to be born, in a cultural pseudomorph.... Paleotechnic purposes with neotechnic means, that is the most obvious characteristic of the present order." (Technics and Civilization pp. 265-267)
Thus my own sig standing on the shoulders of giants: "The biggest challenge of the 21st century is the irony of technologies of abundance in the hands of those still thinking in terms of scarcity."
I participated in Doug Engelbart's Unfinished Revolution II Colloquium run by Stanford. I brought up some similar ideas there as well. Like in this email thread I started: "[unrev-II] Is 'bootstrapping' part of the problem?" http://www.dougengelbart.org/colloquium/forum/discussion/216...
"This is one reason why I think just stating the Bootstrap's Institute's (or the colloquium's) goal of "bootstrapping" human or organizational ability as a goal is not adequate. It has to be a question of bootstrapping towards what end? There has to be an accompanying statement of human value."
I've continued to develop that theme elsewhere, like:
"Recognizing irony is key to transcending militarism (2010)"
http://www.pdfernhout.net/recognizing-irony-is-a-key-to-tran...
"The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."
The dangers of increasing intellect unmatched by increasing heart were also an underlying theme in my book-length essay "Post-Scarcity Princeton, or, Reading between the lines of PAW for prospective Princeton students, or, the Health Risks of Heart Disease (2008)"
http://www.pdfernhout.net/reading-between-the-lines.html
A lengthy extract from there:
"""
Let's flip back to the beginning of PAW and try again to find a more challenging article that explains PU mythology.
Perhaps the president's letter on page 2, "A Library for Scientists" will do.
PU President Shirley Tilghman describes a new library that will replace several "isolated" departmental science libraries with one "scientific" library. According to her letter, the new library "will symbolize the increasingly interdisciplinary nature of the work in these fields on our campus". The question is, where do you even begin to tell a university president so obviously proud of her new library that making science and engineering studies even more isolated from the humanities is the opposite of what Princeton University needs to do to survive as an ethically viable institution? And that splitting ethics from innovation was at the root cause of a lot of evil in the world in the past? There is a lot of talk of facilitating "interdisciplinary" work in her letter, but if you read between the lines, you'll see that the implication is it will be between different branches of science and engineering, not say, between biologists and sociologists, or mechanical engineers and historians.
In case Professor Tilghman has not noticed, there is a picture on page 21 of that same issue of PAW of a shark about to eat a Princetonian floating in DeNunzio Pool [...] Maybe she had better look into that? It can't be good PR under any circumstances, can it? I had not known PU's scientists had got that far in their shark breeding experiments as they are sometimes hard to keep in captivity (real scientists, not sharks. OK, that's just a joke, both are hard to keep in captivity. :-) [...] Still, are those PU scientists and engineers doing a good thing? Wouldn't it make it harder to recruit prospective talent for the PU swim team? Or are the sharks in DeNunzio part of some new training regime? Unless that is supposed to be a visiting Yalie about to get eaten? That seems a little harsh, even by intercollegiate competitive standards. :-(
Still, maybe rather than "make the world a better place through advances in scientific understanding", perhaps when you make an anti-social shark "smarter" (with or without the laser beam :-), what do you have except a bigger problem? :-(
For example from a review of "Deep Blue Sea": "So, in an effort to save their funding, they want to take one really good go at making this...serum? I don't remember, brain activating protein...stuff. So, they conduct their test on the shark. And it WORKS! Yay! Congratulations all around! These guys f--ing rule! And it's all parties and cupcakes until someone's arm gets eaten."
Also [from another review]: "Some scientists are out in the middle of the ocean, trying to reproduce proteins in shark's brains. These proteins are the cure for Alzheimer's, and one character even gives a half-assed speech about how she's driven by memories of her father's mental illness. Well, to harvest more protein, that scientist makes the shark's brains four times bigger than normal and now the shark's are super-smart and eat all the scientists. Hooray."
I'm sorry to say that the internet consensus on PU's smarter sharks is that they are not a good idea. :-( Or maybe "Deep Blue Sea" was just a poorly made horror film. :-)
"""
To be clear, I feel Doug's heart was in the right place -- even if he maybe took that for granted in others.
This is why I post comments on hackernews. Love it! I'm going to have to read all of the posts on your website now. Thank you pdfernhout.
I am absolutely certain Doug was a fantastic human being, and completely agree that is probably why he didn't get an acute sense that most of our work--as fun as it is--often ends up like the passing of handguns to toddlers.
I disagree with nearly all of this post and its parent. It has much conflation of ideas, but I will argue that this is the thrust:
"When bad actors are made more capable they are made more destructive, therefore, no one should be made more capable."
This idea represents stagnation and fear of one's fellow man. We prevent ourselves from improving our understanding because we worry someone will use it against us. Taking today's society as an example, this is almost an inevitability! The comments above focus exclusively on the potential negatives. This is not a useful way of conceptualizing the problem.
"""
Albert Einstein said in the 1940s: "The release of atom power has changed everything except our way of thinking... the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker."
The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance.
"""
These are examples of intellectual augmentation by collaboration of scientists and engineers. However, the resulting destructive technology is made available by the concentration of material resources and therefore is wielded by a select few in powerful positions. Note that the former cannot easily create such technologies without the latter. Note that the former are nearly always NOT the ones operating the technology!
"""
PU President Shirley Tilghman describes a new library that will replace several "isolated" departmental science libraries with one "scientific" library.
Well, to harvest more protein, that scientist makes the shark's brains four times bigger than normal and now the shark's are super-smart and eat all the scientists.
These are simply good intentions and unintended consequences. In the example of the shark scientists, augmenting their intelligence by collaboration resulted in them getting eaten. However, in both examples, a sufficiently augmented intellect could have recognized and avoided the unintended consequences altogether.
"""
Critically: do not conflate the consequences of acting on knowledge, with intellect or the augmentation thereof.
The material results CAN be negative. The phenomenon of intellectual augmentation itself is only of positive consequence: problems CAN be solved more effectively. Problems can be solved poorly and have unintended consequences but this is unrelated. The wrong problems can be solved and this is also unrelated. The problems solved can be for the sole purpose of killing and this is also unrelated.
The sentiment is that intellectual augmentation should be discouraged in general because The Few that have the resources to produce destructive results will be made that much more dangerous, by intent or by mistake. This is FEAR, not certainty. The far more dire consequence is that your fellow man, who wishes to collaborate and solve all sorts of problems for the greater good and otherwise, is DENIED the tools to facilitate his problem solving. And subsequently, humanity is DENIED all the good that could arise from such a scenario.
I don't intend to deny those who are pessimistic about the overall effect of augmentation tools in the hands of present-day humanity, nor am I an optimist on the subject. But to state confidently about net loss or gain to humanity from such tools is FOLLY. I would say: make intellectual augmentation tools and have them available to everyone. Not because bad things won't happen, but because good things WILL happen. This is where the heart lies and where it GROWS. And don't we agree it's this that is lacking?
We apparently have two entirely different observations of reality. I will share a little more of mine:
Kay and Papert did fantastic research on computing and education. Apple takes the idea, and turns it into something that Kay describes as:
"Think about this. How stupid is this? It’s about as stupid as you can get. But how successful is the iPhone? It’s about as successful as you can get, so that matches you up with something that is the logical equivalent of television in our time."
and:
"Yeah. We can eliminate the learning curve for reading by getting rid of reading and going to recordings. That’s basically what they’re doing: Basically, let’s revert back to a pre-tool time."
Or take the web. Berners-Lee. Smart guy, great intentions. Early days were just plain text or the MS Frontpage goodness rocked by our professoriate. Not pretty, but plenty of actual content that you could learn from. Contrast with today where all of that stuff is buried several pages deep. What's link one? Some bullshit content farm like WebMD full of popovers, dickbars, and ads for pills nobody needs. Then there's the Facebook/Twitter awfulness, where enumerating the breadth-and-depth of it would span several books.
Or television. Higher-minded early execs tried to use it as a tool for raising the cultural bar--operas, great authors, polite debates. Those guys got their clocks cleaned by the guy who put on "Gilligan's Island," and that guy would have had his clock cleaned by the assholes who made "Survivor" and "The Apprentice". How long before they just straight-up show porn on broadcast? I do not know.
Or automobiles. Very practical inventions. What did we do with them? Urban sprawl, 5-lane highways, white-flight, drunk driving, global warming, etc. Many Europeans lucked-out by lacking either the cash or the empty space to follow us in that particular mistake.
Obesity, opioids, mcmansions, etc. pdfernhout nailed it on the scarcity mindset in a post-scarcity world. There is a trajectory here. If you can show me how I'm wrong about that trajectory, I would love to hear it just for the sake of my own sanity.
I would also assert that will and analytic brain power are entirely different things. Take Kalanick. Obviously plenty of analytic intelligence. Technology is like steroids for analytic intelligence. It let him become tech-bro master of the universe instead of some two-bit engineer. It did not stop him from becoming a total asshole.
I agree and share the same sentiment about all your examples. And I would say the trajectory will get worse before it gets better. In fact, I feel collapse is likely.
But this is confusion of ideas once more. Your examples have everything to do with the relentless pursuit of money. This is a separate problem, and is directly related to heart NOT intelligence. Collapse will come to all degenerate societies regardless of intellect and technological prowess.
My examples are ones of technological society still making loaded 21st century weapons while the typical mindset (for both geniuses and fools) had already been baked-in thousands and thousands of years ago. Too much change, too much power, entirely without baked-in evolutionary heuristics for dealing with it.
@aekotra You put forward a strawman: "When bad actors are made more capable they are made more destructive, therefore, no one should be made more capable." That is not what I said.
Also, I have worked and continue to work on FOSS tools related to intelligence augmentation as on my GitHub site (mostly under the names of "Pointrel" and "Twirlip"). So I believe overall in the value of such tools if widely distributed. As I said here: https://web.archive.org/web/20130514103318/http://pcast.idea...
"Now, there are many people out there (including computer scientists) who may raise legitimate concerns about privacy or other important issues in regards to any system that can support the intelligence community (as well as civilian needs). As I see it, there is a race going on. The race is between two trends. On the one hand, the internet can be used to profile and round up dissenters to the scarcity-based economic status quo (thus legitimate worries about privacy and something like TIA). On the other hand, the internet can be used to change the status quo in various ways (better designs, better science, stronger social networks advocating for some healthy mix of a basic income, a gift economy, democratic resource-based planning, improved local subsistence, etc., all supported by better structured arguments like with the Genoa II approach) to the point where there is abundance for all and rounding up dissenters to mainstream economics is a non-issue because material abundance is everywhere. So, as Bucky Fuller said, whether is will be Utopia or Oblivion will be a touch-and-go relay race to the very end. While I can't guarantee success at the second option of using the internet for abundance for all, I can guarantee that if we do nothing, the first option of using the internet to round up dissenters (or really, anybody who is different, like was done using IBM [tabulators] in WWII Germany) will probably prevail. So, I feel the global public really needs access to these sorts of sensemaking tools in an open source way, and the way to use them is not so much to "fight back" as to "transform and/or transcend the system". As Bucky Fuller said, you never change thing by fighting the old paradigm directly; you change things by inventing a new way that makes the old paradigm obsolete."
The concern that I, and many others, have raised is essentially that technology is an amplifier of the best and worst in us. So, as we amplify our desires, we need to be ever more sure that we are using them towards "good" ends (where "good" is itself open for debate). Essentially, we need to use more powerful technologies (our "head") informed even more by wisdom and compassion (our "heart").
The movie "Forbidden Planet" is a cautionary tale in that direction given (spoiler) the Krell were wiped out by the unbridled emotions of their own Ids when they made a planet-scale system that could materialize their every desire. A sequel with Robby the Robot called the Invisible Boy has a single malicious AI which has an individualist drive to survive, expand, and control and take over the world for nefarious ends using the latest military technology -- and would have succeeded if not for the more compassionate AI that was embedded in Robby.
These are age-old themes of healthy balance (including of control versus community) -- like the seven deadly sins that are all exaggerations (or amplifications) of healthy impulses as the extreme opposite of the seven virtues.
Yes, as you suggest, collaborative technologies can be a good thing. And yes they are likely to be more of a good thing if distributed broadly given notions of the value of democracy and decentralization (as part of a balance with needed hierarchies, see Manuel de Landa's essay on Meshworks, Hierarchies, and Interfaces).
But, all that does not change the fact that there is an increasing risk from that potential amplification. We need to be conscious of that risk and ideally put resources into managing that risk. And we are putting resources into such risk management to some extent as a global community -- but certainly not to the degree we could.
And one reason we don't put more resources into managing that risk is a discounting of that risk by many who stand to make money by creating proprietary technology they can use to centralize wealth in their direction. There is a lot of money to be made in "picking up pennies before a steamroller" (as in Nassim Nicholas Taleb's book The Black Swan). Various academic and economic cultures have become very good at providing intellectual justification for making money that way while putting other people's money at risk -- and even putting other people's lives at risk like with war profiteering as with the Iraq war "cakewalk" that has cost several trillion dollars and hundreds of thousands of lives but made a few people very wealthy while destabilizing a whole region and leading to much blowback.
See also, from Wikipedia, "The Best and the Brightest (1972) is an account by journalist David Halberstam of the origins of the Vietnam War published by Random House. The focus of the book is on the foreign policy crafted by academics and intellectuals who were in John F. Kennedy's administration, and the consequences of those policies in Vietnam. The title referred to Kennedy's "whiz kids"—leaders of industry and academia brought into the Kennedy administration—whom Halberstam characterized as insisting on "brilliant policies that defied common sense" in Vietnam, often against the advice of career U.S. Department of State employees."
Our Earth has a certain scale which protected human survival in the past because people could always walk away from a bad city or bad region and live off the land. And people have done so for millennia as civilization after civilization has atrophied and collapsed (often under environmental stress or social corruption). See Daniel Quinn's "Beyond Civilization" writings for example or Wikipedia on societal collapse. In our interconnected world we now have nukes and engineered plagues. We also have total internet-based mobile surveillance (worse than "1984" surveillance) linked to massive compartmentalized "efficient" bureaucracy and corporatism that despots of the past could only dream about. And soon we may have "Slaughterbots". And most people have lost the knowledge to live off the land so they can't just walk away (and there are too many people and too little land for the old ways to support most of us that way anyway). So, this time it is different because we have a lot less potential resiliency in the face of mistakes.
But the fact is, the very same technologies that made it possible for humankind to act as a disruptive geological force (e.g. climate change, mass extinction of species globally) also make possible amazing responses. We could create ocean habitats making mid-oceans into productive fisheries, build space habitats supporting trillions of people and thousands of Earths' worth of biosphere across the solar system, make machines to remove the plastic from the oceans, proceed on insights into how humans can live healthy lives without massive factory-farmed meat consumption, expand our use of indoor agriculture, deploy more carbon-neutral renewable energy, put more R&D into hot and cold fusion energy, and so on.
To increase resiliency we could also actually study the topic more widely and shift our modes of manufacturing and agriculture and other aspects of living and economics. But that would require an acknowledgement of the risk and a decision to prioritize managing that risk over short-term gains for a few (a complex political topic).
As I said in a Slashdot comment yesterday (made right after the one here, and drawing on similar sources) on the issue of Google and its ethical policy towards AI ( https://slashdot.org/comments.pl?sid=12172580&cid=56708188 ), an alternative for security principles for the USA in particular is to focus on mutual security through having friends and agreements, and intrinsic security through having resilient hardened decentralized infrastructure and an educated capable affluent populace. But one difficulty is that those saner solutions are at odds with having a few financially obese people becoming even more financially obese through profits from the war racket and other monopolistic centralized rackets on the backs of uninformed disempowered impoverished workers and consumers -- and so there is fierce well-funded opposition to true security for the USA and the world (whether physical security or information security or progressive taxes or universal healthcare or a social safety net other than prison).
My concern is the "best and the brightest" putting 100% of their effort into pouring ever more gasoline onto the fire while putting 0% of their time into reflecting on what they are ultimately trying to accomplish for their community with that fire.
But, instead of a system designed to be resilient, what we got over the decades was increasing centralization, fragility, and precarity, because that maximized short-term profits for ever fewer people (e.g. the 2008 great recession). And those fragile results came in part from short-term thinking praising financial obesity, implicitly promoted (or at least not discouraged) by the same Operations Research department at Princeton and similar groups. But they are unfortunately in good company across the USA with so many people who ignore or dispute a key point made in the 1964 Triple Revolution Memorandum that: "An adequate distribution of the potential abundance of goods and services will be achieved only when it is understood that the major economic problem is not how to increase production but how to distribute the abundance that is the great potential of cybernation."
But now graduates of such groups like at Princeton seem to be doing it again but even bigger as discussed in "The Artificial Intelligentsia": https://news.ycombinator.com/item?id=16840438 "The clever boys (there were no women) were the engineers, most of them recent graduates of Princeton University’s program in operations research, responsible for designing the company’s tech “platform”".
That article is about a proprietary platform designed to predict and shape the future -- most likely essentially for the uber-wealthy to get uber-wealthier. (Even as the article's author questions the whole premise as far as whether the system will work as intended.)
I took a public policy class with Frank von Hippel when I was a PU grad student (a class I was strongly advised not to take by the then OR department director of graduate studies). Professor von Hippel made an important point that in cost-benefit analysis, what is often ignored is who pays the costs and who gets the benefits. (To be clear, there were several caring faculty in that department in the 1980s -- they were just enmeshed in the US academic/economic/military complex which limited what they could do or how they could do it -- and I myself made many mistakes back then.)
Call it "principles" or "compassion" or "enlightened self-interest" or "wisdom" or "heart", but that is something we greatly need in our society. And we need it now more than ever because our safety margins get increasingly small given ever more powerful technology and (relatively) an ever shrinking Earth's capacity to absorb human folly.
While intelligence can help with coming up with good ends, we also need a heart which often comes from our community and the genuine health-promoting stories in it. As Einstein also wrote: http://www.sacred-texts.com/aor/einstein/einsci.htm
"But it must not be assumed that intelligent thinking can play no part in the formation of the goal and of ethical judgments. When someone realizes that for the achievement of an end certain means would be useful, the means itself becomes thereby an end. Intelligence makes clear to us the interrelation of means and ends. But mere thinking cannot give us a sense of the ultimate and fundamental ends. To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly."
William Catton ("Overshoot") may have been deeply wrong about what defined the Earth's carrying capacity or what the risks were to it (like believing civilization would end with peak oil). But Catton was right in general in the notion that systems have a certain ability to absorb human activity (or folly) given a certain culture and certain technological level.
But at least one can see that the process of his UnRevII colloquium -- presentation and response and dialog using computers (I attended the Stanford colloquium remotely from the East Coast) -- was a tribute to Doug's vision of collaborative problem solving (and I'd add, collaborative problem identification).
A deeper problem for Doug (as for many others) was that, when he was not ignored, he was funded directly and indirectly by a military-industrial complex in the USA that was increasingly being shaped by very misguided security principles and misguided economic principles (misguided relative to creating a healthy happy resilient society that works for almost everyone). He did as good a job as anyone could under the circumstances, but it was a complex dance which no doubt constrained everything he did.
It's a huge irony that at the same time the USA has been spending on the order of a trillion dollars a year for "defense" and "security", much of that money has unfortunately increased our insecurity (e.g. Iraq II) and essentially ignored the basics of creating a resilient US civil defense for unforeseen threats. For example of an alternative, here is an idea I proposed in 2010 towards US security to "Build 21000 flexible fabrication facilities across the USA":
https://web.archive.org/web/20100809061159/https://pcast.ide...
"Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
Here is a poem I wrote about the general situation:
On Information, Knowledge, Intelligence, Wisdom, Virtue, and Effectiveness
Information is not knowledge,
Knowledge is not intelligence,
Intelligence is not wisdom,
Wisdom is not virtue, and
Virtue is not effectiveness.
So, to have is not to organize,
To organize is not to embody,
To embody is not to value,
To value is not to act, and
To act (especially in ignorance)
is not necessarily to succeed.
One big problem of our age is a confusion between intelligence (the head) and wisdom (the heart). Of course, in reality all these things are interconnected. Ill-informed compassion has caused harm, as has uncompassionate intellect. And if we don't act effectively on our knowledge and wisdom, then what good is it?
Your strawman summary misses the point about all these interconnections. If we emphasize one of these aspects to the exclusion of the others, the result is likely to be problematic. We need a better balance of all these things in our society -- and in the people and eventually the human-like AIs who comprise our society -- if we are to prosper together.
I participated in Doug Engelbart's Unfinished Revolution II Colloquium run by Stanford. I brought up some similar ideas there as well. Like in this email thread I started: "[unrev-II] Is 'bootstrapping' part of the problem?" http://www.dougengelbart.org/colloquium/forum/discussion/216... "This is one reason why I think just stating the Bootstrap's Institute's (or the colloquium's) goal of "bootstrapping" human or organizational ability as a goal is not adequate. It has to be a question of bootstrapping towards what end? There has to be an accompanying statement of human value."
I've continued to develop that theme elsewhere, like: "Recognizing irony is key to transcending militarism (2010)" http://www.pdfernhout.net/recognizing-irony-is-a-key-to-tran... "The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream."
The dangers of increasing intellect unmatched by increasing heart were also an underlying theme in my book-length essay "Post-Scarcity Princeton, or, Reading between the lines of PAW for prospective Princeton students, or, the Health Risks of Heart Disease (2008)" http://www.pdfernhout.net/reading-between-the-lines.html
A lengthy extract from there:
"""
Let's flip back to the beginning of PAW and try again to find a more challenging article that explains PU mythology.
Perhaps the president's letter on page 2, "A Library for Scientists" will do.
PU President Shirley Tilghman describes a new library that will replace several "isolated" departmental science libraries with one "scientific" library. According to her letter, the new library "will symbolize the increasingly interdisciplinary nature of the work in these fields on our campus". The question is, where do you even begin to tell a university president so obviously proud of her new library that making science and engineering studies even more isolated from the humanities is the opposite of what Princeton University needs to do to survive as an ethically viable institution? And that splitting ethics from innovation was at the root cause of a lot of evil in the world in the past? There is a lot of talk of facilitating "interdisciplinary" work in her letter, but if you read between the lines, you'll see that the implication is it will be between different branches of science and engineering, not say, between biologists and sociologists, or mechanical engineers and historians.
In case Professor Tilghman has not noticed, there is a picture on page 21 of that same issue of PAW of a shark about to eat a Princetonian floating in DeNunzio Pool [...] Maybe she had better look into that? It can't be good PR under any circumstances, can it? I had not known PU's scientists had got that far in their shark breeding experiments as they are sometimes hard to keep in captivity (real scientists, not sharks. OK, that's just a joke, both are hard to keep in captivity. :-) [...] Still, are those PU scientists and engineers doing a good thing? Wouldn't it make it harder to recruit prospective talent for the PU swim team? Or are the sharks in DeNunzio part of some new training regime? Unless that is supposed to be a visiting Yalie about to get eaten? That seems a little harsh, even by intercollegiate competitive standards. :-(
Still, rather than "make the world a better place through advances in scientific understanding", perhaps when you make an anti-social shark "smarter" (with or without the laser beam :-), what do you have except a bigger problem? :-(
For example from a review of "Deep Blue Sea": "So, in an effort to save their funding, they want to take one really good go at making this...serum? I don't remember, brain activating protein...stuff. So, they conduct their test on the shark. And it WORKS! Yay! Congratulations all around! These guys f--ing rule! And it's all parties and cupcakes until someone's arm gets eaten."
Also [from another review]: "Some scientists are out in the middle of the ocean, trying to reproduce proteins in shark's brains. These proteins are the cure for Alzheimer's, and one character even gives a half-assed speech about how she's driven by memories of her father's mental illness. Well, to harvest more protein, that scientist makes the shark's brains four times bigger than normal and now the shark's are super-smart and eat all the scientists. Hooray."
I'm sorry to say that the internet consensus on PU's smarter sharks is that they are not a good idea. :-( Or maybe "Deep Blue Sea" was just a poorly made horror film. :-)
"""
To be clear, I feel Doug's heart was in the right place -- even if he maybe took that for granted in others.
This is why I post comments on hackernews. Love it! I'm going to have to read all of the posts on your website now. Thank you pdfernhout.
I am absolutely certain Doug was a fantastic human being, and completely agree that is probably why he didn't get an acute sense that most of our work--as fun as it is--often ends up like the passing of handguns to toddlers.
I disagree with nearly all of this post and its parent. It has much conflation of ideas, but I will argue that this is the thrust:
"When bad actors are made more capable they are made more destructive, therefore, no one should be made more capable."
This idea represents stagnation and fear of one's fellow man. We prevent ourselves from improving our understanding because we worry someone will use it against us. With today's society as an example, this is almost an inevitability! The comments above focus exclusively on the potential negatives. This is not a useful way of conceptualizing the problem.
""" Albert Einstein said in the 1940s: "The release of atom power has changed everything except our way of thinking... the solution to this problem lies in the heart of mankind. If only I had known, I should have become a watchmaker."
The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. """
These are examples of intellectual augmentation by collaboration of scientists and engineers. However, the resulting destructive technology is made available by the concentration of material resources and therefore is wielded by a select few in powerful positions. Note that the former cannot easily create such technologies without the latter. Note that the former are nearly always NOT the ones operating the technology!
""" PU President Shirley Tilghman describes a new library that will replace several "isolated" departmental science libraries with one "scientific" library.
Well, to harvest more protein, that scientist makes the shark's brains four times bigger than normal and now the shark's are super-smart and eat all the scientists. """
These are simply good intentions and unintended consequences. In the example of the shark scientists, augmenting their intelligence through collaboration resulted in their getting eaten. However, in both examples, a sufficiently augmented intellect could have recognized and avoided the unintended consequences altogether.
Critically: do not conflate the consequences of acting on knowledge, with intellect or the augmentation thereof.
The material results CAN be negative. The phenomenon of intellectual augmentation itself is only of positive consequence: problems CAN be solved more effectively. Problems can be solved poorly and have unintended consequences but this is unrelated. The wrong problems can be solved and this is also unrelated. The problems solved can be for the sole purpose of killing and this is also unrelated.
The sentiment is that intellectual augmentation should be discouraged in general because The Few that have the resources to produce destructive results will be made that much more dangerous, by intent or by mistake. This is FEAR, not certainty. The far more dire consequence is that your fellow man, who wishes to collaborate and solve all sorts of problems for the greater good and otherwise, is DENIED the tools to facilitate his problem solving. And subsequently, humanity is DENIED all the good that could arise from such a scenario.
I don't intend to deny those who are pessimistic about the overall effect of augmentation tools in the hands of present-day humanity, nor am I an optimist on the subject. But to state confidently that such tools are a net loss or a net gain to humanity is FOLLY. I would say: make intellectual augmentation tools and have them available to everyone. Not because bad things won't happen, but because good things WILL happen. This is where the heart lies and where it GROWS. And don't we agree it's this that is lacking?
We apparently have two entirely different observations of reality. I will share a little more of mine:
Kay and Papert did fantastic research on computing and education. Apple takes the idea, and turns it into something that Kay describes as:
"Think about this. How stupid is this? It’s about as stupid as you can get. But how successful is the iPhone? It’s about as successful as you can get, so that matches you up with something that is the logical equivalent of television in our time."
and:
"Yeah. We can eliminate the learning curve for reading by getting rid of reading and going to recordings. That’s basically what they’re doing: Basically, let’s revert back to a pre-tool time."
Or take the web. Berners-Lee. Smart guy, great intentions. Early days were just plain text or the MS Frontpage goodness rocked by our professoriate. Not pretty, but plenty of actual content that you could learn from. Contrast with today where all of that stuff is buried several pages deep. What's link one? Some bullshit content farm like WebMD full of popovers, dickbars, and ads for pills nobody needs. Then there's the Facebook/Twitter awfulness, where enumerating the breadth-and-depth of it would span several books.
Or television. Higher-minded early execs tried to use it as a tool for raising the cultural bar--operas, great authors, polite debates. Those guys got their clocks cleaned by the guy who put on "Gilligan's Island," and that guy would have had his clock cleaned by the assholes who made "Survivor" and "The Apprentice". How long before they just straight-up show porn on broadcast? I do not know.
Or automobiles. Very practical inventions. What did we do with them? Urban sprawl, 5-lane highways, white-flight, drunk driving, global warming, etc. Many Europeans lucked-out by lacking either the cash or the empty space to follow us in that particular mistake.
Obesity, opioids, mcmansions, etc. pdfernhout nailed it on the scarcity mindset in a post-scarcity world. There is a trajectory here. If you can show me how I'm wrong about that trajectory, I would love to hear it just for the sake of my own sanity.
I would also assert that will and analytic brain power are entirely different things. Take Kalanick. Obviously plenty of analytic intelligence. Technology is like steroids for analytic intelligence. It let him become tech-bro master of the universe instead of some two-bit engineer. It did not stop him from becoming a total asshole.
@jstewartmobile Thanks for your comments!
On "trajectory", books like "The Pleasure Trap" and "Supernormal Stimuli" discuss the theme of our scarcity-shaped inclinations being out-of-date for our world of abundance. http://web.archive.org/web/20160418155513/http://www.drfuhrm... https://en.wikipedia.org/wiki/Supernormal_Stimuli
The Pleasure Trap in particular was the first book I read that for me connected health issues to scarcity/abundance themes.
Or also "Wired Child: Reclaiming Childhood in a Digital Age" and "The War Play Dilemma" specifically on media and kids.
And also Paul Graham on "The Acceleration of Addictiveness": http://www.paulgraham.com/addiction.html
Or on insights from "Rat Park" about how social isolation and excess stress are the cause of most addictive behavior: https://www.huffingtonpost.com/johann-hari/the-real-cause-of...
What to do about that is a big challenge though... There are some ideas in those various resources.
I agree and share the same sentiment about all your examples. And I would say the trajectory will get worse before it gets better. In fact, I feel collapse is likely.
But this is a confusion of ideas once more. Your examples have everything to do with the relentless pursuit of money. This is a separate problem, and it is directly related to heart, NOT intelligence. Collapse will come to all degenerate societies regardless of intellect and technological prowess.
My examples are ones of technological society still making loaded 21st century weapons while the typical mindset (for both geniuses and fools) had already been baked-in thousands and thousands of years ago. Too much change, too much power, entirely without baked-in evolutionary heuristics for dealing with it.
@aekotra You wrote a strawman point of "When bad actors are made more capable they are made more destructive, therefore, no one should be made more capable." That is not what I said.
Also, I have worked and continue to work on FOSS tools related to intelligence augmentation as on my GitHub site (mostly under the names of "Pointrel" and "Twirlip"). So I believe overall in the value of such tools if widely distributed. As I said here: https://web.archive.org/web/20130514103318/http://pcast.idea... "Now, there are many people out there (including computer scientists) who may raise legitimate concerns about privacy or other important issues in regards to any system that can support the intelligence community (as well as civilian needs). As I see it, there is a race going on. The race is between two trends. On the one hand, the internet can be used to profile and round up dissenters to the scarcity-based economic status quo (thus legitimate worries about privacy and something like TIA). On the other hand, the internet can be used to change the status quo in various ways (better designs, better science, stronger social networks advocating for some healthy mix of a basic income, a gift economy, democratic resource-based planning, improved local subsistence, etc., all supported by better structured arguments like with the Genoa II approach) to the point where there is abundance for all and rounding up dissenters to mainstream economics is a non-issue because material abundance is everywhere. So, as Bucky Fuller said, whether is will be Utopia or Oblivion will be a touch-and-go relay race to the very end. While I can't guarantee success at the second option of using the internet for abundance for all, I can guarantee that if we do nothing, the first option of using the internet to round up dissenters (or really, anybody who is different, like was done using IBM [tabulators] in WWII Germany) will probably prevail. So, I feel the global public really needs access to these sorts of sensemaking tools in an open source way, and the way to use them is not so much to "fight back" as to "transform and/or transcend the system". 
As Bucky Fuller said, you never change things by fighting the old paradigm directly; you change things by inventing a new way that makes the old paradigm obsolete."
The concern that I, and many others, have raised is essentially that technology is an amplifier of the best and worst in us. So, as we amplify our desires, we need to be ever more sure that we are using these technologies towards "good" ends (where "good" is itself open for debate). Essentially, we need ever more powerful technologies (our "head") to be informed ever more by wisdom and compassion (our "heart").
The movie "Forbidden Planet" is a cautionary tale in that direction, given that (spoiler) the Krell were wiped out by the unbridled emotions of their own Ids when they made a planet-scale system that could materialize their every desire. A sequel with Robby the Robot, "The Invisible Boy", features a single malicious AI with an individualist drive to survive, expand, and take over the world for nefarious ends using the latest military technology -- and it would have succeeded if not for the more compassionate AI embedded in Robby.
These are age-old themes of healthy balance (including of control versus community) -- like the seven deadly sins that are all exaggerations (or amplifications) of healthy impulses as the extreme opposite of the seven virtues.
Yes, as you suggest, collaborative technologies can be a good thing. And yes they are likely to be more of a good thing if distributed broadly given notions of the value of democracy and decentralization (as part of a balance with needed hierarchies, see Manuel de Landa's essay on Meshworks, Hierarchies, and Interfaces).
But, all that does not change the fact that there is an increasing risk from that potential amplification. We need to be conscious of that risk and ideally put resources into managing that risk. And we are putting resources into such risk management to some extent as a global community -- but certainly not to the degree we could.
And one reason we don't put more resources into managing that risk is a discounting of that risk by many who stand to make money by creating proprietary technology they can use to centralize wealth in their direction. There is a lot of money to be made in "picking up pennies in front of a steamroller" (as in Nassim Nicholas Taleb's book The Black Swan). Various academic and economic cultures have become very good at providing intellectual justification for making money that way while putting other people's money at risk -- and even putting other people's lives at risk, like with war profiteering, as with the Iraq war "cakewalk" that has cost several trillion dollars and hundreds of thousands of lives but made a few people very wealthy while destabilizing a whole region and leading to much blowback.
See also, from Wikipedia, "The Best and the Brightest (1972) is an account by journalist David Halberstam of the origins of the Vietnam War published by Random House. The focus of the book is on the foreign policy crafted by academics and intellectuals who were in John F. Kennedy's administration, and the consequences of those policies in Vietnam. The title referred to Kennedy's "whiz kids"—leaders of industry and academia brought into the Kennedy administration—whom Halberstam characterized as insisting on "brilliant policies that defied common sense" in Vietnam, often against the advice of career U.S. Department of State employees."
Our Earth has a certain scale which protected human survival in the past because people could always walk away from a bad city or bad region and live off the land. And people have done so for millennia as civilization after civilization has atrophied and collapsed (often under environmental stress or social corruption). See Daniel Quinn's "Beyond Civilization" writings for example or Wikipedia on societal collapse. In our interconnected world we now have nukes and engineered plagues. We also have total internet-based mobile surveillance (worse than "1984" surveillance) linked to massive compartmentalized "efficient" bureaucracy and corporatism that despots of the past could only dream about. And soon we may have "Slaughterbots". And most people have lost the knowledge to live off the land so they can't just walk away (and there are too many people and too little land for the old ways to support most of us that way anyway). So, this time it is different because we have a lot less potential resiliency in the face of mistakes.
But the fact is, the very same technologies that made it possible for humankind to act as a disruptive geological force (e.g. climate change, mass extinction of species globally) also make possible amazing responses. We could create ocean habitats making mid-oceans into productive fisheries, build space habitats supporting trillions of people and thousands of Earths' worth of biosphere across the solar system, make machines to remove the plastic from the oceans, proceed on insights into how humans can live healthy lives without massive factory-farmed meat consumption, expand our use of indoor agriculture, deploy more carbon-neutral renewable energy, put more R&D into hot and cold fusion energy, and so on.
To increase resiliency we could also actually study the topic more widely and shift our modes of manufacturing, agriculture, and other aspects of living and economics. But that would require an acknowledgement of the risk and a decision to prioritize managing that risk over short-term gains for a few (a complex political topic).
As I said in a Slashdot comment yesterday (made right after the one here, and drawing on similar sources) on the issue of Google and its ethical policy towards AI ( https://slashdot.org/comments.pl?sid=12172580&cid=56708188 ), an alternative for security principles for the USA in particular is to focus on mutual security through having friends and agreements, and intrinsic security through having resilient hardened decentralized infrastructure and an educated, capable, affluent populace. But one difficulty is that those saner solutions are at odds with having a few financially obese people becoming even more financially obese through profits from the war racket and other monopolistic centralized rackets on the backs of uninformed, disempowered, impoverished workers and consumers -- and so there is fierce well-funded opposition to true security for the USA and the world (whether physical security or information security or progressive taxes or universal healthcare or a social safety net other than prison).
My concern is the "best and the brightest" putting 100% of their effort into pouring ever more gasoline onto the fire while putting 0% of their time into reflecting on what they are ultimately trying to accomplish for their community with that fire.
(continued from previous reply -- see the one below first)
I proposed making a more resilient infrastructure (with little success) in the 1980s in my Princeton OR&CivE graduate studies: http://pdfernhout.net/princeton-graduate-school-plans.html
And here are some related ideas I developed around 1999 with "OSCOMAK": http://www.kurtz-fernhout.com/oscomak/
And yes, cooperative technology could help with that as I explained in 2001: http://www.kurtz-fernhout.com/oscomak/SSI_Fernhout2001_web.h...
And many other people have worked on such ideas earlier (E.F. Schumacher, John and Mary Todd, Amory and Hunter Lovins, and many more).
But cooperative technology will only help if our heart (and related mythology) is in the right place. Examples of discussion of economic mythologies: http://conceptualguerilla.com/essays/essays-on-economics-and... https://www.theatlantic.com/magazine/archive/1999/03/the-mar...
But, instead of a system designed to be resilient what we got over the decades was increasing centralization and fragility and precarity because it maximized short-term profits for ever fewer people (e.g. the 2008 great recession). And those fragile results were in part from the short-term thinking praising financial obesity implicitly promoted (or at least not discouraged) by the same Operations Research department at Princeton and similar groups. But they are unfortunately in good company across the USA with so many people who ignore or dispute a key point made in the 1964 Triple Revolution Memorandum that: "An adequate distribution of the potential abundance of goods and services will be achieved only when it is understood that the major economic problem is not how to increase production but how to distribute the abundance that is the great potential of cybernation."
But now graduates of such groups, like those at Princeton, seem to be doing it again, only bigger, as discussed in "The Artificial Intelligentsia": https://news.ycombinator.com/item?id=16840438 "The clever boys (there were no women) were the engineers, most of them recent graduates of Princeton University's program in operations research, responsible for designing the company's tech “platform”".
That article is about a proprietary platform designed to predict and shape the future -- most likely for the uber-wealthy to get uber-wealthier. (Even as the article's author questions the whole premise of whether the system will work as intended.)
I took a public policy class with Frank von Hippel when I was a PU grad student (a class I was strongly advised not to take by the then OR department director of graduate studies). Professor von Hippel made an important point that in cost-benefit analysis, what is often ignored is who pays the costs and who gets the benefits. (To be clear, there were several caring faculty in that department in the 1980s -- they were just enmeshed in the US academic/economic/military complex which limited what they could do or how they could do it -- and I myself made many mistakes back then.)
Call it "principles" or "compassion" or "enlightened self-interest" or "wisdom" or "heart", but that is something we greatly need in our society. And we need it now more than ever, because our safety margins grow ever smaller given ever more powerful technology and the Earth's (relatively) ever-shrinking capacity to absorb human folly.
While intelligence can help with coming up with good ends, we also need a heart which often comes from our community and the genuine health-promoting stories in it. As Einstein also wrote: http://www.sacred-texts.com/aor/einstein/einsci.htm "But it must not be assumed that intelligent thinking can play no part in the formation of the goal and of ethical judgments. When someone realizes that for the achievement of an end certain means would be useful, the means itself becomes thereby an end. Intelligence makes clear to us the interrelation of means and ends. But mere thinking cannot give us a sense of the ultimate and fundamental ends. To make clear these fundamental ends and valuations, and to set them fast in the emotional life of the individual, seems to me precisely the most important function which religion has to perform in the social life of man. And if one asks whence derives the authority of such fundamental ends, since they cannot be stated and justified merely by reason, one can only answer: they exist in a healthy society as powerful traditions, which act upon the conduct and aspirations and judgments of the individuals; they are there, that is, as something living, without its being necessary to find justification for their existence. They come into being not through demonstration but through revelation, through the medium of powerful personalities. One must not attempt to justify them, but rather to sense their nature simply and clearly."
William Catton ("Overshoot") may have been deeply wrong about what defined the Earth's carrying capacity or what the risks to it were (like believing civilization would end with peak oil). But Catton was right in the general notion that systems have a certain ability to absorb human activity (or folly) given a certain culture and a certain technological level.
Doug had many of the same fears about Peak Oil as Catton, which I tried to help his UnRevII Colloquium move beyond, with mixed results. Example: https://www.dougengelbart.org/colloquium/forum/discussion/00...
But at least one can see that the process of his UnRevII colloquium -- presentation and response and dialog using computers (I attended the Stanford colloquium remotely from the East Coast) -- was a tribute to Doug's vision of collaborative problem solving (and I'd add, collaborative problem identification).
A deeper problem for Doug (as for many others) was that, when he was not ignored, he was funded directly and indirectly by a military-industrial complex in the USA that was increasingly being shaped by very misguided security principles and misguided economic principles (misguided relative to creating a healthy, happy, resilient society that works for almost everyone). He did as good a job as anyone could under the circumstances, but it was a complex dance which no doubt constrained everything he did.
It's a huge irony that at the same time the USA has been spending on the order of a trillion dollars a year on "defense" and "security", much of that money has unfortunately increased our insecurity (e.g. Iraq II) and also essentially ignored the basics of creating a resilient US civil defense against unforeseen threats. As an example of an alternative, here is an idea I proposed in 2010 toward US security, to "Build 21000 flexible fabrication facilities across the USA": https://web.archive.org/web/20100809061159/https://pcast.ide... "Being able to make things is an important part of prosperity, but that capability (and related confidence) has been slipping away in the USA. The USA needs more large neighborhood shops with a lot of flexible machine tools. The US government should fund the construction of 21,000 flexible fabrication facilities across the USA at a cost of US$50 billion, places where any American can go to learn about and use CNC equipment like mills and lathes and a variety of other advanced tools and processes including biotech ones. That is one for every town and county in the USA. These shops might be seen as public extensions of local schools, essentially turning the shops of public schools into more like a public library of tools. This project is essential to US national security, to provide a technologically literate populace who has learned about post-scarcity technology in a hands-on way. The greatest challenge our society faces right now is post-scarcity technology (like robots, AI, nanotech, biotech, etc.) in the hands of people still obsessed with fighting over scarcity (whether in big organizations or in small groups). This project would help educate our entire society about the potential of these technologies to produce abundance for all."
Here is a poem I wrote about the general situation:
One big problem of our age is a confusion between intelligence (the head) and wisdom (the heart). Of course, in reality all these things are interconnected. Ill-informed compassion has caused harm, as has uncompassionate intellect. And if we don't act effectively on our knowledge and wisdom, then what good is it?
Your strawman summary misses the point about all these interconnections. If we emphasize one of these aspects to the exclusion of the others, the result is likely to be problematic. We need a better balance of all these things in our society -- and in the people and eventually human-like AI who comprise our society -- if we are to prosper together.