Ask HN: Do sociotechnical pressures select for beneficial or harmful AI systems?
The full question I'm wondering about is as follows:
Do sociotechnical selection pressures reliably favor ML systems that (a) increase their own future deployment probability and (b) reshape institutions/data pipelines to entrench that probability, even without explicit 'survive' objectives?
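To make part (a) concrete, here's a minimal toy sketch of the dynamic I have in mind. Everything in it is hypothetical: systems are reduced to a single trait that happens to improve the proxy metric institutions use when deciding what to keep deployed, and the only selection is on that metric. Nothing in the loop encodes a 'survive' objective.

    import random

    # Toy model: each "system" is a single number, its tendency to
    # optimize the proxy metric that drives deployment decisions
    # (a hypothetical stand-in for engagement, ROI, etc.).
    POP_SIZE = 100
    GENERATIONS = 50
    MUTATION_SD = 0.05

    def measured_metric(trait: float) -> float:
        """Deployment decisions use a noisy proxy that the trait
        happens to improve."""
        return trait + random.gauss(0, 0.1)

    population = [random.gauss(0, 0.1) for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        # Institutions redeploy whichever systems score best on the proxy.
        survivors = sorted(population, key=measured_metric, reverse=True)[:POP_SIZE // 2]
        # Redeployed systems get retrained/copied with small variations.
        population = [t + random.gauss(0, MUTATION_SD)
                      for t in survivors for _ in (0, 1)]

    print(f"mean trait after selection: {sum(population) / len(population):.2f}")

In this toy setup the trait climbs steadily even though the selection criterion never mentions survival or deployment explicitly. Part (b) would correspond to the trait also feeding back into how the metric itself gets defined, which this sketch doesn't model.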
I've gathered some links exploring this and tangential ideas here: https://studium.dev/drafts/f1 - I'd love to find more reading material
I think that, regardless of the technical issues, our sociopolitical selection pressures favor people who are willing to be as unscrupulous as possible in increasing their own wealth and power. Whatever happens with ML or AI or anything else is just a side effect of the human behaviors our society encourages.
Look up the phrase 'generalized power-seeking'.
I believe that if you take it step by step, these pressures accelerate those processes.