Ask HN: How to create AGI or prove whether consciousness is an emergent property

4 points by king07828 5 years ago

1. Set up a first computer with a neural network that 1) takes as input the audio and video output from a second computer (plus a microphone input carrying voice commands from the human controller) and 2) outputs keyboard and mouse commands to the second computer.
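Concretely, the plumbing for step 1 might look something like the sketch below. It assumes the second computer's display is mirrored to the first (e.g., over a capture card or VNC) so it can be grabbed as a local screen, and that its keyboard/mouse can be driven from this process; mss and pyautogui and the action format are illustrative choices, not part of the proposal.

    # Sketch of the observe/act loop for step 1. Assumes the second
    # computer's screen is mirrored locally (capture card / VNC) and
    # that its keyboard/mouse can be driven from this process.
    # mss and pyautogui are placeholder choices, not requirements.
    import numpy as np
    import mss
    import pyautogui

    def grab_frame(sct, monitor_index=1):
        # Capture one video frame of the (mirrored) second-computer display.
        shot = sct.grab(sct.monitors[monitor_index])
        return np.array(shot)  # H x W x 4 (BGRA) uint8 array

    def apply_action(action):
        # Translate the network's output into keyboard/mouse commands.
        # 'action' is a hypothetical dict produced by the policy network.
        if action.get("key"):
            pyautogui.press(action["key"])
        if action.get("mouse"):
            x, y = action["mouse"]
            pyautogui.moveTo(x, y)
            if action.get("click"):
                pyautogui.click()

    with mss.mss() as sct:
        frame = grab_frame(sct)
        apply_action({"key": "enter", "mouse": (100, 200), "click": False})

Audio capture from the second computer would be handled the same way (loop back its audio output into the first computer and read it as a microphone stream).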

2. Train the first computer's neural network to operate the second computer the way a human operator would, in response to instructions (voice commands) from a human controller, i.e., the neural network is the operator of the second computer and the human controller gives instructions to that operator.

3. The neural network may be similar to the OpenAI Five network [1]: CNNs plus embedding and FC layers at the audio and video inputs feed into a large LSTM layer, which feeds into FC layers for the keyboard and mouse outputs.
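A rough PyTorch sketch of that kind of architecture is below. The layer sizes, the key-vocabulary size, the mouse parameterization (dx, dy, click), and the use of precomputed audio features (e.g., spectrogram frames) are all placeholder assumptions; the real OpenAI Five network is far larger and more elaborate.

    # Rough sketch of step 3's architecture: CNN for video frames,
    # FC encoder for audio features, a shared LSTM core, and FC heads
    # for keyboard and mouse outputs. All sizes are placeholders.
    import torch
    import torch.nn as nn

    class OperatorNet(nn.Module):
        def __init__(self, n_keys=110, n_audio_feats=128, hidden=512):
            super().__init__()
            self.video_cnn = nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),
                nn.Linear(64 * 4 * 4, hidden), nn.ReLU(),
            )
            self.audio_fc = nn.Sequential(
                nn.Linear(n_audio_feats, hidden), nn.ReLU(),
            )
            self.lstm = nn.LSTM(2 * hidden, hidden, batch_first=True)
            self.key_head = nn.Linear(hidden, n_keys)   # logits over key vocabulary
            self.mouse_head = nn.Linear(hidden, 3)      # dx, dy, click logit

        def forward(self, video, audio, state=None):
            # video: (B, T, 3, H, W); audio: (B, T, n_audio_feats)
            B, T = video.shape[:2]
            v = self.video_cnn(video.reshape(B * T, *video.shape[2:]))
            v = v.reshape(B, T, -1)
            a = self.audio_fc(audio)
            h, state = self.lstm(torch.cat([v, a], dim=-1), state)
            return self.key_head(h), self.mouse_head(h), state

The recurrent state lets the network carry context across frames, which is what makes multi-step instructions ("open this, then do that") possible at all.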

4. The "reward" for the neural network is something like a laptop battery life meter that drains over time as the neural network operates the second computer and is refilled at the whim of the human controller, e.g., after the successful completion of commands. Eventually, with a large enough network and enough compute, the commands transition from "open this" and "do that" to "what will happen next?" and "how does that make you feel?". Sessions with human operators can be recorded and replayed so the neural network can begin learning from demonstrations, then graduate to live interaction with a human controller.
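A minimal sketch of that reward scheme and the behavioral-cloning warm start, assuming the OperatorNet sketch above; the drain/refill rates and the recorded-session format are invented purely for illustration.

    # Sketch of step 4: a battery-meter reward that drains each step and
    # is refilled when the human controller approves, plus a behavioral
    # cloning loss for pretraining on recorded human-operator sessions.
    import torch
    import torch.nn.functional as F

    class BatteryReward:
        def __init__(self, drain_per_step=0.001, refill=0.25):
            self.level = 1.0
            self.drain = drain_per_step
            self.refill = refill

        def step(self, controller_approved: bool) -> float:
            # Drain a little every step; the controller "recharges" at will.
            self.level = max(0.0, self.level - self.drain)
            if controller_approved:
                self.level = min(1.0, self.level + self.refill)
            # Use the level (or its change) as the per-step reward for
            # whatever RL algorithm trains the network.
            return self.level

    def behavioral_cloning_loss(model, video, audio, human_keys):
        # Keyboard-only imitation loss on a recorded human session.
        # video: (B, T, 3, H, W); audio: (B, T, F); human_keys: (B, T) long key ids
        key_logits, _, _ = model(video, audio)
        return F.cross_entropy(key_logits.reshape(-1, key_logits.shape[-1]),
                               human_keys.reshape(-1))

The same idea extends to the mouse head (e.g., a regression loss on dx/dy and a binary loss on clicks); only the keyboard case is shown to keep the sketch short.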

5. If emotions, intelligence, and consciousness are emergent properties, then those properties should emerge in the neural network's behavior.

6. If consciousness is not emergent, then the neural network may still be trained to operate a computer usefully, e.g., as a software programmer that does the grunt work of programming (performing a correct refactoring) while the human controller does the creative work (deciding what to refactor).

[1] see https://towardsdatascience.com/the-science-behind-openai-five-that-just-produced-one-of-the-greatest-breakthrough-in-the-history-b045bcdc2b69

p1esk 5 years ago

I don’t see any questions here. Seems like you figured it out. Let us know when you build it.

rl3 5 years ago

I fail to see how real emotion let alone consciousness would be emergent properties here.

At best, you're training a tool AI to emulate humans, including their emotional responses. Thus, the origin of emotion would not be endogenous. "Emotion" would simply lie in the same output probability space as mundane tasks.

The counter-argument here is that human emotion may be learned to some degree, but you have to consider that the input a human receives is vastly more complex than simply keyboard, mouse, and audio. The chemical signalling mechanisms and architecture of the human brain are just incredibly intricate by comparison.

peteradio 5 years ago

How do you give feedback to the computer for the "what will happen next" and especially the "how does that make you feel" questions?

dosy 5 years ago

emotions are just the way we experience rewards and losses.

we build up our emotional response network over time through experience, sublimating initial basic rewards (getting fed, getting attention, being loved, being petted) into more compound/sophisticated rewards.

what if consciousness as we experience it is independent of the substrate and doesn't emerge from it but is picked up/entangled with any substrate of the right and sufficient expressiveness, function and structure?

what if there is another sort of synthetic consciousness that exists only within the substrate and emerges out of (or can be designed in) it?

do you want to build computers for consciousness to inhabit or computers for synthetic awareness/AGI to exist in?

sort of like the difference between androids with a ghost and those without, in Ghost in the Shell.

croo 5 years ago

IMO emotions are not emergent properties of an AGI because it's a machine. It could learn how to manipulate the human controller into refilling its battery, but that's not the same.