james-revisoai a year ago

That could be a good idea. The reverse already works quite well: using natural language as a sort of universal way to distil requirements into many programming languages, as Fireship showed here: https://www.youtube.com/watch?v=iO1mwxPNP5A

There are ways to coax the models into more and more convergent output (for now in existing languages, like JS or Python), especially with contextual text. I think the Fireship guy got about 80% identical output each run with a slightly non-zero temperature, and it was consistent across many tasks, which is fascinating (he doesn't fully elaborate on the setup, like temperature/top-k, which must have been > 0 because the output wasn't always identical).
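To make the temperature point concrete, here is a small sketch (not from the video; the token logits are made up) of how temperature reshapes a next-token distribution, and why a slightly-above-zero setting still converges on the same output most of the time:

```python
import math

def temperature_softmax(logits, temperature):
    # temperature -> 0 approaches argmax (deterministic output);
    # temperature = 1 leaves the distribution unchanged.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]  # hypothetical next-token logits
cold = temperature_softmax(logits, 0.2)
warm = temperature_softmax(logits, 1.0)

# At low temperature nearly all probability mass lands on the top token,
# so sampling usually (but not always) picks it.
print(cold[0] > warm[0])
```

At temperature 0.2 the top token gets over 99% of the mass here versus about 63% at temperature 1.0, which matches the "mostly the same, occasionally different" behaviour described above.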

In many ways we are building flows of information, and as humans we like to think back from the output, through processes, to the earliest building steps. Maybe a new language could capture that, with intermediate steps not defined in the code but instead executed by recall from models, generated on the fly, or served from some kind of code-generation semantic cache. It would make for so much easier reading, writing and debugging. After all, the main blocker for people, especially business people, learning to code is typically how disconnected most of the lines are from the actual output. Defining a class and then a main function is illogical until you understand the frame. Moving away from that paradigm could open programming up to a lot of people who like talking about outputs and first steps.
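A toy sketch of the "code-generation semantic cache" idea: steps are described in prose, and the implementation is generated on a cache miss and reused on later semantically-equivalent requests. Everything here is hypothetical; the crude normalization and the fake generator stand in for embeddings and a real model call.

```python
import hashlib

cache = {}

def normalize(step_description):
    # Crude semantic key: lowercase and collapse whitespace. A real system
    # would use embeddings and nearest-neighbour lookup instead.
    return " ".join(step_description.lower().split())

def run_step(step_description, generate):
    key = hashlib.sha256(normalize(step_description).encode()).hexdigest()
    if key not in cache:
        cache[key] = generate(step_description)  # model call only on a miss
    return cache[key]

# Usage: two phrasings that normalize identically hit the same entry,
# so the (expensive) generator runs only once.
calls = []
def fake_model(desc):
    calls.append(desc)
    return f"def step(): ...  # generated code for: {desc}"

run_step("Sort the  invoices by date", fake_model)
run_step("sort the invoices by date", fake_model)
print(len(calls))
```

The interesting design question is exactly the one raised above: how loose the equivalence between step descriptions can be before the cached code no longer matches the intent.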

hmmSceptical a year ago

People get PhDs in designing programming languages and compilers.

GPT models are at least 5 to 10 years out from being able to critique the choices those "very smart" people make.

Even when they do, we'll need to critique their critiques - I wouldn't worry about learning any language now :)