norton120 11 hours ago

This repo is primarily a think piece at the moment, with the intent to flesh out the accompanying code as the ideas take shape.

  • verdverm 11 hours ago

    LEAN manufacturing has a high-level belief that machines should be made to work around humans rather than making humans work around the machine. Humans shouldn't have to make complex movements when working with the machine; rather, the machines should be designed in a way that accounts for how humans work. Doing so improves outcomes.

    How might this similarly apply to AI & robots?

    • norton120 10 hours ago

      I think if we swap “human” with “worker” that is spot on with this line of thinking. In the case of LEAN manufacturing, the system is optimized for the thing doing the work (the worker). If our goal is for the “worker” to be an agentic process and not a human, then we should be optimizing the system for that worker’s needs and not forcing the process to do complex movements around a code base optimized for humans.

      • verdverm 10 hours ago

        The problem is that you are unlikely to be able to remove the human, so the worker will eventually become a human, because the liability will eventually land on a human and the code will eventually need to be read and modified by a human. The agents will never be mistake-free, much like us.

        • norton120 9 hours ago

          Humans will need to have visibility into application code, sure, but not necessarily optimized visibility - compiling and transpiling are good precedents here. There was (is?) likely an argument to be made that humans need to be able to directly review bytecode or minified JS, but those certainly aren't optimized for humans. I do think there could be some version of an IDE that makes human traversal of a bot-optimized codebase easier, but the more we expect the LLM to do, the more we will need to favor optimizing for the LLM over optimizing for human developers.

          • verdverm 8 hours ago

            I tend to believe we are nowhere near the point where it makes sense to deploy LLMs at a scale that would justify optimizing code for them. They make far too many mistakes. And you can always reorganize the code when you send it to them - which is what happens anyway, since it all goes into a flat context as part of the stream of input tokens.

            Additionally

            - a number of issues will arise around how specific languages work, because their toolchains expect code to be organized in a specific way

            - the snippet-assembly phase with LLMs will introduce non-determinism into the software build process, which should in itself give us pause about the approach. Reproducibility and provenance are important to the software supply chain.