Hey, this was quite a long read. But the outcome seemed, to me as a software engineer, quite obvious.
I would never use any workflow where there is no readable source code artifact as an outcome.
First of all, for humans to understand it and to avoid cognitive debt.
But also for the AI in later sessions and adjustments: an AI needs good, readable code to understand the codebase in the long run.
I agree, but I guess another point I didn't make clear is that the feedback is not the same as the code. We cannot assume that all the feedback can be embodied in the code, and even if we could, my ultimate point is that we need systems that can learn on the fly, and current systems can't.
Take an intern, for example: they learn on the fly, and they do this in a compounded fashion where previous learnings inform new ones. Current systems can't do this. The only way to do it effectively would be to change their weights in real time, but we don't know how to do that yet.
Yes, I guess with current AI we don't have this kind of learning. What works best for me is to store important learnings in an instructions file or skill. These days I also always generate some extra documentation about the overall design.
This helps to get the AI into the loop for the next iteration or feature.
But yes, this is limited and also consumes some of the scarce context window.
And as you said, we cannot do this kind of learning via weights with our current LLMs.
I guess it's all useful, but it's also quite different from human thinking. We have to guide it here to get the best out of it.
100%.
I will update the post to better illustrate my point and to tie the intervention point and the continual-learning point together. Thanks for the feedback.
Edited: I have updated the post.