A different approach, which also works with Llama 3 8B: have the LLM write the tool invocation of choice in Python syntax, then parse the LLM response with Python's `ast` module.
Code (a PoC-style hackathon demo) here: https://github.com/ndurner/aileen2.
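A minimal sketch of the parsing side of that approach, assuming the model emits a single call expression (the `get_weather` tool and its signature are invented for illustration, not from the linked repo):

```python
import ast

# Hypothetical tool registry; names and signatures are assumptions for the sketch.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"

TOOLS = {"get_weather": get_weather}

def parse_tool_call(llm_output: str):
    """Parse one call expression like get_weather(city="Berlin") safely,
    without eval(): only literal arguments are accepted."""
    tree = ast.parse(llm_output.strip(), mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call) or not isinstance(call.func, ast.Name):
        raise ValueError("LLM output is not a simple tool call")
    name = call.func.id
    args = [ast.literal_eval(a) for a in call.args]
    kwargs = {kw.arg: ast.literal_eval(kw.value) for kw in call.keywords}
    return name, args, kwargs

name, args, kwargs = parse_tool_call('get_weather(city="Berlin")')
result = TOOLS[name](*args, **kwargs)
```

Using `ast.literal_eval` on the argument nodes keeps this from executing arbitrary model-written code; anything beyond literal arguments raises an exception instead.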
Ollama now enables tool support for a variety of models as well, including Llama 3.1: https://ollama.com/blog/tool-support
So do most LLMs nowadays, no?
Kind of. Those that are explicitly trained to do that with consistent formats will do it better. They'll also save you the extra tokens needed to explain the format/method of interacting with functions. But yeah, you can simulate this with any recent model and enough explanation.
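The "extra tokens" cost of simulating function calling shows up as a format-explaining preamble in the prompt. A minimal sketch, assuming a JSON call format and an invented `get_weather` tool (neither is any particular model's native scheme):

```python
import json

# Hypothetical tool schema; a model trained for native function calling
# would not need this explanatory preamble spelled out in tokens.
tool_schema = {
    "name": "get_weather",
    "description": "Look up the current weather for a city",
    "parameters": {"city": "string"},
}

system_prompt = (
    "You can call tools. To call one, reply with ONLY a JSON object "
    'of the form {"tool": <name>, "arguments": {...}}.\n'
    "Available tools:\n" + json.dumps(tool_schema, indent=2)
)

# A simulated model reply in the explained format, parsed back by the client:
reply = '{"tool": "get_weather", "arguments": {"city": "Berlin"}}'
call = json.loads(reply)
```

A natively-trained model ships with a consistent call format baked in, so the schema listing can be shorter and the output format needs no explanation.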