radarsat1 2 days ago

Regarding the typewriter approach, I've wondered for a while if anyone has explored simple backtracking with LLMs? Like, let the LLM generate a backspace/delete token that "undoes" previously generated tokens, but in an append-only fashion. Not sure how this would work with teacher forcing, but it seems feasible with RL.

  • mattnewton a day ago

    So, not exactly the same thing at all, but the ARChitects do a really cool thing I didn't have time to talk about in this post, which is a kind of depth-first search with a cumulative "minimum probability" threshold for backing out of a path. This lets the model reason ahead a few tokens, then back out if it doesn't look like it's going well and try the next most likely token. https://github.com/da-fr/arc-prize-2024/blob/main/training_c...

    You can imagine something like that for any autoregressive LLM, but it probably needs some heavy heuristics. Here there are only about 11 valid tokens (end of line, 1-9, or end of sequence); other use cases are going to have way more options, making this much less tractable.
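
    Roughly, the shape of it is something like this (just a sketch, not their actual implementation, which is at the repo link; next_token_logprobs here is a stand-in for a call into the model):

      def dfs_decode(prefix, next_token_logprobs, min_logprob=-5.0, max_len=64):
          # Depth-first decoding: follow candidate tokens in order of probability
          # and back out of any path whose cumulative log-prob falls below a threshold.
          results = []

          def expand(seq, score):
              if score < min_logprob:                 # path got too unlikely: back out
                  return
              if (seq and seq[-1] == "<eos>") or len(seq) >= max_len:
                  results.append((seq, score))
                  return
              logprobs = next_token_logprobs(prefix + seq)   # model call
              for tok, lp in sorted(logprobs.items(), key=lambda kv: -kv[1]):
                  expand(seq + [tok], score + lp)     # most likely continuation first

          expand([], 0.0)
          return results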

  • _diyar 2 days ago

    With current LLMs, this is meaningless, because the current state is stored entirely in the "context" (system prompt, user prompt, chat output so far). So if you apply a backspace token, you just end up where you started a second ago.

    I.e., at state A you decided to append token i to move to state B. Removing token i just sets you back to state A, where you would again just pick token i. (This ignores the small probabilistic component of next-token sampling.)

    In the RL/reasoning world of LLMs, you can instead just reward the correct final output without policing the reasoning steps, and a strong model should learn to backtrack on its "thoughts" as appropriate (without removing them from the context).

    Edit: wording.

    • saurik 2 days ago

      I think the idea is that the backspace would be a token, indelibly in the history, since it is something that happened: if you record editor traces, the fact that I previously typed something and chose to delete it matters for my current state.
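
      One way to picture it (a toy sketch; "<bksp>" is a hypothetical special token):

        BKSP = "<bksp>"

        def effective_text(history):
            # Interpret an append-only stream: a <bksp> removes the previous
            # effective token, but the full history (mistake + backspace)
            # remains the context the model conditions on.
            out = []
            for tok in history:
                if tok == BKSP:
                    if out:
                        out.pop()
                else:
                    out.append(tok)
            return out

        history = ["The", "answer", "is", "7", BKSP, "12"]
        print(effective_text(history))  # ['The', 'answer', 'is', '12']
        # The model still sees all six tokens, so "7 was tried and retracted"
        # is part of its state, unlike literally deleting "7" from the context.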

      • cchance a day ago

        I do think that might be useful: it might help the LLM realize that it already made a mistake, and that the mistake, and the memory of that mistake, still exists rather than just being erased from its context.

      • radarsat1 2 days ago

        Exactly what I had in mind. After generating a few wrong tokens, perhaps the model could realize it's on a low-probability path and have a way to "go back" while staying in context. The parent is right, though, that thinking models can kind of do that without a special token; I hadn't thought about that, nice observation.

        • namibj a day ago

          Just a reminder that transformers have no inherent "ordering/sequence" concept until you feed one in via positional encoding. It'd be easy to flag retracted tokens as such (e.g. by rotating an input token one way or the other, similar to how RoPE encodes position as a directional modulation/wobble), while otherwise representing the malleable edit state through the positional encoding and accepting overlap (just make sure autoregressive decoding / causal self-attention lets tokens interact preferentially with their immediate neighbors _of the same attempt/edit-revision_).
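
          A toy sketch of what I mean by overlapping positions (purely hypothetical bookkeeping, not any existing model's scheme; a real model would feed pos into RoPE and carry rev/retracted as an extra rotation direction or feature):

            BKSP = "<bksp>"

            def positions_with_overlap(stream):
                # Assign (pos, rev, retracted) to each content token in an append-only
                # stream where <bksp> retracts the most recent live token. Retracted
                # tokens keep their position; replacements reuse it with a bumped rev.
                out = []      # one dict per content token
                live = []     # indices into out for tokens not yet retracted
                for tok in stream:
                    if tok == BKSP:
                        if live:
                            out[live.pop()]["retracted"] = True
                        continue
                    pos = len(live)
                    rev = sum(1 for t in out if t["pos"] == pos)
                    out.append({"tok": tok, "pos": pos, "rev": rev, "retracted": False})
                    live.append(len(out) - 1)
                return out

            for t in positions_with_overlap(["1", "2", "9", BKSP, BKSP, "5", "3"]):
                print(t)
            # "2" and "9" stay in the input, flagged retracted, at positions 1 and 2;
            # "5" reuses position 1 with rev=1 and "3" reuses position 2 with rev=1.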

  • klintcho 5 hours ago

    How would it be different from regular beam search?

  • dev_hugepages 2 days ago

    Research on the backspace token: https://arxiv.org/abs/2306.05426

    > [...] The IL framework also allows us to incorporate backtracking by introducing a backspace action into the generation process [...]

    • radarsat1 2 days ago

      Very interesting paper, even setting aside the backspace stuff, thanks for the link. Pretty cool how it seems to tie in with more recent work on applying pure RL to LLM training.

mNovak 2 days ago

Really interesting to see the diffusion model solve the puzzles in an iterative way, which feels more similar to how I (and probably most humans) solve them.

Outwardly, it seems to be limited by unmasking too few tokens per round, even when the heatmap shows many more high-confidence guesses available. On some of the larger puzzles it looks like it's wasting many rounds filling in the 'obvious' shapes, and then gets the interesting bit in the last round. It also doesn't seem to have learned the idea of "the background is blue with shapes drawn on top," where background is often 50% of the solution in these puzzles.
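
I'd be curious whether a simple per-round confidence threshold would already help; a rough numpy sketch of what I mean (I don't know what unmasking schedule the post's model actually uses):

  import numpy as np

  def cells_to_unmask(probs, masked, threshold=0.9):
      # probs:  (num_cells, vocab) distribution from the denoiser this round
      # masked: boolean array, True where a cell is still masked (at least one)
      # Commit every masked cell whose top probability clears the threshold,
      # instead of a fixed small number per round; always take at least one
      # so the loop makes progress.
      confidence = np.where(masked, probs.max(axis=-1), -np.inf)
      chosen = masked & (confidence >= threshold)
      if not chosen.any():
          chosen[np.argmax(confidence)] = True
      return chosen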

  • namibj a day ago

    Sadly, to really cut the iteration count you need a retraction/correction mechanism, so the diffusion isn't locked in on a bad early choice.

twotwotwo 2 days ago

It is kind of wild that most coding tasks are editing tasks, and we humans care a lot about code editing tools, yet automated tools use code generation for editing, where a valid block must be generated top-to-bottom in one go.

Fixing a mistake requires re-generating the file or block of code. Or, if something generated later has implications for earlier code (a new import or function parameter is required, something like that), the only option is to go back and re-generate a big chunk. That'd be inefficient for humans, and it's not implausible that it's the wrong approach for other code generators too.

I don't know if diffusion specifically will be the approach. (Maybe there's something to generating edit sequences?) This post's note that diffusion kills KV caching is something I hadn't even considered. It does seem right to experiment with things other than strict start-to-end generation.
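
For what it's worth, "edit sequences" could be as simple as having the model emit ops against the existing file rather than re-emit the file; a hypothetical sketch (not how any particular tool represents edits):

  def apply_edits(lines, edits):
      # edits: ("insert", at, new_lines) or ("replace", start, end, new_lines),
      # with indices into the original file. A model emitting these only needs
      # to generate the changed regions, not the whole file top to bottom.
      out, cursor = [], 0
      for op in sorted(edits, key=lambda e: e[1]):
          if op[0] == "insert":
              _, at, new_lines = op
              out += lines[cursor:at] + new_lines
              cursor = at
          else:  # "replace"
              _, start, end, new_lines = op
              out += lines[cursor:start] + new_lines
              cursor = end
      out += lines[cursor:]
      return out

  original = ["import os", "", "def main():", "    print('hi')"]
  edits = [("insert", 1, ["import sys"]),
           ("replace", 3, 4, ["    print(sys.argv)"])]
  print("\n".join(apply_edits(original, edits)))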

  • imtringued 2 days ago

    A massive problem with current-generation LLMs is that they have a single globally ordered context, and the model is only allowed to append to it.

    This is like having a single-tape Turing machine. It can simulate a multi-tape machine, but with O(n^2) overhead.

    The computation budget of an LLM is finite, so this has a massive practical impact.

    • cubefox 2 days ago

      The article explains that this is not a problem but an advantage.

      • Sabinus 4 hours ago

        Apart from the execution speedup due to caching, how does the article explain that this is an advantage?

  • namibj 2 days ago

    You can still cache prompts; this only affects the cache for tokens produced during generation. And that's fairly harmless, relatively speaking.

    • yorwba 2 days ago

      If you completely do away with autoregression, prompt tokens can attend to generated tokens, so even the prompt tokens' KV vectors change at every step and you cannot cache anything.

      For this reason, models that generate text using diffusion typically generate blocks of tokens at a time, where tokens within a block freely attend to each other, but across blocks there's causal masking so that each block only depends on the preceding ones and we're back to autoregression again. That makes caching possible, but also means you still can't have diffusion change the beginning of a long text to match the end.
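
      The mask for that pattern looks roughly like this (a sketch of block-causal attention, not any particular model's code):

        import numpy as np

        def block_causal_mask(num_tokens, block_size):
            # Tokens attend bidirectionally within their own block and causally to
            # all earlier blocks, never to later ones; this is what lets the KV
            # cache for finished blocks be reused.
            block_id = np.arange(num_tokens) // block_size
            return block_id[:, None] >= block_id[None, :]   # True = may attend

        print(block_causal_mask(6, 2).astype(int))
        # [[1 1 0 0 0 0]
        #  [1 1 0 0 0 0]
        #  [1 1 1 1 0 0]
        #  [1 1 1 1 0 0]
        #  [1 1 1 1 1 1]
        #  [1 1 1 1 1 1]]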

      • namibj a day ago

        I specifically mean prompts here, and I don't mean they'd have causal attention. Just run an encoder pass to pre-fill the KV cache for the prompt, then do non-causal diffusion generation of the response, referencing the cached prompt without re-encoding it.

        You don't need to fall back to chunks to enjoy prompt caching, especially if you use it in a RAG-type way with minor provisions to allow KV-caching the RAG fragments (a bunch of work has been done on that; IIRC even DeepSeekV3 would allow it).
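
        Concretely, the pattern is something like this (a numpy toy just to show where the caching lives; the denoising update itself is a stand-in):

          import numpy as np

          def softmax(x):
              x = x - x.max(axis=-1, keepdims=True)
              e = np.exp(x)
              return e / e.sum(axis=-1, keepdims=True)

          d = 16
          rng = np.random.default_rng(0)
          Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

          # Encode the prompt once (non-causal encoder pass) and cache its K/V.
          prompt = rng.standard_normal((20, d))           # 20 prompt "tokens"
          prompt_k, prompt_v = prompt @ Wk, prompt @ Wv   # reused at every step

          resp = rng.standard_normal((8, d))              # noisy response, 8 slots
          for step in range(4):                           # denoising loop
              q = resp @ Wq
              k = np.concatenate([prompt_k, resp @ Wk])   # fresh K/V only for the response
              v = np.concatenate([prompt_v, resp @ Wv])
              attn = softmax(q @ k.T / np.sqrt(d))        # response attends to prompt
              resp = attn @ v                             # + itself, bidirectionally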

gen3 2 days ago

Incredibly cool work, and a great primer on diffusion

  • ProofHouse 2 days ago

    Are you aware of more in-depth material outside of research papers? I've mostly read those already.