rafram 2 days ago

It’s an interesting idea, but still way too unreliable to use in production IMO. When a traditional OCR model can’t read the text, it’ll output gibberish with low confidence; when a VLM can’t read the text, it’ll output something confidently made up, and it has no way to report confidence. (You can ask it to, but the number will itself be made up.)

I tried using a VLM to recognize handwritten text in genealogical sources, and it made up names and dates that sort of fit the vibe of the document when it couldn’t read the text! They sounded right for the ethnicity and time period but were entirely fake. There’s no way to ground the model using the source text when the model is your OCR.

  • themanmaran 2 days ago

    Thing is, the majority of OCR errors aren't character issues but layout issues: things like complex tables with cells being returned under the wrong header. If the numbers in an income statement are one column off, that creates a pretty big risk.

    Confidence intervals are a red herring, and only as good as the code interpreting them. If the OCR model gives you back 500 words all ranging from 0.70 to 0.95 confidence, what do you do? Reject the entire document if there's a single value below 0.90?

    If so, you'd be passing every single document to human review and might as well not run the OCR at all. But if you're not rejecting based on CI, then you're exposed to just as much risk as using an LLM.

    • anon373839 2 days ago

      > But if you're not rejecting based on CI, then you're exposed to just as much risk as using an LLM.

      That's not true. LLMs and OCR have very different failure modes. With LLMs, there is unbounded potential for hallucination, and the entire document is at risk. For example: if something in the lower right-hand corner of the page takes the model to a sparsely sampled part of the latent space, it can end up deciding that it makes sense to rewrite the document title! Or anything else. LLMs also have a pernicious habit of "helpfully" completing partial sentences that appear at the beginning or end of a page of text.

      With OCR, errors are localized and have a greater chance of being detected when read.

      I think for a lot of cases, the best solution is to fine-tune a model like LayoutLM, which can classify the actual text tokens in a document (whether obtained from OCR or a native text layer) using visual and spatial information. Then, there are no hallucinations and you can use uncertainty information from both the OCR (if used) and the text classification. But it does mean that you have to do the work of annotating data and training a model, rather than prompt engineering...
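
      A minimal sketch of that approach with Hugging Face Transformers (the fine-tuned checkpoint name and the 0.9 review threshold are assumptions, not real artifacts):

        import torch
        from PIL import Image
        from transformers import AutoProcessor, LayoutLMv3ForTokenClassification

        # apply_ocr=True makes the processor run Tesseract for words + boxes;
        # you can instead pass words/boxes from a native text layer.
        processor = AutoProcessor.from_pretrained(
            "microsoft/layoutlmv3-base", apply_ocr=True
        )
        model = LayoutLMv3ForTokenClassification.from_pretrained(
            "your-org/layoutlmv3-finetuned"  # hypothetical fine-tuned checkpoint
        )

        enc = processor(Image.open("page.png").convert("RGB"), return_tensors="pt")
        with torch.no_grad():
            logits = model(**enc).logits  # (1, seq_len, num_labels)

        probs = logits.softmax(-1)
        conf, labels = probs.max(-1)  # per-token label plus usable confidence
        needs_review = conf < 0.9     # route uncertain tokens to a human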

      • tensor a day ago

        100% this, combining traditional OCR with VLMs that can work with bounding boxes so that you can correlate the two is the way to go.

    • tensor 2 days ago

      Having experience in this area (audit, legal): confidence intervals are essential. No, you don't end up "passing every single document" to human review; that's made-up nonsense. But confidence intervals can pretty easily flag poorly OCR'd documents, and then, yes, those are handled by human review.

      If you try to pitch hallucinations to these fields, they'll just choose 100% manual instead. It's a non-starter.

      • xattt 2 days ago

        I work in a health-insurance-adjacent field. I can see my work going the way of the dodo as soon as VLMs take off at interpreting historical health records with physicians’ handwriting.

        • gtirloni 2 days ago

          So never, considering their handwriting :)

          That being said, all doctors I have consulted with in the past year or so have used signed electronic prescriptions.

    • bayindirh 2 days ago

      The thing is, with traditional OCR, regardless of the confidence number, you can scan the document and mark grammatical errors.

      With VLM/LLM-powered methods, the missing/misread data will be hallucinated, and you can't know whether something was scanned correctly or not. I personally scan and OCR tons of personal documents, and I prefer "gibberish" to "hallucinations", because gibberish is easier to catch.

      We've had this problem before [0], on some Xerox scanners and copiers. The results can be disastrous. It's not a question of if, but when.

      I personally tried Gemini's and OpenAI's models for OCR, and no, I won't be using them further.

      [0]: https://www.theregister.com/2013/08/06/xerox_copier_flaw_mea...

    • rafram 2 days ago

      Then use an LLM to extract layout information. Don’t trust it to read the text.

      > If the OCR model gives you back 500 words all ranging from 0.70 to 0.95 confidence, what do you do? Reject the entire document if there's a single value below 0.90?

      No, of course not. You have a human review the words/segments with low confidence.
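
      With Tesseract, that routing takes only a few lines; a sketch (the threshold of 90 is arbitrary):

        import pytesseract
        from PIL import Image
        from pytesseract import Output

        data = pytesseract.image_to_data(
            Image.open("page.png"), output_type=Output.DICT
        )
        # Tesseract reports per-word confidence 0-100 (-1 for non-word boxes).
        for word, conf in zip(data["text"], data["conf"]):
            if word.strip() and float(conf) < 90:
                print(f"needs review: {word!r} (conf={conf})")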

    • sudoshred a day ago

      That's assuming confidence intervals are even comparable across services. Anecdotally, major OCR services produce average confidence values on specific languages that are wildly divergent from similar services on different languages, for the same relative quality of result. Acting as if a confidence interval is in any way absolute, or otherwise able to reliably and consistently indicate the relative quality of results, is a mischaracterization at best; in the worst case, the CI is as good as an RNG. The value of the CI is in the ability to tune usage of the results based on observations of the users and the characteristics of the request; sometimes it's meaningful, but not always. In practice, "good" code essentially hardcodes handling for all the idiosyncrasies of the common usage and of the OCR service.

  • EarlyOom 2 days ago

    This is the main focus of VLM Run and typed extraction more generally. If you provide proper type constraints (e.g. with Pydantic) you can dramatically reduce the surface area for hallucination. Then there's actually fine-tuning on your dataset (we're working on this) to push accuracy beyond what you get from an unspecialized frontier model.
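
    Concretely, a sketch of the kind of Pydantic schema you'd constrain extraction to (field names are illustrative):

      from pydantic import BaseModel

      class Invoice(BaseModel):
          vendor: str | None = None          # optional: null beats a guess
          invoice_number: str | None = None
          total_cents: int | None = None     # integer cents rejects "1,2O0.00"

    Passed as the response schema for structured output, this guarantees the result parses; as the reply below notes, it can't by itself stop a model from inventing a plausible value.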

    • rafram 2 days ago

      Re type constraints: Not really. If one of the fields in my JSON is `name` but the model can’t read the name on the page, it will very happily make one up. Type constraints are good for making sure that your data is parseable, but they don’t do anything to fix the undetectable inaccuracy problem.

      Fine-tuning does help, though.

      • fzysingularity 2 days ago

        Yes, both false positives and false negatives like the one you mentioned happen when the schema is ill-defined. Making the name optional via `name: str | None` actually turns out to ensure that the model only fills it in if it's certain the field exists.

        These are some of the nuances we had to work with during VLM fine-tuning with structured JSON.

        • rafram 2 days ago

          You seem to be missing my point.

    • hashta 2 days ago

      An effective way to increase accuracy is to use an ensemble of capable models that were trained independently (e.g., Gemini, GPT-4o, Qwen). If >x% of them produce the same output, accept it; otherwise, reject and review manually.
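
      A sketch of that vote, done per field rather than on whole documents (exact whole-page matches are rare):

        from collections import Counter

        def vote(extractions, min_agree=2):
            # `extractions`: one dict per model, same keys across models.
            accepted, flagged = {}, []
            for key in extractions[0]:
                value, count = Counter(e[key] for e in extractions).most_common(1)[0]
                if count >= min_agree:
                    accepted[key] = value
                else:
                    flagged.append(key)  # route to manual review
            return accepted, flagged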

      • rafram 2 days ago

        There's a very low chance that three separate models will come up with the same result. There are always going to be errors, small or large. Even if you find a way around that, running the process three times on every page is going to be prohibitively expensive, especially if you want to fine-tune.

        • vintermann 2 days ago

          No, running it two or three times for every page isn't prohibitive. In fact, one of the arguments for using modern general-purpose multimodal models for historical HTR is that it is cheaper and faster than Transkribus.

          What you can do, for instance, is ask one model for a transcription, then ask a second model to compare the transcription to the image and correct any errors it finds. You actually have a lot of budget to try things like this if the alternative is fine-tuning your own model.
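
          A sketch of that two-pass setup, with the model calls left as generic placeholders for whatever API you use:

            def transcribe_and_verify(image, transcriber, verifier):
                # Pass 1: one model produces a draft transcription.
                draft = transcriber(image)
                # Pass 2: a second model sees the image *and* the draft and is
                # asked only to correct errors, not to re-transcribe from scratch.
                prompt = (
                    "Compare this transcription to the image and return a "
                    "corrected version, changing only words that are wrong:\n"
                    + draft
                )
                return verifier(image, prompt)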

        • jjk166 a day ago

          The odds of them getting the same result for any given patch should be very high if it's the correct result and the models aren't garbage. The only times they won't agree are when at least one has made a mistake, and the odds of three different models making the same mistake should be low (unless it's something genuinely ambiguous, like 0 vs. O in a random alphanumeric string).

          Best two out of three should be far more reliable than any model on its own. You could even weight their responses for different types of content: if, say, model B is consistently better on serif fonts, its confidence could count for 1.5 times as much as that of models A and C.

    • refulgentis 2 days ago

      That's not OCR.

      It is an absolute miracle.

      It is transmuting a picture into JSON.

      I never thought this would be possible in my lifetime.

      But that is different from what your interlocutor is discussing.

      • 1024core 2 days ago

        > I never thought this would be possible in my lifetime.

        I used to work in Computer Vision and Image Processing. These days I utter this sentence on an almost daily basis. :-D

  • constantinum 2 days ago

    The primary issue with LLMs is hallucination, which can lead to incorrect data and flawed business decisions.

    For example, Llamaparse(https://docs.llamaindex.ai/en/stable/llama_cloud/llama_parse...) uses LLMs for PDF text extraction but faces hallucination problems. See this issue for more details: https://github.com/run-llama/llama_parse/issues/420.

    For those interested, try LLMWhisperer(https://unstract.com/llmwhisperer/) for OCR. It avoids LLMs, eliminates hallucination issues, and preserves the input document layout for better context.

    Examples of extracting complex layout:

    https://imgur.com/a/YQMkLpA

    https://imgur.com/a/NlZOrtX

    https://imgur.com/a/htIm6cf

    • Hackbraten 2 days ago

      > try LLMWhisperer(https://unstract.com/llmwhisperer/) for OCR. It avoids LLMs

      The website you linked says it uses LLMs?

      • constantinum 2 days ago

        The tool doesn't use any LLMs for processing/parsing the data. It parses the document and converts it into raw text.

        The final output (raw text) of the parsing is then fed to LLMs for data extraction, e.g. extracting data from insurance, banking, and invoice documents.

    • ungerik 2 days ago

      Those images look exactly like what you get from every OCR tool out there if you use the XY information.

  • KoolKat23 2 days ago

    I've been using Gemini 2.0 Flash to extract financial data. Within my sample, which is admittedly small (probably 1,000 entries so far), I've had only a single error, so roughly a 99.9% success rate.

    (There are slightly more errors if I ask it to add numbers, but that isn't OCR and is a bit more of a reach, although it's very good at this too.)

    Many hallucinations can be avoided by telling it to use null if there is no number present.
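
    E.g., the kind of instruction I mean (a sketch; the wording is illustrative):

      prompt = """Extract the line items from this statement as JSON:
      [{"description": string, "amount": number | null}, ...]
      If a cell is blank or unreadable, use null. Never guess a number."""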

    • CarolineRommer a day ago

      And by using two different systems (say, Gemini plus ChatGPT), you essentially reduce the chance of hallucination to zero, no? You would need to be VERY unlucky to find two LLMs hallucinating the exact same response.

  • the8472 a day ago

    Shouldn't confidence be available at the sampler level and also be conditional on the vision input, not just the next-token prediction?

  • delichon 2 days ago

    How about calculating confidence in terms of which output regions are stable across multiple tries on the same input? Expensive, but hallucinations should be more variable and fuzzier, on average, than high-confidence regions.
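
    Something like this, say (a sketch; aligning words by position is crude, and a real version would align runs with difflib or similar):

      from collections import Counter

      def stability_scores(image, run_model, n=5):
          # Sample the same input several times at nonzero temperature.
          runs = [run_model(image).split() for _ in range(n)]
          shortest = min(len(r) for r in runs)
          scores = []
          for i in range(shortest):
              word, count = Counter(r[i] for r in runs).most_common(1)[0]
              scores.append((word, count / n))  # agreement rate as confidence
          return scores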

  • staticman2 2 days ago

    I think it would be pretty reliable in controlled circumstances. If I take a picture of a book with my cell phone, Google Gemini Pro is much better at recognizing the text than Samsung's built-in OCR.

    • Grimblewald 2 days ago

      I would think the same; the cause for hesitation is that we only think it, and cannot know it without thorough testing. Right now, the scope of problems where these models behave reliably and as expected, and the scope where things get wacky, are both unknown. The borders are known only to some rather fuzzy extent, at best, by people who work with these things as a full-time job. This means we are just blindly gambling on it. For important things, archiving, etc., where truth matters, I will continue using traditional OCR until we can better define the reliable use-case scope of LLM-based OCR. I am extremely enthusiastic about LLMs and the things they offer, but I am also a realist. LLMs are an infant technology, nowhere near the level of maturity that companies like OpenAI claim.

  • j_bum 2 days ago

    This is naive, but can you ask the model to provide a confidence rating for sections of the document?

    • thatjoeoverthr 2 days ago

      More broadly, it's not trained to have any self-awareness, and this is a factor in other "hallucinations". If you ask it, for example, to describe the "marathon crater", it doesn't recognize that there's no such thing in its corpus; it will instead start writing an answer ("Sure! The marathon crater is...") and freestyle from there. Same if you ask it why it did something, or for details about itself, etc. You should access one directly (not through an app like ChatGPT) and build a careful suite of tests to learn more. Really fascinating.

      • _delirium a day ago

        Yes, there's research showing that models' self-assessments of their probabilities (when you ask them via prompting) don't even match the same models' actual probabilities, in cases where you can measure those directly (e.g. by looking at the logits): https://arxiv.org/abs/2305.13264
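
        You can read off the measured side with any API that exposes logprobs, e.g. with the OpenAI Python SDK (a sketch; the prompt is just an example):

          import math
          from openai import OpenAI

          client = OpenAI()
          resp = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": "Is 1013 prime? yes or no"}],
              logprobs=True,
              max_tokens=1,
          )
          tok = resp.choices[0].logprobs.content[0]
          # The model's actual probability for its answer token, which often
          # disagrees with the confidence it *states* when asked in a prompt.
          print(tok.token, math.exp(tok.logprob))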

        • anon291 a day ago

          Logits are not probabilities... at least not in the way you understand probability. Mathematically, a probability is anything that broadly behaves like one, whereas colloquially, probabilities represent the likelihood or preponderance of a particular phenomenon. Logits are neither of those.

          • _delirium 20 hours ago

            The probability of token generation is a function of the logits. Do you have an actual point related to the linked paper?

            • anon291 9 hours ago

              That is one way of sampling tokens. It is not the only way. Logits do not map neatly to belief, although it is convenient to behave as if they do.

    • UnlockedSecrets 2 days ago

      You can ask, but the number will be made up, not grounded in reality.

      • j_bum 2 days ago

        Sure, but I’m curious if it would serve to provide some self-regulation.

        E.g., all of this "thinking" trend that's happening. It would be interesting if the model did a first pass, scored its individual outputs, then reviewed its scores and censored/flagged the ones that are low.

        I know it’s all “made up”, but generally I have a lot of success asking the model to give 0-1 ratings on confidence for its answers, especially for new niche questions that are likely out of the training set.

        • rafram 2 days ago

          It doesn’t. Asking for confidence doesn’t prompt it to make multiple passes, and there’s no real concept of “passes” when you’re talking about non-reasoning models. The model takes in text and image tokens and spits out the text tokens that logically follow them. You can try asking it to think step by step, or you can use a reasoning model that essentially bakes that behavior into the training data, but I haven’t found that to be very useful for OCR tasks. If the encoded version of your image doesn’t resolve to text in the model’s latent space, it never will, no matter how much the model “reasons” (spits out intermediate text tokens) before giving a final answer.

    • ttyprintk 2 days ago

      It's not naive; Tesseract does this.

      • rafram 2 days ago

        Tesseract doesn’t use an LLM. LLMs don’t know how confident they are; Tesseract’s model does.

        • touisteur 2 days ago

          With most machine learning algorithms, I used to get Shapley values or other "explainable AI" metrics (at a large cost compared to simple inference, yes); it's very unsettling and frustrating to work without them now on LLMs.

        • hansvm 2 days ago

          Kind of. Tesseract's confidence is just a raw model probability output. You could easily use the entropy associated with each token coming out of an LLM to do the same thing.
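
          E.g., a sketch, assuming you have the logits from a Hugging Face causal LM forward pass:

            import torch

            def token_entropies(logits):
                # logits: (seq_len, vocab_size) over the generated tokens.
                logp = torch.log_softmax(logits, dim=-1)
                return -(logp.exp() * logp).sum(-1)  # Shannon entropy per token

            # High-entropy positions are the LLM analogue of Tesseract's
            # low-confidence words: candidates to flag for review.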

          • rafram 2 days ago

            True, but LLM token probability doesn't map nearly as cleanly to "how readable was the text".

            • hansvm a day ago

              Why not though? Both kinds of models jumble around the data and spit out a probability distribution. Why is the tesseract distribution inherently more explainable (aside from the UI/UX problem of the uncertainty being per-token instead of per-character)?

  • cratermoon 2 days ago

    Agree wholeheartedly. Modern OCR is astonishingly good and, more importantly, deterministically so. Its failure modes, when it's unable to read the text, are recognizably failures.

    Results for VLM accuracy & precision are not good. https://arxiv.org/html/2406.04470v1#S4

    • VeejayRampay a day ago

      which solutions would you classify as "modern OCR"

      are we talking tesseract or something?

      • criddell a day ago

        Probably something like Apple Vision Framework or Amazon Textract or Google's Cloud Vision.

        Tesseract does well under ideal conditions, but the world is messy.

        • cratermoon a day ago

          I was thinking ABBYY FineReader, but those too. Instead of using VLMs or any sort of generative AI, they're built on good old-fashioned feature extraction and nearest-neighbor classifiers such as the k-nearest-neighbors algorithm. It's possible to build a working prototype of this technique using basic ML algorithms.

themanmaran 2 days ago

We recently published an open source benchmark [1] specifically for evaluating VLM vs OCR. And generally the VLMs did much better than the traditional OCR models.

VLM highlights:

- Handwriting. Being contextually aware helps here. i.e. they read the document like a human would, interpreting the whole word/sentence instead of character by character

- Charts/Infographics. VLMs can actually interpret charts or flow diagrams into a text format. Including things like color coded lines.

Traditional OCR highlights:

- Standardized documents (e.g. US tax forms that they've been trained on)

- Dense text. Imagine textbooks and multi-column research papers. This is the easiest OCR use case, but VLMs really struggle as the number of output tokens increases.

- Bounding boxes. There still isn't really a model that gives super precise bounding boxes. Supposedly Gemini and Qwen were trained for it, but they don't perform as well as traditional models.

There's still a ton of room for improvement, but especially with models like Gemini the accuracy/cost is really competitive.

[1] https://github.com/getomni-ai/benchmark

  • fzysingularity 2 days ago

    Saw your benchmark, looks great. We'll run our models against that benchmark and share some of our learnings.

    As you mentioned, there are a few caveats to VLMs that folks are typically unaware of (not at all exhaustive, but the ones you highlighted):

    1. Long-form text (dense): output token limits of 4K/8K mean that dense pages may exceed what the LLM can emit. This requires some careful work to make VLMs work as seamlessly as OCR.

    2. Visual grounding, a.k.a. bounding boxes, is definitely one of those things that VLMs aren't natively good at (partly because the cross-entropy losses used aren't really geared for bounding-box regression). We're definitely making some strides here [1] to improve that, so you're going to get an experience that is almost as good as native bounding-box regression (all within the same VLM).

    [1] https://colab.research.google.com/github/vlm-run/vlmrun-cook...

rendaw 2 days ago

Why do all these OCR services only show examples with flawless screenshots of digital documents? Are there that many people trying to OCR digital data? Why not just copy the HTML?

If it's not intended for digital documents, where are the screenshots with fold marks, slipping lines, lighting gradients, thumbs in the frame, and so on?

ekidd 2 days ago

I've been experimenting with vlm-run (plus custom form definitions), and it works surprisingly well with Gemini 2.0 Flash. Costs, as I understand, are also quite low for Gemini. You'll have best results with simple to medium-complexity forms, roughly the same ones you could ask a human to process with less than 10 minutes of training.

If you need something like this, it's definitely good enough that you should consider kicking the tires.

orliesaurus 2 days ago

I think OCR tools are good at what it says on the box: recognizing characters on a piece of paper. If I understand this right, the advantage of using a vision language model is the added logic, so you can say things like: "Clearly this is a string, but does it look like a timestamp or something else?"

  • EarlyOom 2 days ago

    VLMs are able to take context into account when filling in fields, following either a global or field-specific prompt. This is great for, e.g., unlabeled axes, checking a legend for units to append after a number, etc. You also catch lots of really simple errors with type hints (e.g. dates, addresses, country codes, etc.).

  • raxxorraxor 2 days ago

    This has always been part of the complete OCR package as far as I know. The raw result of OCR constantly fails to differentiate 1 l I i | and other similar symbols/letters.

    Maybe this necessary step can be improved and altered with a VLM. There is also the preprocessing step where the image gets its perspective corrected. Not sure how well a VLM performs here.

    As you said, I think combining these techniques will be the most efficient way forward.

  • vintermann 2 days ago

    You can also use it for robustness. Looking at e.g. historical censuses, it's amazing how many ways people found to not follow the written instructions for filling them out. Often the information you want is still there, but woe to you if you look at the columns one by one and assume the information in them is accurate and neatly within its bounding box.

BrannonKing 2 days ago

What I want: take a scan/photo of a document (including a full book), pass it to the language model, and then get out a LaTeX document that matches the original exactly (minus the copier/camera glitches and angles). I feel like some kind of reinforcement learning model would be possible for this: it should be able to learn to generate LaTeX that reproduces the exact image, pixel for pixel (learning which pixels are just noise).

  • NoMoreNicksLeft 2 days ago

    A big difficulty there is typeface detection; some of these were never digital fonts. And even if it could detect them, you likely don't have those fonts on your computer to be able to put the document back together as a digital typesetting, for any but the most trivial fonts.

    • retrorangular 2 days ago

      The tool could include all known open-source fonts, and for the rest, maybe have a model recreate the missing fonts, as long as they aren't patented: while font files (.ttf, .otf, .woff, etc.) are copyrighted, typeface styles usually do not have design patents, so tracing and re-creating them is usually not an issue as far as I'm aware (not a lawyer) [1].

      Though if it accidentally "traces" one of the few exceptions, then you've potentially committed a crime, and the big difficulty in typeface detection you mention increases those odds. That said, there are so few exceptions that even if the model couldn't properly identify a font, it might be able to identify whether a font is likely to have a design patent.

      I do think getting an AI to create a high quality vector font from a potentially low-res raster graphic is going to be quite challenging though. Raster to vector tools I've tried in the past left a bit to be desired.

      1. https://www.copyright.gov/comp3/chap900/ch900-visual-art.pdf

      > As a general rule, typeface, typefont, lettering, calligraphy, and typographic ornamentation are not registrable. 37 C.F.R. § 202.1(a), (e). These elements are mere variations of uncopyrightable letters or words, which in turn are the building blocks of expression. See id. The Office typically refuses claims based on individual alphabetic or numbering characters, sets or fonts of related characters, fanciful lettering and calligraphy, or other forms of typeface. This is true regardless of how novel and creative the shape and form of the typeface characters may be.

      > There are some very limited cases where the Office may register some types of typeface, typefont, lettering, or calligraphy, such as the following:

      > • Pictorial or graphic elements that are incorporated into uncopyrightable characters or used to represent an entire letter or number may be registrable. Examples include original pictorial art that forms the entire body or shape of the typeface characters, such as a representation of an oak tree, a rose, or a giraffe that is depicted in the shape of a particular letter.

      > • Typeface ornamentation that is separable from the typeface characters is almost always an add-on to the beginning and/or ending of the characters. To the extent that such flourishes, swirls, vector ornaments, scrollwork, borders and frames, wreaths, and the like represent works of pictorial or graphic authorship in either their individual designs or patterned repetitions, they may be protected by copyright. However, the mere use of text effects (including chalk, popup papercraft, neon, beer glass, spooky-fog, and weathered-and-worn), while potentially separable, is de minimis and not sufficient to support a registration.

      > The Office may register a computer program that creates or uses certain typeface or typefont designs, but the registration covers only the source code that generates these designs, not the typeface, typefont, lettering, or calligraphy itself. For a general discussion of computer programs that generate typeface designs, see Chapter 700, Section 723.

  • sva_ 2 days ago

    Did you try Mathpix? Not sure about full pages, but it's pretty good at equations.

erulabs 2 days ago

You sort of have to use both, OCR and LLM, and then correlate the two results. They are bad at very different things, but a subsequent call to a second LLM to pair up the results does improve quality significantly, plus you get both document understanding and context as well as bounding boxes, etc.
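
A sketch of that correlation pass (the VLM call is a generic placeholder):

  import pytesseract
  from PIL import Image
  from pytesseract import Output

  def hybrid_ocr(path, vlm):
      image = Image.open(path)
      d = pytesseract.image_to_data(image, output_type=Output.DICT)
      words = [
          {"text": t, "conf": float(c), "box": (x, y, x + w, y + h)}
          for t, c, x, y, w, h in zip(
              d["text"], d["conf"], d["left"], d["top"], d["width"], d["height"]
          )
          if t.strip()
      ]
      # Second pass: the VLM sees the image plus the OCR words/boxes and
      # reconciles disagreements, so you keep real bounding boxes.
      return vlm(image, words)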

I'm building a "never fill out paperwork again" app, if anyone is interested, would be happy to chat!

  • fzysingularity 2 days ago

    We think VLMs will outperform most OCR+LLM solutions in due time. I get that there's a need for these hybrid solutions today, but we're comparing 20+ years of mature tech against something that's roughly 1.5 years old.

    Also, VLMs are end-to-end trainable, unlike OCR+LLM solutions (which are trained separately), so it's clear that these approaches scale much better for domain-specific use cases or verticals.

  • K0balt 2 days ago

    A VLM that invokes OCR as a tool is a compelling idea that could produce pretty good results, I would expect.

  • cpursley 2 days ago

    Any tips on how to prompt that second pairing step? And what sorts of things to ask the LLM to extract in step 1?

leecarraher 2 days ago

Maybe it was my prompt, but there seems to be far too much interpretation after the image embedding. In my examples, it implicitly started to summarize parts of the text, unfortunately incorrectly. On an invoice with typed lettering, it claimed that payments submitted would not post for 2-3 business days, when in reality the text said that if you submitted after 2 p.m. on a Friday, the payment would not post until the following Monday. Which is significantly different. I'd be curious whether you could ablate those layers in some way, because the one-shot structured text detection/recognition was much better than vanilla OCR.

serjester 2 days ago

Good to see more work being done here, but I don't understand why this is tied to someone's proprietary API. Swapping model providers and adding some basic logging is not remotely painful enough to justify onboarding yet another vendor. Especially one that's handling something as sensitive as LLM prompts.

iLemming 2 days ago

What's the fastest and most accurate CLI OCR tool? My use case is simple: I want to be able to grab a piece of the screen (Flameshot is great for that) and OCR it. I need this for note-taking during pair programming over Zoom.

Currently I'm using Tesseract. It works and it's fast, but it also makes mistakes. It would also be great if it could discern tabular data and put it in ASCII or Markdown tables. I've tried Docling, but it feels like a bit of an overkill and seems slower; remember, I need to be able to grab the text from the screenshot very quickly. I have only tried default settings; maybe tweaking them would improve things.

Can anyone share some thoughts on this? Thanks!

  • acdha 2 days ago

    Anything using the Apple Vision framework is fast and surprisingly accurate:

    https://github.com/bytefer/macos-vision-ocr

    • cdolan 2 days ago

      Cool to see; I may use this locally for OCR in some cases. But I think the "handwriting" example is a little misleading: that's a font, not a scan of handwritten material.

    • wahnfrieden 2 days ago

      This uses the old APIs that are less accurate than the new Swift-only LiveText ones

  • ANighRaisin 2 days ago

    The AI OCR built into the Snipping Tool in Windows is better than Tesseract, albeit more inconvenient than something like PowerToys or Capture2Text, which use a quick shortcut.

temp0826 2 days ago

I've been looking for a solution to translate a dictionary for me. It's a Shipibo-Conibo (an indigenous Peruvian language) to Spanish dictionary; I'd like to translate the Spanish to English (and leave the Shipibo intact). Curious for any thoughts here. I have the dictionary as a PDF (already searchable, so I don't think it would need to be re-OCR'd... though that's possible too; it's not the clearest scan).

  • wrs 2 days ago

    I wouldn’t be surprised to find that Claude/ChatGPT/etc. can just…do that. With the prompt you just gave.

    The output could be in Markdown, which is easily turned into a PDF. You would have to break up the input PDF into pages to avoid running out of output context window.
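
    A sketch of that paging loop with pypdf (`translate_page` stands in for whatever model call you use):

      from pypdf import PdfReader

      reader = PdfReader("shipibo_dictionary.pdf")
      translated = []
      for page in reader.pages:
          text = page.extract_text()  # the PDF already has a text layer
          # Ask the model to translate only the Spanish glosses and to
          # leave the Shipibo-Conibo headwords untouched.
          translated.append(translate_page(text))  # hypothetical model call

      with open("dictionary_en.md", "w") as f:
          f.write("\n\n".join(translated))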

  • zzleeper 2 days ago

    By any chance, would it be possible to share the PDF? I haven't heard the Shipibo language in a long while, and am quite curious about it.

gfiorav 2 days ago

I wonder how the speed of this approach compares to traditional OCR techniques. Also, curious whether this could be used for text detection (finding a bounding box containing text within an image).

  • vunderba 2 days ago

    Was just coming here to say this: there does not yet exist a multimodal vision LLM approach that is capable of identifying bounding boxes for where the text occurs. I suppose you could manually cut the image up and send each part separately to the LLM, but that feels like a kludge, and it's still inexact.

    • EarlyOom 2 days ago

      We can do bounding boxes too :) we just call it visual grounding https://github.com/vlm-run/vlmrun-cookbook/blob/main/noteboo...

      • what 2 days ago

        Kind of skeptical since you also provide a “confidence” value, which has to be entirely made up.

        Do you have an example that isn’t a sample drivers license? Something that is unlikely to have appeared in an LLM’s training data?

      • vunderba 2 days ago

        Wait, what? That's pretty neat. I'm on my phone right now, so I can't really view the notebook very easily. How does this work? Are you using some kind of continual partitioning of the image, refeeding it back into the LLM to pseudo-zoom in/out on the parts that contain non-cut-off text, until you can resolve that into rough coordinates?

    • chpatrick 2 days ago

      Qwen 2.5 VL was specifically trained to produce bounding boxes, I believe.

fl0under 2 days ago

Looks cool!

You may also be interested in olmOCR, the OCR tool Allen AI just released [1][2]. They say it will "convert a million PDF pages for only $190 USD".

[1] https://github.com/allenai/olmocr [2] https://arxiv.org/abs/2502.18443

  • TZubiri 2 days ago

    The issue with that promise is that anyone can convert PDFs; the question is whether the conversions are correct, or whether you get

    Income Expenses 200 100

    On one document, and

    Income Expenses 20 0100

    On others.

    There's no shortage of products that tried to solve this problem from scratch (or by piggybacking on other projects) and called it a day without worrying about the huge problem that is quality and parseability.

    The most robust players just give you the coordinates of a glyph and you are on your own: Textract, PDFBox.

Inviz 2 days ago

The service doesn't inspire confidence. The OpenAI-compatible API doesn't work (it expects `content.str` in the message to be a string?!). Getting 500s on the non-OpenAI-compatible endpoint; seems like timeouts(?). When it did work, it missed a lot, and hallucinated a lot too, on custom documents/schemas.

rasz 2 days ago

I'd rather see machine learning used to help OCR by

- recognizing/recreating exact font used

- helping align/rotate source

Not to hallucinate gibberish when the source lacks enough data.

intalentive 2 days ago

What's the value-add here? The schemas?

  • fzysingularity 2 days ago

    We've seen so many different schemas and ways of prompting the VLMs. We're just standardizing it here, and making it dead-simple to try it out across model providers.

  • vlmrunadmin007 2 days ago

    Basically, there is no standard model/schema combination. If you go ahead and prompt an open-source model with a schema, it doesn't produce results in the expected format. The main contribution is making these models conform to your specific needs in a structured format.

    • idiliv 2 days ago

      Wait, but we're doing that already, and it works well (Qwen 2.5 VL)? If need be, you can always resort to structured generation to enforce schema conformity?

cyp0633 2 days ago

Existing solutions like Tesseract can already embed text into the image, but I'm wondering if there's a way to combine an LLM with Tesseract, so that the LLM can help correct results and find unidentified text, and the corrected text can still be embedded back into the image.

syntaxing 2 days ago

Maybe I'm being greedy, but is it possible to have a VLM detect when a portion is an image? I want to convert some handwritten notes into Markdown, but some portions are diagrams. I want the VLM to extract the diagrams to embed in the Markdown output.

  • vlmrunadmin007 2 days ago

    We have successfully tested the model with vLLM and plan to release it across multiple inference-server frameworks, including vLLM and Ollama.

rasguanabana 2 days ago

Wouldn't a VLM be susceptible to prompt injection?

TZubiri 2 days ago

Wow thanks!

There's a client who had a startup idea that involved analyzing PDFs; I used Textract, but it was too cumbersome and unreliable.

Maybe I can reach out to see if he wants to give it another go with this!

  • fzysingularity 2 days ago

    Let us know, I think >70% of OCR tasks today can be done with VLMs with a little bit of guidance ;). Ping us at contact "at" vlm.run

egorfine 2 days ago

I had a need to scan serial numbers from Apple's product boxes out of pictures taken by a clueless person on their phone. All OCR tools failed.

The vision model did the trick so well it's not even funny to discuss anything further.

"This is a picture of an Apple product box. Find and return only the serial number of the product as found on a label. Return 'none' if no serial number can be found."

  • ptx 2 days ago

    Did you check if all the numbers were correct?

    • egorfine 2 days ago

      Of course. There was a little piece of code to query Apple for S/N data and it validated whether it was correct.

htrp 2 days ago

VLMs can't replace OCR one-to-one. Most hosted multimodal models seem to have a classical OCR (Tesseract-based) step in their inference loop.

LeoPanthera 2 days ago

What's the characters-per-Wh of an LLM compared to traditional OCR?

  • fzysingularity 2 days ago

    That's a tough one to answer right now, but to be perfectly honest, we're off by 2-3 orders of magnitude in terms of chars/W.

    That said, VLMs are extremely powerful visual learners with LLM-like reasoning capabilities, making them more versatile than OCR for practically all imaging domains.

    In a matter of a few years, I think we'll essentially see models that are more cost-performant via distillation, quantization and the multitude of tricks you can do to reduce the inference overhead.

  • mlyle 2 days ago

    A lot worse. But higher-quality OCR will reduce the amount of human post-processing needed and, in turn, will allow us to reduce the number of humans. Since humans are relatively expensive in energy use, this can be expected to save a lot of energy.

    • rafram 2 days ago

      > Since humans are relatively expensive in energy use

      Are they? I'm seeing figures around 80 watts at rest, and 150 when exercising. The brain itself only uses about 20 watts [1]. That's 1/35 of a single H100's power consumption (700 watts - which doesn't even take into account the energy required to cool the data center, the humans who build and maintain it, ...).

      [1]: https://www.humanbrainproject.eu/en/follow-hbp/news/2023/09/...

      • mlyle 2 days ago

        The PUE of humans for that 80 watts is terrible, though. Ridiculous multiples of additional energy needed to convert solar power to a form of a energy that they can use, and even the manufacturing lifecycle and transport of humans to the datacenter is energy inefficient.

  • ambicapter 2 days ago

    People really only started talking about the cost of running things when LLMs came out. Most everything before that was too cheap to be a serious consideration.

submeta 2 days ago

Can I use this to convert flowcharts to yaml representations?

  • EarlyOom 2 days ago

    We convert to a JSON schema, but it would be trivial to convert this to YAML. There are some minor differences in, e.g., the tokens required to output JSON vs. YAML, which is why we've opted for our strategy.

gunian 2 days ago

replaced it with real humans -> nano tech in their brain -> transmit to server getting almost 99% accuracy

skbjml 2 days ago

This is awesome!

tgtweak 2 days ago

Not really interested until this can run locally without API keys :\

mmusson 2 days ago

Lol. The resume includes an "expert in Mia Khalifa" Easter egg.