submeta a minute ago

Is this able to convert PDF flowcharts into YAML or JSON representations of them? I have been experimenting with Claude 3.5. It has been very good at reading / understanding / converting flow charts into such representations.

So I am wondering if this is more capable. Will try definitely, but maybe someone can chime in.

owenpalmer 3 hours ago

This is incredibly exciting. I've been pondering/experimenting on a hobby project that makes reading papers and textbooks easier and more effective. Unfortunately the OCR and figure extraction technology just wasn't there yet. This is a game changer.

Specifically, this allows you to associate figure references with the actual figure, which would allow me to build a UI that solves the annoying problem of looking for a referenced figure on another page, which breaks up the flow of reading.

It also allows a clean conversion to HTML, so you can add cool functionality like clicking on unfamiliar words for definitions, or inserting LLM generated checkpoint questions to verify understanding. I would like to see if I can automatically integrate Andy Matuschak's Orbit[0] SRS into any PDF.

Lots of potential here.

[0] https://docs.withorbit.com/

  • generalizations 2 hours ago

    Wait does this deal with images?

    • ezfe 2 hours ago

      The output includes images from the input. You can see that on one of the examples where a logo is cropped out of the source and included in the result.

kbyatnal 2 hours ago

We're approaching the point where OCR becomes "solved" — very exciting! Any legacy vendors providing pure OCR are going to get steamrolled by these VLMs.

However IMO, there's still a large gap for businesses in going from raw OCR outputs —> document processing deployed in prod for mission-critical use cases. LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise.

You still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it's going to take time and effort. But the future is on the horizon!

Disclaimer: I started a LLM doc processing company to help companies solve problems in this space (https://extend.app/)

  • dml2135 an hour ago

    One problem I’ve encountered at my small startup in evaluating OCR technologies is precisely convincing stakeholders that the “human-in-the-loop” part is both unavoidable, and ultimately beneficial.

    PMs want to hear that an OCR solution will be fully automated out-of-the-box. My gut says that anything offering that is snake-oil, and I try to convey that the OCR solution they want is possible, but if you are unwilling to pay the tuning cost, it’s going to flop out of the gate. At that point they lose interest and move on to other priorities.

    • kbyatnal 43 minutes ago

      Yup definitely, and this is exactly why I built my startup. I've heard this a bunch across the startups & large enterprises that we work with. 100% automation is an impossible target, because even humans are not 100% perfect. So how can we expect LLMs to be?

      But that doesn't mean you have to abandon the effort. You can still definitely achieve production-grade accuracy! It just requires having the right tooling in place, which reduces the upfront tuning cost. We typically see folks get there on the order of days or 1-2 weeks (it doesn't necessarily need to take months).

  • techwizrd 33 minutes ago

    The challenge I have is how to get bounding boxes for the OCR, for things like redaction/de-identification.

    • kbyatnal 2 minutes ago

      Yeah, that's a fun challenge — what we've seen work well is a system that forces the LLM to generate citations for all extracted data, maps those back to the original OCR content, and then generates bounding boxes that way. Tons of edge cases for sure, which we've built a suite of heuristics for over time, but overall it works really well.
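
      A minimal sketch of that mapping idea (everything here is hypothetical and simplified; a real pipeline needs fuzzy matching and plenty of edge-case handling):

      ```python
      # OCR output: word-level tokens with boxes, e.g. (x0, y0, x1, y1) in page coordinates.
      ocr_tokens = [
          {"text": "John", "box": (100, 200, 150, 220)},
          {"text": "Doe", "box": (155, 200, 195, 220)},
          {"text": "DOB:", "box": (100, 230, 140, 250)},
          {"text": "1980-01-01", "box": (145, 230, 240, 250)},
      ]

      def union(boxes):
          """Smallest rectangle covering all boxes."""
          xs0, ys0, xs1, ys1 = zip(*boxes)
          return (min(xs0), min(ys0), max(xs1), max(ys1))

      def locate(citation: str):
          """Find the token span matching the LLM's citation and return its bounding box."""
          words = citation.split()
          texts = [t["text"] for t in ocr_tokens]
          for i in range(len(texts) - len(words) + 1):
              if texts[i:i + len(words)] == words:
                  return union([t["box"] for t in ocr_tokens[i:i + len(words)]])
          return None  # in practice, fall back to fuzzy matching

      print(locate("John Doe"))  # -> (100, 200, 195, 220), usable for redaction boxes
      ```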

  • risyachka an hour ago

    >> Any legacy vendors providing pure OCR are going to get steamrolled by these VLMs.

    -OR- they can just use these APIs, and considering that they have a client base - which would prefer not to rewrite integrations to get the same result - they can get rid of most of their code base, replace it with an LLM API, increase margins by 90%, and enjoy the good life.

Asraelite 2 hours ago

I never thought I'd see the day where technology finally advanced far enough that we can edit a PDF.

  • randomNumber7 2 hours ago

    I never thought driving a car is harder than editing a pdf.

    • pzo 2 hours ago

      It's not about which is harder but about what error you can tolerate. Here, 99% accuracy is enough for many applications. If you had a 99% per-trip accuracy of not crashing during self-driving, you would very likely be dead within a year.

      For cars we need accuracy of at least 99.99%, and that's very hard.

      • rtsil an hour ago

        I doubt most people have 99% accuracy. The threshold of tolerance for error is just much lower for any self-driving system (and with good reason, because we're not familiar with them yet).

        • KeplerBoy 9 minutes ago

          How do you define 99% accuracy?

          I guess something like success rate per trip (or per mile) would be a more reasonable metric. Most people have a success rate far higher than 99% for average trips.

          Most people who commute daily are probably doing something like 1,000 car rides a year and have minor accidents every few years. A 99% success rate would mean monthly accidents.

  • toephu2 17 minutes ago

    I've been able to edit PDFs (95%+ of them) accurately for the past 10 years...

  • Apofis 2 hours ago

    Foxit PDF exists...

mvac 2 hours ago

Great progress, but unfortunately, for our use case (converting medical textbooks from PDF to MD), the results are not as good as those by MinerU/PDF-Extract-Kit [1].

Also, the Colab link in the article is broken; found a functional one [2] in the docs.

[1] https://github.com/opendatalab/MinerU [2] https://colab.research.google.com/github/mistralai/cookbook/...

  • owenpalmer 32 minutes ago

    I've been searching relentlessly for something like this! I wonder why it's been so hard to find... is it the Chinese?

    In any case, thanks for sharing.

serjester 2 hours ago

This is cool! That said, for anyone looking to use this in RAG, the downside of specialized models instead of general VLMs is that you can't easily tune it to your specific use case. For example, we use Gemini to add very specific alt text to images in the extracted Markdown. It's also 2-3x the cost of Gemini Flash - hopefully the increased performance is significant.

Regardless excited to see more and more competition in the space.

Wrote an article on it: https://www.sergey.fyi/articles/gemini-flash-2-tips

  • hyuuu an hour ago

    Gemini Flash is notorious for hallucinating OCR output, so be careful with it. For straightforward, semi-structured, low-page-count documents (under 5 pages) it should perform well, but the more the context window is stretched, the more unreliable the output gets.

vessenes 3 hours ago

Dang. Super fast and significantly more accurate than google, Claude and others.

Pricing: $1/1000 pages, or $1 per 2k pages if “batched”. I’m not sure what batching means in this case: multiple PDFs? Why not split them to halve the cost?

Anyway this looks great at pdf to markdown.

  • sophiebits 3 hours ago

    Batched often means a higher latency option (minutes/hours instead of seconds), which providers can schedule more efficiently on their GPUs.

  • abiraja 3 hours ago

    Batching likely means the response is not real-time. You set up a batch job and they send you the results later.

    • ozim 2 hours ago

      If only the business people I work with would understand that even a 100GB transfer over the network is not going to return results immediately ;)

    • vessenes 3 hours ago

      That makes sense. Idle time is nearly free after all.

  • kapitalx 2 hours ago

    From my testing so far, it seems it's super fast and responded synchronously. But it decided that the entire page is an image and returned `![img-0.jpeg](img-0.jpeg)` with coordinates in the metadata for the image, which is the entire page.

    Our tool, doctly.ai, is much slower and async, but much more accurate, and gets you the content itself as markdown.

    • ralusek 2 hours ago

      I thought we stopped -ly company names ~8 years ago?

      • kapitalx 2 hours ago

        Haha for sure. Naming isn't just the hardest problem in computer science, it's always hard. But at some point you just have to pick something and move forward.

      • yieldcrv 2 hours ago

        if you talk to people gen-x and older, you still need .com domains

        for all those people that aren't just clicking on a link on their social media feed, chat group, or targeted ad

  • odiroot 2 hours ago

    May I ask as a layperson, how would you go about using this to OCR multiple hundreds of pages? I tried the chat but it pretty much stops after the 2nd page.

    • beklein an hour ago

      You can check the example code in the Mistral documentation; you would _only_ have to change the value of the variable `document_url` to the URL of your uploaded PDF... and you need to change `MISTRAL_API_KEY` to the value of your specific key, which you can get from the La Plateforme webpage.

      https://docs.mistral.ai/capabilities/document/#ocr-with-pdf
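
      For reference, the call in those docs looks roughly like this (a sketch based on the linked documentation; double-check the current SDK for exact names):

      ```python
      import os
      from mistralai import Mistral

      client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

      # Point document_url at your uploaded / publicly accessible PDF.
      ocr_response = client.ocr.process(
          model="mistral-ocr-latest",
          document={
              "type": "document_url",
              "document_url": "https://example.com/your-document.pdf",
          },
      )

      # One entry per page, each with markdown plus any extracted images.
      for page in ocr_response.pages:
          print(page.markdown)
      ```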

    • sneak 2 hours ago

      Submit the pages via the API.

      • odiroot 16 minutes ago

        This worked indeed. Although I had to cut my document into smaller chunks. 900 pages at once ended with a timeout.

  • Tostino 3 hours ago

    Usually (With OpenAI, I haven't checked Mistral yet) it means an async api rather than a sync api.

    e.g. you submit multiple requests (pdfs) in one call, and get back an id for the batch. You then can check on the status of that batch and get the results for everything when done.

    It lets them use their available hardware to its full capacity.
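
    A rough sketch of the submit/poll pattern (the `client` and its methods here are hypothetical, just to illustrate the shape of a batch API):

    ```python
    import time

    # Submit many documents in one call and get back a batch id.
    batch = client.batches.create(inputs=["a.pdf", "b.pdf", "c.pdf"])

    # Poll until the provider has worked through the queue (minutes to hours).
    while True:
        status = client.batches.get(batch.id).status
        if status in ("completed", "failed"):
            break
        time.sleep(60)

    results = client.batches.results(batch.id) if status == "completed" else None
    ```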

  • jacksnipe 3 hours ago

    I would assume this is 1 request containing 2k pages vs N requests whose total pages add up to 1000.

evmar 2 hours ago

I noticed on the Arabic example they lost a space after the first letter on the third-to-last line; can any native speakers confirm? (I only know enough Arabic to ask dumb questions like this, curious to learn more.)

Edit: it looks like they also added a vowel mark not present in the input on the line immediately after.

Edit2: here's a picture of what I'm talking about, the before/after: https://ibb.co/v6xcPMHv

  • resiros 2 hours ago

    Arabic speaker here. No, it's perfect.

    • evmar an hour ago

      I am pretty sure it added a kasrah not present in the input on the 2nd to last line. (Not saying it's not super impressive, and also that almost certainly is the right word, but I think that still means not quite "perfect"?)

      • gl-prod an hour ago

        Yes, it looks like it did add a kasrah to the word ظهري

    • gl-prod an hour ago

      He means the space between the wāw (و) and the word

      • evmar an hour ago

        I added a pic to the original comment, sorry for not being clear!

noloz 3 minutes ago

Are there any open source projects with the same goal?

opwieurposiu 3 hours ago

Related, does anyone know of an app that can read gauges from an image and log the number to influx? I have a solar power meter in my crawlspace, it is inconvenient to go down there. I want to point an old phone at it and log it so I can check it easily. The gauge is digital and looks like this:

https://www.pvh2o.com/solarShed/firstPower.jpg

  • dehrmann 3 hours ago

    You'll be happier finding a replacement meter that has an interface to monitor it directly or a second meter. An old phone and OCR will be very brittle.

    • haswell 3 hours ago

      Not OP, but it sounds like the kind of project I’d undertake.

      Happiness for me is about exploring the problem within constraints and the satisfaction of building the solution. Brittleness is often of less concern than the fun factor.

      And some kinds of brittleness can be managed/solved, which adds to the fun.

      • arcfour 2 hours ago

        I would posit that learning how the device works, and how to integrate with a newer digital monitoring device would be just as interesting and less brittle.

        • haswell 2 hours ago

          Possibly! But I’ve recently wanted to dabble with computer vision, so I’d be looking at a project like this as a way to scratch a specific itch. Again, not OP so I don’t know what their priorities are, but just offering one angle for why one might choose a less “optimal” approach.

  • ramses0 2 hours ago

    https://www.home-assistant.io/integrations/seven_segments/

    https://www.unix-ag.uni-kl.de/~auerswal/ssocr/

    https://github.com/tesseract-ocr/tesseract

    https://community.home-assistant.io/t/ocr-on-camera-image-fo...

    https://www.google.com/search?q=home+assistant+ocr+integrati...

    https://www.google.com/search?q=esphome+ocr+sensor

    https://hackaday.com/2021/02/07/an-esp-will-read-your-meter-...

    ...start digging around and you'll likely find something. HA has integrations which can support writing to InfluxDB (local for sure, and you can probably configure it for a remote influxdb).

    You're looking at 1xRaspberry PI, 1xUSB Webcam, 1x"Power Management / humidity management / waterproof electrical box" to stuff it into, and then either YOLO and DIY to shoot over to your influxdb, or set up a Home Assistant and "attach" your frankenbox as some sort of "sensor" or "integration" which spits out metrics and yadayada...
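
    A minimal sketch of the DIY route (assumes OpenCV, tesseract via pytesseract, and the influxdb-client package; a real setup needs cropping/deskewing tuned to your particular gauge):

    ```python
    import cv2
    import pytesseract
    from influxdb_client import InfluxDBClient, Point
    from influxdb_client.client.write_api import SYNCHRONOUS

    cam = cv2.VideoCapture(0)  # USB webcam pointed at the meter
    ok, frame = cam.read()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Restrict tesseract to digits/decimal point; --psm 7 treats the crop as one text line.
    text = pytesseract.image_to_string(
        bw, config="--psm 7 -c tessedit_char_whitelist=0123456789."
    ).strip()
    watts = float(text)  # add range/sanity checks against the last known value

    with InfluxDBClient(url="http://localhost:8086", token="...", org="home") as db:
        db.write_api(write_options=SYNCHRONOUS).write(
            bucket="solar", record=Point("power_meter").field("watts", watts)
        )
    ```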

  • BonoboIO 2 hours ago

    Gemini Free Tier would surely work

  • renewiltord 3 hours ago

    4o transcribes it perfectly. You can usually root an old Android and write this app in ~2h with LLMs if unfamiliar. The hard part will be maintaining camera lens cleanliness and alignment etc.

    The time cost is so low that you should give it a gander. You'll be surprised how fast you can do it. If you just take screenshots every minute it should suffice.

sbarre 3 hours ago

6 years ago I was working with a very large enterprise that was struggling to solve this problem, trying to scan millions of arbitrary forms and documents per month to clearly understand key points like account numbers, names and addresses, policy numbers, phone numbers, embedded images or scribbled notes, and also draw relationships between these values on a given form, or even across forms.

I wasn't there to solve that specific problem, but it was connected to what we were doing, so it was fascinating to hear that team talk through all the things they'd tried, from brute-force training on templates (didn't scale, as they had too many kinds of forms) to every vendor solution under the sun (none worked quite as advertised on their data).

I have to imagine this is a problem shared by so many companies.

ChemSpider 3 hours ago

"World's best OCR model" - that is quite a statement. Are there any well-known benchmarks for OCR software?

  • WhitneyLand 3 hours ago

    It’s interesting that none of the existing models can decode a Scrabble board screenshot and give an accurate grid of characters.

    I realize it’s not a common business case; I came across it while testing how well LLMs can solve simple games. On a side note, if you bypass OCR and give models a text layout of a board, standard LLMs cannot solve Scrabble boards but the thinking models usually can.

protonbob an hour ago

Wow this basically "solves" DRM for books as well as opening up the door for digitizing old texts more accurately.

sixhobbits an hour ago

Nice demos but I wonder how well it does on longer files. I've been experimenting with passing some fairly neat PDFs to various LLMs for data extraction. They're created from Excel exports and some of the data is cut off or badly laid out, but it's all digitally extractable.

The challenge isn't so much the OCR part, but just the length. After one page the LLMs get "lazy" and just skip bits or stop entirely.

And page by page isn't trivial as header rows are repeated or missing etc.

So far my experience has definitely been that the last 2% of the content still takes the most time to accurately extract for large messy documents, and LLMs still don't seem to have a one-shot solve for that. Maybe this is it?

  • hack_ml 35 minutes ago

    You will have to send one page at a time; most of this work has to be done via RAG. Adding a large context (like a whole PDF) still does not work that well in my experience.

qwertox an hour ago

We developers seem to really dislike PDFs, to the degree that we'll build LLMs and have them translate PDFs into Markdown.

Jokes aside, PDFs really serve a good purpose, but getting data out of them is usually really hard. They should have something like an embedded Markdown version with a JSON structure describing the layout, so that machines can easily digest the data they contain.

Oras an hour ago

I feel this is created for RAG. I tried a document [0] that I tested with OCR; it got all the table values correctly, but the page's footer was missing.

Headers and footers are a real pain with RAG applications, as they are not required, most OCR and PDF parsers will return them, and there is extra work to do to remove them.

[0] https://github.com/orasik/parsevision/blob/main/example/Mult...

cxie 3 hours ago

The new Mistral OCR release looks impressive - 94.89% overall accuracy and significantly better multilingual support than competitors. As someone who's built document processing systems at scale, I'm curious about the real-world implications.

Has anyone tried this on specialized domains like medical or legal documents? The benchmarks are promising, but OCR has always faced challenges with domain-specific terminology and formatting.

Also interesting to see the pricing model ($1/1000 pages) in a landscape where many expected this functionality to eventually be bundled into base LLM offerings. This feels like a trend where previously encapsulated capabilities are being unbundled into specialized APIs with separate pricing.

I wonder if this is the beginning of the componentization of AI infrastructure - breaking monolithic models into specialized services that each do one thing extremely well.

  • themanmaran 2 hours ago

    Excited to test this out on our side as well. We recently built an OCR benchmarking framework specifically for VLMs [1][2], so we'll do a test run today.

    From our last benchmark run, some of these numbers from Mistral seem a little bit optimistic. Side by side of a few models:

    | model  | omni | mistral |
    |--------|------|---------|
    | gemini | 86%  | 89%     |
    | azure  | 85%  | 89%     |
    | gpt-4o | 75%  | 89%     |
    | google | 68%  | 83%     |

    Currently adding the Mistral API and we'll get results out today!

    [1] https://github.com/getomni-ai/benchmark

    [2] https://huggingface.co/datasets/getomni-ai/ocr-benchmark

  • epolanski 3 hours ago

    At my client we want to provide an AI that can retrieve relevant information from documentation (home-building business; documents detail how to install a solar panel or a shower, etc.), and we've set up an entire system with benchmarks, agents, etc., yet the bottleneck is OCR!

    We have millions and millions of pages of documents, and even a 1% OCR error rate compounds with the AI's own error, which compounds with the documentation itself being incorrect at times, which leaves it all nowhere near production ready (and indeed the project has never been released).

    We simply cannot afford to give our customers incorrect information.

    We have set up a back-office app so that when users have questions, they are sent to our workers along with the response given by our AI application, and the person can review it and ideally correct the OCR output.

    Honestly, after a year of working on this, it feels like AI right now can only be useful when supervised all the time (such as when coding). Otherwise I just find LLMs still too unreliable for anything beyond basic, throwaway tasks.

    • PeterStuer 2 hours ago

      As someone who has had a home built, and nearly all my friends and acquaintances report the same thing, having a 1% error on information in this business would mean not a 10x but a 50x improvement over the current practice in the field.

      If nobody supervised the building documents all the time during the process, every house would be a pile of rubbish. And even when you do, stuff still creeps in and has to be redone, often more than once.

  • janalsncm 3 hours ago

    I have done OCR on leases. It’s hard. You have to be accurate and they all have bespoke formatting.

    It would almost be easier to switch everyone to a common format and spell out important entities (names, numbers) multiple times similar to how cheques do.

    The utility of the system really depends on the makeup of that last 5%. If problematic documents are consistently predictable, it’s possible to do a second pass with humans. But if they’re random, then you have to do every doc with humans and it doesn’t save you any time.

  • kbyatnal 2 hours ago

    re: real world implications, LLMs and VLMs aren't magic, and anyone who goes in expecting 100% automation is in for a surprise (especially in domains like medical or legal).

    IMO there's still a large gap for businesses in going from raw OCR outputs —> document processing deployed in prod for mission-critical use cases.

    e.g. you still need to build and label datasets, orchestrate pipelines (classify -> split -> extract), detect uncertainty and correct with human-in-the-loop, fine-tune, and a lot more. You can certainly get close to full automation over time, but it's going to take time and effort.

    But for RAG and other use cases where the error tolerance is higher, I do think these OCR models will get good enough to just solve that part of the problem.

    Disclaimer: I started a LLM doc processing company to help companies solve problems in this space (https://extend.app/)

  • PeterStuer 3 hours ago

    I'd love to try it for my domain (regulation), but $1/1000 pages is significantly more expensive than my current local Docling-based setup, which already does a great job of processing PDFs for my needs.

    • yawnxyz 2 hours ago

      I think for regulated / high-impact fields, $1/1000 is well worth the price; if the accuracy is close to 100%, this is way better than using people, who are still error-prone.

  • kergonath 3 hours ago

    > Has anyone tried this on specialized domains like medical or legal documents?

    I’ll try it on a whole bunch of scientific papers ASAP. Quite excited about this.

  • janis1234 an hour ago

    $1 for 1000 pages seems high to me. Doing a Google search:

    Rent and Reserve NVIDIA A100 GPU 80GB - Pricing Starts from $1.35/hour

    I just don't know if in 1 hour with an A100 I can process more than 1000 pages. I'm guessing yes.

    • blackoil an hour ago

      Is the model open source / open weight? Otherwise the cost is for the model, not the GPU.

  • salynchnew 3 hours ago

    Also interesting to see that parts of the training infrastructure used to create frontier models are themselves being monetized.

  • stavros 3 hours ago

    What do you mean by "free"? Using the OpenAI vision API, for example, for OCR is quite a bit more expensive than $1/1k pages.

  • unboxingelf 3 hours ago

    We’ll just stick an LLM Gateway LLM in front of all the specialized LLMs. MicroLLMs Architecture.

    • cxie 3 hours ago

      I actually think you're onto something there. The "MicroLLMs Architecture" could mirror how microservices revolutionized web architecture.

      Instead of one massive model trying to do everything, you'd have specialized models for OCR, code generation, image understanding, etc. Then a "router LLM" would direct queries to the appropriate specialized model and synthesize responses.

      The efficiency gains could be substantial - why run a 1T parameter model when your query just needs a lightweight OCR specialist? You could dynamically load only what you need.

      The challenge would be in the communication protocol between models and managing the complexity. We'd need something like a "prompt bus" for inter-model communication with standardized inputs/outputs.

      Has anyone here started building infrastructure for this kind of model orchestration yet? This feels like it could be the Kubernetes moment for AI systems.
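
      A toy sketch of what that routing layer could look like (the model names and the `call` helper are hypothetical; real orchestration needs fallbacks, cost tracking, and a shared schema):

      ```python
      SPECIALISTS = {
          "ocr": "document-ocr-model",      # documents -> markdown
          "code": "code-generation-model",
          "chat": "general-purpose-model",  # everything else
      }

      def call(model: str, prompt: str) -> str:
          """Hypothetical wrapper around whichever provider hosts `model`."""
          raise NotImplementedError

      def route(query: str) -> str:
          # Cheap router step: a small model (or plain heuristics) picks a specialist.
          label = call("small-router-model", f"Classify as ocr/code/chat: {query}").strip()
          specialist = SPECIALISTS.get(label, SPECIALISTS["chat"])
          answer = call(specialist, query)
          # Optional synthesis pass so the final response reads like one assistant.
          return call("small-router-model", f"Rewrite for the user: {answer}")
      ```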

      • fnordpiglet 2 hours ago

        I’m doing this personally for my own project - essentially building an agent graph that starts with the image output, orients and cleans it, does a first pass with tesseract's LSTM "best" models to create PDF/hOCR/ALTO, then passes to other LLMs and models based on their strengths to further refine towards Markdown and LaTeX. My goal is less about RAG database population and more about preserving, in a non-manually-typeset form, the structure and data and analysis. There seems to be pretty limited tooling out there, since the goal generally seems to be the obviously commercial one of producing RAG-amenable forms that defer the "heavy" side of chart / graphic / tabular reproduction to a future time.

      • arcfour 3 hours ago

        This is already done with agents. Some agents only have tools and the one model; some agents orchestrate with other LLMs to handle more advanced use cases. It's a pretty obvious solution when you think about how to get good performance out of a model on a complex task when useful context length is limited: just run multiple models, each with its own context, and give them a supervisor model - just like how humans organize themselves in real life.

sunami-ai 3 hours ago

Making Transformers the same cost as CNNs (which are used in character-level OCR, as opposed to image-patch-level) is a good thing. The problem with CNN-based character-level OCR is not the recognition models but the detection models. In a former life, I found a way to increase detection accuracy, and therefore overall OCR accuracy, and used that as an enhancement on top of Amazon and Google OCR. It worked really well. But the transformer approach is more powerful, and if it can be done for $1 per 1000 pages, that is a game changer, IMO, at least for incumbents offering traditional character-level OCR.

  • menaerus 3 hours ago

    It certainly isn't the same cost if expressed as the non-subsidized $$$ one needs for the Transformer compute, aka the infra.

    CNNs trained specifically for OCR can run in real time on compute as small as a mobile device.

SilentM68 3 hours ago

I would like to see how it performs with massively warped and skewed scanned text images: basically a scanned image where the text lines are wavy as opposed to straight and horizontal, where the letters are elongated, and where the line widths differ depending on the position on the scanned image. I once had to deal with such a task that somebody gave me; OCR software, Acrobat, and other tools could not decode the mess, so I had to recreate the 30 pages myself, manually. Not a fun thing to do, but that is a real use case.

  • arcfour 2 hours ago

    Garbage in, garbage out?

    • edude03 2 hours ago

      "Yes" but if a human could do it "AI" should be able to do it too.

s4i an hour ago

I wonder how good it would be to convert sheet music to MusicXML. All the current tools more or less suck with this task, or maybe I’m just ignorant and don’t know what lego bricks to put together.

z2 3 hours ago

Is there a reliable handwriting OCR benchmark out there (updated, not a blog post)? Despite the gains claimed for printed text, I found (anecdotally) that trying to use Mistral OCR on my messy cursive handwriting to be much less accurate than GPT-4o, in the ballpark of 30% wrong vs closer to 5% wrong for GPT-4o.

Edit: answered in another post: https://huggingface.co/spaces/echo840/ocrbench-leaderboard

TriangleEdge 3 hours ago

One of my hobby projects while in university was to do OCR on book scans. Doing character recognition was solved, but finding the relationship between characters was very difficult. I tried "primitive" neural nets, but edge cases would often break what I built. Super cool to see such an order-of-magnitude improvement here.

Does it do handwritten notes and annotations? What about meta information like highlighting? I am also curious whether LLMs will get better because of more access to information, if it can be effectively extracted from PDFs.

  • jcuenod 3 hours ago

    * Character recognition on monolingual text in a narrow domain is solved

climb_stealth an hour ago

Does this support Japanese? They list a table of language comparisons against other approaches, but I can't tell if it is exhaustive.

I'm hoping that something like this will be able to handle 3000-page Japanese car workshop manuals. Because traditional OCR really struggles with it. It has tables, graphics, text in graphics, the whole shebang.

hubraumhugo 2 hours ago

It will be interesting to see how all the companies in the document processing space adapt as OCR becomes a commodity.

The best products will be defined by everything "non-AI", like UX, performance and reliability at scale, and human-in-the-loop feedback for domain experts.

  • trollied 2 hours ago

    They will offer integrations into enterprise systems, just like they do today.

    Lots of big companies don't like change. The existing document processing companies will just silently start using this sort of service to up their game, and keep their existing relationships.

  • hyuuu an hour ago

    I 100% agree with this. I think you can even extend it to AI in general: in the end, IMO, as the LLM is more commoditized, the surface through which the value is delivered will matter more.

bsnnkv 2 hours ago

Someone working there has good taste to include a Nizar Qabbani poem.

notepad0x90 3 hours ago

I was just watching a science-related video containing math equations. I wondered how soon I will be able to ask the video player "What am I looking at here, describe the equations" and have it OCR the frames, analyze them, and explain them to me.

It's only a matter of time before "browsing" means navigating HTTP sites via LLM prompts, although I think it is critical that LLM input should NOT be restricted to verbal cues. Not everyone is an extrovert who longs to hear the sound of their own voice. A lot of human communication is non-verbal.

Once we get over the privacy implications (and I do believe this can only be done by worldwide legislative efforts), I can imagine looking at a "website" or video, and my expressions, mannerisms and gestures will be considered prompts.

At least that is what I imagine the tech would evolve into in 5+ years.

  • abrichr 3 hours ago

    > I wondered how soon will I be able to ask the video player "What am I looking at here, describe the equations" and it will OCR the frames, analyze them and explain them to me.

    Seems like https://aiscreenshot.app might fit the bill.

  • devmor 3 hours ago

    Good lord, I dearly hope not. That sounds like a coddled hellscape world, something you'd see made fun of in Disney's Wall-E.

    • notepad0x90 3 hours ago

      hence my comment about privacy and need for legislation :)

      It isn't the tech that's the problem but the people that will abuse it.

      • devmor 3 hours ago

        While those are concerns, my point was that having everything on the internet navigated to, digested and explained to me sounds unpleasant and overall a drain on my ability to think and reason for myself.

        It is specifically how you describe using the tech that provokes a feeling of revulsion to me.

        • notepad0x90 an hour ago

          Then I think you misunderstand. The ML system would know when you want things digested for you and when you don't. Right now companies are assuming this and forcing LLM interaction, but when properly done, the system would know, based on your behavior or explicit prompts, what you want, and provide the service. If you're staring at a paragraph intently and confused, it might start highlighting common phrases or parts of the text/picture that might be hard to grasp, and based on your reaction to that, it might start describing things via audio, tooltips, a side pane, etc. In other words, if you don't like how and when you're interacting with the LLM ecosystem, then that is an immature and failing ecosystem; in my vision this would be a largely solved problem, like how we interact with keyboards, mice, and touchscreens today.

  • groby_b 3 hours ago

    Now? OK, you need to screencap and upload to LLM, but that's well established tech by now. (Where by "well established", I mean at least 9 months old ;)

    Same goes for "navigating HTTP sites via LLM prompts". Most LLMs have web search integration, and the "Deep Research" variants do more complex navigation.

    Video chat is there partially, as well. It doesn't really pay much attention to gestures & expressions, but I'd put the "earliest possible" threshold for that a good chunk closer than 5 years.

    • notepad0x90 3 hours ago

      Yeah, all these things are possible today, but getting them well polished and integrated is another story. Imagine all this being supported by "HTML6" lol. When apple gets around to making this part of safari, then we know it's ready.

      • groby_b 36 minutes ago

        That's a great upper-bound estimator ;)

        But kidding aside - I'm not sure people want this being supported by web standards. We could be a huge step closer to that future had we decided to actually take RDF/Dublin Core/Microdata seriously. (LLMs perform a lot better with well-annotated data)

        The unanimous verdict across web publishers was "looks like a lot of work, let's not". That is, ultimately, why we need to jump through all the OCR hoops. Not only did the world not annotate the data, it then proceeded to remove as many traces of machine readability as possible.

        So, the likely gating factor is probably not Apple & Safari & "HTML6" (shudder!)

        If I venture my best guess at what's preventing polished integration: it's really hard to do via foundation models only, and the number of people who want deep, well-informed conversations via a polished app badly enough to pay for it is low enough that it's not the hot VC space. (Yet?)

        Crystal ball: Some OSS project will probably get within spitting distance of something really useful, but also probably flub the UX. Somebody else will take up these ideas while it's hot and polish it in a startup. So, 18-36 months for an integrated experience from here?

pqdbr 3 hours ago

I tried with both PDFs and PNGs in Le Chat and the results were the worst I've ever seen when compared to any other model (Claude, ChatGPT, Gemini).

So bad that I think I need to enable the OCR function somehow, but couldn't find it.

  • troyvit 10 minutes ago

    It worked perfectly for me with a simple 2-page PDF that contained no graphics or formatting beyond headers and list items. Since it was so small, I had the time to proof-read it, and there were no errors. It added some formatting, such as bolding headers in list items and putting backticks around file and function names. I won't complain.

  • computergert 37 minutes ago

    I'm experiencing the same. Maybe the sentence "Mistral OCR capabilities are free to try on le Chat." was a hallucination.

kapitalx 3 hours ago

Co-founder of doctly.ai here (OCR tool)

I love mistral and what they do. I got really excited about this, but a little disappointed after my first few tests.

I tried a complex table that we use as a first test of any new model, and Mistral OCR decided the entire table should just be extracted as an 'image' and returned this markdown:

``` ![img-0.jpeg](img-0.jpeg) ```

I'll keep testing, but so far, very disappointing :(

The document I tried is the entire reason we created Doctly to begin with. We needed an OCR tool for the regulatory documents we use, and nothing could really give us the right data.

Doctly uses a judge: it OCRs a document with multiple LLMs and decides which output to pick. It will keep re-running the page until the judge scores above a certain threshold.

I would have loved to add this into the judge list, but might have to skip it.

  • bambax 2 hours ago

    Where did you test it? At the end of the post they say:

    > Mistral OCR capabilities are free to try on le Chat

    but when asked, Le Chat responds:

    > can you do ocr?

    > I don't have the capability to perform Optical Character Recognition (OCR) directly. However, if you have an image with text that you need to extract, you can describe the text or provide details, and I can help you with any information or analysis related to that text. If you need OCR functionality, you might need to use a specialized tool or service designed for that purpose.

    Edit: Tried anyway by attaching an image; it said it could do OCR and then output... completely random text that had absolutely nothing to do with the text in the image!... Concerning.

    Tried again with a better definition image, output only the first twenty words or so of the page.

    Did you try using the API?

    • kapitalx 2 hours ago

      Yes I used the API. They have examples here:

      https://docs.mistral.ai/capabilities/document/

      I used base64 encoding of the image of the pdf page. The output was an object that has the markdown, and coordinates for the images:

      [OCRPageObject(index=0, markdown='![img-0.jpeg](img-0.jpeg)',
          images=[OCRImageObject(id='img-0.jpeg', top_left_x=140, top_left_y=65,
                                 bottom_right_x=2136, bottom_right_y=1635, image_base64=None)],
          dimensions=OCRPageDimensions(dpi=200, height=1778, width=2300))]
      model='mistral-ocr-2503-completion'
      usage_info=OCRUsageInfo(pages_processed=1, doc_size_bytes=634209)

  • fnordpiglet 2 hours ago

    Interestingly I’m currently going through and scanning the hundreds of journal papers my grandfather authored in medicine and thinking through what to do about graphs. I was expecting to do some form of multiphase agent based generation of LaTeX or SVG rather than a verbal summary of the graphs. At least in his generation of authorship his papers clearly explained the graphs already. I was pretty excited to see your post naturally but when I looked at the examples what I saw was, effectively, a more verbose form of

    ``` ![img-0.jpeg](img-0.jpeg) ```

    I’m assuming this is partially because your use case is targeting RAG under various assumptions, but also partially because multimodal models aren't near what I would need to be successful with?

    • kapitalx 2 hours ago

      We need to update the examples on the front page. Currently for things that are considered charts/graphs/figures we convert to a description. For things like logos or images we do an image tag. You can also choose to exclude them.

      The difference with this is that it took the entire page as an image tag (it's just a table of text in my document), rather than being more selective.

      I do like that they give you coordinates for the images though, we need to do something like that.

      Give the actual tool a try. Would love to get your feedback for that use case. It gives you 100 free credits initially but if you email me (ali@doctly.ai), I can give you an extra 500 (goes for anyone else here also)

  • niwtsol 2 hours ago

    If you have a judge system, and Mistral performs well on other tests, wouldn't you want to include it so that if it scores the highest by your judge's ranking it would select the most accurate result? Or are you saying that Mistral's image markdown would score higher with your judge?

    • kapitalx 2 hours ago

      We'll definitely be doing more tests, but the results I got on the complex tests would result in a lower score and might not be worth the extra cost of the judgement itself.

      In our current setup Gemini wins most often. We enter multiple generations from each model into the 'tournament'; sometimes one generation from Gemini could be at the top while another is at the bottom, for the same tournament.

  • Grosvenor 2 hours ago

    Does doctly do handwritten forms like dates?

    I have a lot of "This document filed and registered in the county of ______ on ______ of _____ 2023" sort of thing.

    • kapitalx 2 hours ago

      We've been getting great results with those as well. But of course there is always some chance of not getting it perfect, especially with different handwriting.

      Give it a try, no credit card needed. If you email me (ali@doctly.ai) I can give you extra free credits for testing.

      • Grosvenor an hour ago

        Just tried it. Got all the dates correct and even extracted signatures really well.

        Now to figure out how many millions of pages I have.

  • infecto 3 hours ago

    Why pay more for doctly than an AWS Textract?

    • kapitalx 2 hours ago

      Great question. The language models are definitely beating the old tools. Take a look at Gemini for example.

      Doctly runs a tournament-style judge. It runs multiple generations across LLMs and picks the best one, outperforming a single generation from a single model.

  • the_mitsuhiko 3 hours ago

    Would love to see the test file.

    • Starlord2048 2 hours ago

      would be glad to see benchmarking results

      • kapitalx 2 hours ago

        This is a good idea. We should publish a benchmark results/comparison.

pilooch 2 hours ago

But what's the need exactly for OCR when you have multimodal LLMs that can read the same info and directly answer any questions about it ?

For a VLM, my understanding is that OCR corresponds to a sub-field of questions, of the type 'read exactly what's written in this document'.

  • daemonologist an hour ago

    It's useful to have the plain text down the line for operations not involving a language model (e.g. search). Also if you have a bunch of prompts you want to run it's potentially cheaper, although perhaps less accurate, to run the OCR once and save yourself some tokens or even use a smaller model for subsequent prompts.

  • ks2048 an hour ago

    Tons of uses: Storage (text instead of images), search (user typing in a text box and you want instant retrieval from a dataset), etc. And costs: run on images once - then the rest of your queries will only need to run on text.

bob1029 3 hours ago

> It takes images and PDFs as input

If you are working with PDF, I would suggest a hybrid process.

It is feasible to extract information with 100% accuracy from PDFs that were generated using the mappable acrofields approach. In many domains, you have a fixed set of forms you need to process and this can be leveraged to build a custom tool for extracting the data.

Only if the PDFs are unknown or were created by way of a cellphone camera, multifunction office device, etc should you need to reach for OCR.

The moment you need to use this kind of technology you are in a completely different regime of what the business will (should) tolerate.

  • themanmaran 3 hours ago

    > Only if the PDFs are unknown or were created by way of a cellphone camera, multifunction office device, etc should you need to reach for OCR.

    It's always safer to OCR on every file. Sometimes you'll have a "clean" pdf that has a screenshot of an Excel table. Or a scanned image that has already been OCR'd by a lower quality tool (like the built in Adobe OCR). And if you rely on this you're going to get pretty unpredictable results.

    It's way easier (and more standardized) to run OCR on every file, rather than trying to guess at the contents based on the metadata.

    • bob1029 2 hours ago

      It's not guessing if the form is known and you can read the information directly.

      This is a common scenario at many banks. You can expect nearly perfect metadata for anything pushed into their document storage system within the last decade.

      • themanmaran an hour ago

        Oh yea if the form is known and standardized everything is a lot easier.

        But we work with banks on our side, and one of the most common scenarios is customers uploading financials/bills/statements from 1000's of different providers. In which case it's impossible to know every format in advance.

janalsncm 3 hours ago

The hard ones are things like contracts, leases, and financial documents, which 1) don't have a common format, 2) are filled with numbers, proper nouns, and addresses that it's really important not to mess up, and 3) cannot be inferred from context.

Typical OCR pipeline would be to pass the doc through a character-level OCR system then correct errors with a statistical model like an LLM. An LLM can help correct “crodit card” to “credit card” but it cannot correct names or numbers. It’s really bad if it replaces a 7 with a 2.
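
A cheap guardrail for exactly that failure mode (a generic sketch, not any particular vendor's method) is to check that the LLM's "corrected" text preserves every digit sequence from the raw OCR output, and flag the page for human review if it doesn't:

```python
import re

DIGITS = re.compile(r"\d+")

def digits_preserved(raw_ocr: str, corrected: str) -> bool:
    """True if the post-correction step kept every number from the raw OCR output."""
    return DIGITS.findall(raw_ocr) == DIGITS.findall(corrected)

raw = "Crodit card ending 4217, balance $1,203.50"
fixed = "Credit card ending 4217, balance $1,203.50"
bad = "Credit card ending 4212, balance $1,203.50"

assert digits_preserved(raw, fixed)      # spelling fixed, numbers intact
assert not digits_preserved(raw, bad)    # a 7 became a 2 -> route to a human
```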

jervant 3 hours ago

I wonder how it compares to USPS workers at deciphering illegible handwriting.

shmoogy an hour ago

What's the general time for something like this to hit openrouter? I really hate having accounts everywhere when I'm trying to test new things.

101008 2 hours ago

Is this free in LeChat? I uploaded a handwritten text and it stopped after the 4th word.

coolspot 3 hours ago

This is $1 per 1000 pages. For comparison, Azure Document Intelligence is $1.5/1000 pages for general OCR and $30/1000 pages for “custom extraction”.

  • 0cf8612b2e1e 2 hours ago

    Given the wide variety of pricing on all of these providers, I keep wondering how the economics work. Do they have fantastic margin on some of these products or is it a matter of subsidizing the costs, hoping to capture the market? Last I heard, OpenAI is still losing money.

sureglymop 2 hours ago

Looks good but in the first hover/slider demo one can see how it could lead to confusion when handling side by side content.

Table 1 is referred to in section `2 Architectural details` but before `2.1 Multimodal Decoder`. In the generated markdown though it is below the latter section, as if it was in/part of that section.

Of course I am nitpicking here but just the first thing I noticed.

  • 0cf8612b2e1e 2 hours ago

    Does anything handle dual columns well? Despite being the academic standard, it seemingly throws off every generic tool.

cavisne an hour ago

It's funny how Gemini consistently beats Google's dedicated document API.

  • jjice 44 minutes ago

    I'm not surprised honestly - it's just the newer, better thing vs. their older offering.

anovick an hour ago

How does one use it to identify bounding rectangles of images/diagrams in the PDF?

lokl 2 hours ago

Tried with a few historical handwritten German documents, accuracy was abysmal.

  • Thaxll 2 hours ago

    HTR (Handwritten Text Recognition) is a completely different space than OCR. What were you expecting exactly?

    • riquito 2 hours ago

      It fits the "use cases" mentioned in the article

      > Preserving historical and cultural heritage: Organizations and nonprofits that are custodians of heritage have been using Mistral OCR to digitize historical documents and artifacts, ensuring their preservation and making them accessible to a broader audience.

      • Thaxll 2 hours ago

        There is a difference between a historical document and "my doctor's prescription".

        Someone coming here and saying it does not work with their old German handwriting doesn't say much.

  • rvnx 2 hours ago

    Probably they are overfitting the benchmarks, since other users also complain of the low accuracy

  • anothermathbozo 2 hours ago

    Optical Character Recognition (OCR) and Handwritten Text Recognition (HTR) are different tasks

roboben 3 hours ago

Le chat doesn’t seem to know about this change despite the blog post stating it. Can anyone explain how to use it in Le Chat?

andoando 3 hours ago

Bit unrelated, but is there anything that can help with really low-resolution text? My neighbor's car was hit in a hit-and-run the other day, for example, and I've been trying every tool I can to make out some of the letters/numbers on the plate.

https://ibb.co/mr8QSYnj

  • zinglersen 3 hours ago

    Finding the right subreddit and asking there is probably a better approach if you want to maximize the chances of getting the plate 'decrypted'.

  • rvnx 3 hours ago

    If it’s a video, sharing a few frames can help as well

  • dewey 3 hours ago

    To even get started on this you'd also need to share some contextual information like continent, country etc. I'd say.

    • andoando 30 minutes ago

      It's in CA; looks like paper plates, which follow a specific format, and the last two characters seem to be '64'. Police should be able to search for a temp tag with a partial match and match the make/model. Was curious to see if any software could help though.

  • flutas 3 hours ago

    Looks like a paper temp tag. Other than that, I'm not sure much can be had from it.

  • busymom0 3 hours ago

    There are photo enhancers online. But your picture is way too pixelated to get any useful info from it.

    • tjoff 3 hours ago

      If you know the font in advance (which you often do in these cases) you can do insane reconstructions. Also keep in mind that it doesn't have to be a perfect match, with the help of the color and other facts (such as likely location) about the car you can narrow it down significantly.

    • zellyn 3 hours ago

      Maybe if you had multiple frames, and used something very clever?

atemerev 16 minutes ago

So, the only thing that stopped AI from learning from all our science and taking over the world was the difficulty of converting PDFs of academic papers to more computer readable formats.

Not anymore.

jcuenod 3 hours ago

Just tested with a multilingual (bidi) English/Hebrew document.

The Hebrew output had no correspondence to the text whatsoever (in context, there was an English translation, and the Hebrew produced was a back-translation of that).

Their benchmark results are impressive, don't get me wrong. But I'm a little disappointed. I often read multilingual document scans in the humanities. Multilingual (and esp. bidi) OCR is challenging, and I'm always looking for a better solution for a side-project I'm working on (fixpdfs.com).

Also, I thought OCR implied that you could get bounding boxes for text (and reconstruct a text layer on a scan, for example). Am I wrong, or is this term just overloaded, now?

  • nicodjimenez 3 hours ago

    You can get bounding boxes from our pdf api at Mathpix.com

    Disclaimer, I’m the founder

    • kergonath 2 hours ago

      Mathpix is ace. Those are the best results I've gotten so far for scientific papers and reports. It understands the layout of complex documents very well; it's quite impressive. Equations are perfect, and figure extraction works well.

      There are a few annoying issues, but overall I am very happy with it.

alberth 3 hours ago

Curious to see how this performs against more real-world usage, where someone takes a photo of text (so the text becomes slightly blurred) and runs OCR on it.

I can't exactly tell if the "Mistral 7B" image is an example of this exact scenario.

d_llon an hour ago

It's disappointing to see that the benchmark results are so opaque. I hope we see reproducible results soon, and hopefully from Mistral themselves.

1. We don't know what the evaluation setup is. It's very possible that the ranking would be different with a bit of prompt engineering.

2. We don't know how large each dataset is (or even how the metrics are calculated/aggregated). The metrics are all reported as XY.ZW%, but it's very possible that the .ZW% -- or even Y.ZW% -- is just noise.[1]

3. We don't know how the datasets were mined or filtered. Mistral could have (even accidentally!) filtered out particular data points that their model struggled with. (E.g., imagine a well-meaning engineer testing a document with Mistral OCR first, finding it doesn't work, and deducing that it's probably bad data and removing it.)

[1] https://medium.com/towards-data-science/digit-significance-i...

maCDzP an hour ago

Oh - on premise solution - awesome!

srinathkrishna 3 hours ago

Given the fact that multi-modal LLMs are getting so good at OCR these days, isn't it a shame that we can't do local OCR with high accuracy in the near term?

newfocogi 3 hours ago

They say: "releasing the API mistral-ocr-latest at 1000 pages / $"

I had to reread that a few times. I assume this means 1000pg/$1 but I'm still not sure about it.

  • dgfl 3 hours ago

    Great example of how information is sometimes compartmentalized arbitrarily in the brain: I imagine you have never been confused by sentences such as “I’m running at 10 km/h”.

    • mkl 37 minutes ago

      Dollar signs go before the number, not after it like units. It needs to be 1000 pages/$1 to make sense, whereas 10km and 10h and 10/h all make sense so 10km/h does. I imagine you would be confused by km/h 10 but not $10.

  • svachalek 3 hours ago

    Yeah you can read it as "pages per dollar" or as a unit "pages/$", it all comes out the same meaning.

  • amelius 2 hours ago

    Hmm, can it read small print? ;)

  • bredren 3 hours ago

    Ya, presumably it is missing the number `1.00`.

    • groby_b 3 hours ago

      Not really. When you go 60 mph (or km/h) you don't specify the 1.00 for the hours either. pages/$ is the unit, 1000 is the value.

thegabriele 2 hours ago

I'm using gemini to solve textual CAPTCHA with some good results (better than untrained OCR).

I will give this a shot

beebaween 2 hours ago

Wonder how it does with table data in pdfs / page-long tabular data?

dehrmann 3 hours ago

Is this burying the lede? OCR is a solved problem, but structuring document data from scans isn't.

jbverschoor 3 hours ago

Ohhh. Gonna test it out with some 100+ year old scribbles :)

WhitneyLand 3 hours ago

1. There’s no simple page / sandbox to upload images and try it. Fine, I’ll code it up.

2. “Explore the Mistral AI APIs” (https://docs.mistral.ai) links to all apis except OCR.

3. The docs on the api params refer to document chunking and image chunking but no details on how their chunking works?

So much unnecessary friction smh.

  • cooperaustinj 2 hours ago

    There is an OCR page on the link you provided. It includes a very, very simple curl command (like most of their docs).

    I think the friction here exists outside of Mistral's control.

    • kergonath 2 hours ago

      > There is an OCR page on the link you provided.

      I don’t see it either. There might be some caching issue.

rvz 2 hours ago

> "Fastest in its category"

Not one mention of the company they have partnered with, Cerebras AI, which is the reason they have fast inference [0].

Literally no-one here is talking about them and they are about to IPO.

[0] https://cerebras.ai/blog/mistral-le-chat

th0ma5 2 hours ago

A great question for people wanting to use OCR in business is... Which digits in monetary amounts can you tolerate being incorrect?

  • kevindamm an hour ago

    or when + becomes 4 and isn't caught during review

    I wonder if a superimposed copy of the document on the original (with coloring or other highlighting of the diff) would help to catch some important errors.. the model would have to include layout of the copy in addition to the text/images, which I think is a little beyond SOTA but attainable.

linklater12 3 hours ago

Document processing is where b2b SAAS is at.

OrvalWintermute an hour ago

I'm happy to see this development after being underwhelmed with ChatGPT OCR!

groby_b 3 hours ago

Perusing the web site, it's depressing how far behind Mistral is on the basic "how can I make this a compelling hook for customers" front.

The notebook link? An ACL'd doc

The examples don't even include a small text-to-markdown sample.

The before/after slider is cute, but useless - SxS is a much better way to compare.

Trying it in "Le Chat" requires a login.

It's like an example of "how can we implement maximum loss across our entire funnel". (I have no doubt the underlying tech does well, but... damn, why do you make it so hard to actually see it, Mistral?)

If anybody tried it and has shareable examples - can you post a link? Also, anybody tried it with handwriting yet?

jacooper 3 hours ago

Pretty cool, would love to use this with paperless, but I just can't bring myself to send a photo of all my documents to a third party, especially legal and sensitive documents, which is what I use Paperless for.

Because of that I'm stuck with crappy vision on Ollama (thanks to AMD's crappy ROCm support for vLLM).

thiago_fm 2 hours ago

For general use this will be good.

But I bet that simple ML will lead to better OCR when you are doing anything specialized, such as medical documents, invoices, etc.

deadbabe 3 hours ago

LLM-based OCR is a disaster: great potential for hallucinations and no estimate of confidence. Results might seem promising, but you'll always be wondering.

  • menaerus 2 hours ago

    CNN-based OCR also has "hallucinations", and Transformers aren't that much different in that respect. This is a problem solved with domain-specific post-processing.

  • leumon 2 hours ago

    Well, already in 2013, OCR systems used in Xerox scanners (turned on by default!) randomly altered numbers, so it's not an issue occurring only in LLMs.

bugglebeetle 2 hours ago

Congrats to Mistral for yet again releasing another closed source thing that costs more than running an open source equivalent:

https://github.com/DS4SD/docling

  • anonymousd3vil an hour ago

    Back in my day, Mistral used to torrent models.

  • Squarex 2 hours ago

    I am all for open source, but where do you see benchmarks that conclude that it's just equivalent?

kiratp an hour ago

It's shocking how much our industry fails to see past its own nose.

Not a single example on that page is a Purchase Order, Invoice etc. Not a single example shown is relevant to industry at scale.

  • merb 42 minutes ago

    Mistral is based in Europe, where invoices are more or less sent digitally in like 95% of cases anyway. Some are even structured digital invoices, which will at some point be mandatory in the EU. For orders there are proposals for that, too. And invoice data extraction is basically a different beast.

    • revnode 16 minutes ago

      So an invoice attached to an email as a PDF is sent digitally ... those unfamiliar with PDF will think text and data extraction is trivial then, but this isn't true. You can have a fully digital, non-image PDF that is vector based and has what looks like text, but doesn't have a single piece of extractable text in it. It's all about how the PDF was generated. Tables can be formatted in a million ways, etc.

      Your best bet is to always convert it to an image and OCR it to extract structured data.

    • codetrotter 15 minutes ago

      One use case is digitizing receipts from business travel: expenses that employees paid out of their own pocket and for which they submit pictures to the business for reimbursement.

      Bus rides, meals including dinners and snacks, etc., for which the employee has paper receipts.

    • napolux 20 minutes ago

      Can confirm, in Italy electronic invoicing has been mandatory since 2019.

    • wolfi1 38 minutes ago

      Even in Europe this is still a thing; I know of systems which are still unable to read items having more than one line (costing a sh*tload of money).

  • simpaticoder an hour ago

    Another good example would be contracts of any kind. Imagine photographing a contract (like a car loan) and on the spot getting an AI to read it, understand it, forecast scenarios, highlight red flags, and do some comparison shopping for you.

  • sha16 11 minutes ago

    I wanted to apply OCR to my company's invoicing, since they basically did purchasing for a bunch of other large companies, but the variability in the conversion was not tolerable. Even rounding something differently could catch an accountant's eye, let alone detecting an "8" as a "0", or worse.

  • mentalgear 13 minutes ago

    To be fair: Reading the blog post, the main objective seems to have been to enable information extraction with high confidence for the academic sector (e.g. unlocking all these paper pdfs), and not necessarily to be another receipt scanner.

  • arpinum 36 minutes ago

    Businesses at scale use EDI to handle purchase orders and invoices, no OCR needed.

    • cdolan 14 minutes ago

      That's simply not a factual statement.

      Scaled businesses do use EDI, but they still receive hundreds of thousands of PDF documents a month.

      source: built a saas product that handles pdfs for a specific industry

  • guiomie an hour ago

    Agreed. In general I've had such bad performance with complex table-based invoice parsing that every few months I try the latest models to see if it's better. It does say 96.12 on the top-tier benchmark under the Table category.

  • mtillman an hour ago

    We find CV models to be better (higher midpoint on an ROC curve) for the types of docs you mention.

bondolo an hour ago

Such a shame that PDF doesn’t just, like, include the semantic structure of the document by default. It is brilliant that we standardized on an archival document format that doesn’t include direct access to the document text or structure as a core intrinsic default feature.

I say this with great anger as someone who works in accessibility and has had PDF as a thorn in my side for 30 years.

  • NeutralForest 28 minutes ago

    I agree with this so much. I've sometimes tried to push friends and family to use text formats (or at least send them something like Markdown), which is very easy to render in the browser anyway. But often you have to fall back to PDF, which I dislike very much. There's so much content, like books and papers, that is in PDF as well. Why did we pick a binary blob as the shareable format again?

  • lukasb 18 minutes ago

    Even assuming you could get people to do the work (probably the real issue here) could a single schema syntax capture the semantics of the universe of documents that exist as PDFs? PDFs succeeded because they could reproduce anything.

  • andai 35 minutes ago

    Tables? I regularly run into PDFs where even the body text is mangled!

  • cess11 23 minutes ago

    PDF is pretty strictly modeled on printed documents and their mainstream typography at the time of invention of Postscript and so on.

    Printed documents do not have any structure beyond the paper and placement of ink on them.

ChrisArchitect 4 hours ago

[flagged]

  • vessenes 3 hours ago

    No comments there yet - this at the top of the home page, let’s use this one.

hyuuu an hour ago

It's weird timing because I just launched https://dochq.io - AI document extraction where you can define what you need to get out of your documents in plain English. I legitimately thought that this was going to be such a niche product, but hell, there has been a very rapid rise in AI-based OCR lately; an article/tweet even went viral about 2 weeks ago, I think, about using Gemini to do OCR. Fun times.