dginev 13 hours ago

Hi, an arXiv HTML Papers developer here.

As a very brief update: we have a larger update pending.

You will spot many (many) issues with our current coverage and fidelity of the paper rendering. When they jump out at you, please report them to us. All reports from the last two years have landed on GitHub. We have made a bit of progress since, but there is (a lot of) low-hanging fruit left to pick.

Project issues:

https://github.com/arXiv/html_feedback/issues/

The main bottleneck at the moment is developer time. And the main vehicle for improvements on the LaTeX side of things continues to be LaTeXML. Happy to field any questions.

RandyOrion 9 hours ago

For arXiv papers, I prefer HTML format much more than PDF format.

Compared to PDF, HTML is much more accessible because of browsers. Basically, I can reuse my browser extensions to do anything I like without hassle, like translation, note-taking, sending text to LLMs, and so on.

For now, arXiv offers two HTML services: the default one at https://arxiv.org/html/xxxx.xxxxx and an alternative one at https://ar5iv.labs.arxiv.org/html/xxxx.xxxxx , where 'x' is a placeholder for a digit.

The most glaring problem with the default HTML service is paper coverage. Sometimes it just doesn't work, e.g. https://arxiv.org/html/2505.06708 . The workaround may be to switch to the alternative HTML service, e.g. https://ar5iv.labs.arxiv.org/html/2505.06708 .

Note that the alternative HTML service has coverage problems too. Sometimes both HTML services fail, e.g. https://arxiv.org/abs/2511.22625 .

ComputerGuru 16 hours ago

If the Unicode consortium spent less time and effort on emoji and more on making the most common/important mathematical symbols and notations available/renderable in plain text, maybe we could move past the (LA)TeX/PDF marriage. OpenType and TrueType have supported (edit: for well over a decade, actually) the conditional rendering required to get sequences of Unicode code points to display the way complicated notation needs (theoretically, anyway). And with missing-glyph-only font family substitution available pretty much everywhere, you can seamlessly display symbols not in your primary font from a fallback asset (something like Noto, with every Unicode symbol supported by design, or math-specific fonts like Cambria Math or TeX Gyre, etc). There are no technical restrictions.

I’ve actually dug into this in the past, and it was never lack of technical ability that prevented them from adding even proper superscript/subscript support; rather, it was their opinion that this didn’t belong in the symbolic layer. But since emoji abuse/rely on ZWJ and modifiers left and right to display in one of a myriad of variations, there’s really no good reason not to allow the same here, because 2 and the superscript ² are not semantically the same (so it’s not a design choice).

An interesting (complete) tangent is that Gemini 3 Pro is the only model I’ve tested (I do a lot of math-related stuff with LLMs) that absolutely will not, under any circumstances, respect (system/user) prompt requests to avoid inline math mode (aka LaTeX) in the output, regardless of whether I asked for a blanket ban on TeX/MathJax/etc or insisted that it use extended Unicode code points to substitute all math formula rendering. (I primarily use LLMs via the TUI, where I don’t have MathJax support, and as familiar as I once was with raw TeX mathematical notation and symbols, it’s still quite easy to misread unrendered raw output by missing something if you’re not careful.) I shared my experiment and results here: Gemini 3 Pro would insist on rendering even single-letter constants or variables as $k$ instead of just k (or k in markdown italics, etc.) no matter how hard I asked it not to, which makes me think it may have been overfit on raw LaTeX papers, and is also an interesting argument in favor of the “VL LLMs are the more natural construct” position: https://x.com/NeoSmart/status/1995582721327071367?s=20

  • crazygringo 15 hours ago

    I don't understand. No matter what fancy things you do with superscripts and subscripts, you're not going to be able to do even basic things you need for equations like use a fraction bar, or parentheses that grow in height to match the content inside them.

    At a fundamental level, Unicode is for characters, not layout. Unicode may abuse the ZWJ for emoji, but it still ultimately results in a single emoji character, not a layout of characters. So I don't really understand what you're asking for.

    • lukan 13 hours ago

      Agreed. I think MathML is intended for layout of formulas and is integrated into browsers nowadays, but I never used it, so I don't know whether essentials are missing.

    • bsder 11 hours ago

      > No matter what fancy things you do with superscripts and subscripts, you're not going to be able to do even basic things you need for equations like use a fraction bar, or parentheses that grow in height to match the content inside them.

      Why not? Things like Arabic ligatures already do that, no?

      • austinjp 11 hours ago

        This is interesting to me, but I am very naive about this. Can you explain, or point to where I could learn more?

        • bsder 5 hours ago

          I'd start with HarfBuzz: https://github.com/harfbuzz/harfbuzz

          That's the open source font shaping engine. It does a lot of work to handle font shaping and rendering for languages that can't really be reduced to characters.

  • raincole 10 hours ago

    Math formulas are far, far more complex than Unicode emoji. I don't even know how to start comparing them.

  • SOTGO 14 hours ago

    I'm almost surprised that Gemini 3 uniquely has this problem. I would have expected that responses from any LLM that require complex math notation would almost certainly be LaTeX heavy, given the abundance of LaTeX source material in the training data. I suppose it is a flaw if a model can't avoid LaTeX, but given that it is the standard (and for the foreseeable future too) I don't know what appropriate output would look like. For "pure" mathematics or similar topics I think LaTeX (or system that represents a superset of LaTeX) is the only acceptable option.

  • hannahnowxyz 16 hours ago

    Have you tried a two-pass approach? For example, where prompt #1 is "Which elliptic curves have rational parameterizations?", and then prompt #2 (perhaps to a smaller/faster model like Gemma) is "In the following text, replace all LaTeX-escaped notation with Markdown code blocks and unicode characters. For example, $F_n = F_{n - 1} + F_{n - 2}$ should be replaced with `Fₙ = Fₙ₋₁ + Fₙ₋₂`. <Response from prompt #1>". Although it's not clear how you would want more complex things to be converted.
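    The second pass doesn't even strictly need an LLM for the simple cases. Here is a rough sketch of that subscript substitution in plain Python (the character table is deliberately partial, since Unicode has no subscript form for most letters, which is exactly the limitation discussed above):

```python
import re

# ASCII -> Unicode subscript forms (partial: most letters have no
# subscript code point, which is part of the problem being discussed).
SUBSCRIPTS = str.maketrans("0123456789+-=()n", "₀₁₂₃₄₅₆₇₈₉₊₋₌₍₎ₙ")

def desub(latex: str) -> str:
    """Replace simple $...$ subscripts like F_{n - 1} with Fₙ₋₁."""
    def repl(m):
        return m.group(1) + m.group(2).replace(" ", "").translate(SUBSCRIPTS)
    text = re.sub(r"([A-Za-z])_\{([^{}]+)\}", repl, latex)   # F_{n - 1}
    text = re.sub(r"([A-Za-z])_(\w)", repl, text)            # F_n
    return text.replace("$", "")

print(desub("$F_n = F_{n - 1} + F_{n - 2}$"))  # Fₙ = Fₙ₋₁ + Fₙ₋₂
```

    Anything the table can't express (fractions, radicals, big operators) would still have to fall back to the model, or stay as raw TeX.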

    • toastal 3 hours ago

      reStructuredText supports :math: roles. AsciiDoc has stem blocks. Why do folks keep trying to shoehorn Markdown into everything, creating yet another fork, when there are other lightweight markup languages that support actual features for technical blogs/documentation?

    • baby 15 hours ago

      I've done latex -> mathml -> markdown and it works quite well
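      The first hop of that pipeline can be sketched for a toy TeX subset (real pipelines use tools like LaTeXML or Pandoc; this hard-codes a single \frac pattern purely for illustration):

```python
import re

def tex_frac_to_mathml(tex: str) -> str:
    """Convert \\frac{a}{b} (a tiny TeX subset) into a MathML <mfrac>.
    Wrapping each argument in <mi> is only correct for single identifiers;
    a real converter parses the arguments recursively."""
    m = re.fullmatch(r"\\frac\{([^{}]+)\}\{([^{}]+)\}", tex.strip())
    if not m:
        raise ValueError("only \\frac{..}{..} is handled in this sketch")
    num, den = m.groups()
    return f"<math><mfrac><mi>{num}</mi><mi>{den}</mi></mfrac></math>"

print(tex_frac_to_mathml(r"\frac{a}{b}"))
# <math><mfrac><mi>a</mi><mi>b</mi></mfrac></math>
```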

    • yannis 15 hours ago

      It is actually quicker to ask using LaTeX markup!

ForceBru 20 hours ago

Is this new or somehow updated? HTML versions of papers have been available for several years now.

EDIT: indeed, it was introduced in 2023: https://blog.arxiv.org/2023/12/21/accessibility-update-arxiv...

  • Tagbert 20 hours ago

    From the paper...

    Why "experimental" HTML?

    Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices. In addition to the technical challenges, the conversion must be both rapid and automated in order to maintain arXiv’s core service of free and fast dissemination.

    • ForceBru 20 hours ago

      No, I mean _arXiv_ has had experimental support for generating HTML versions of papers for years now. If you visit arXiv, you'll see a lot of papers have generated HTML alongside the usual PDF, so I'm trying to understand whether the article discusses any new developments. It seems like it's not new at all.

    • fooofw 15 hours ago

      It's kind of fun to compare this formulation with the seemingly contradictory official arXiv argument for submitting the TeX source [1]:

      > 1. TeX has many advantages that make it ideal as a format for the archives: It is plain text, it is compact, it is freely available for all platforms, it produces extremely high-quality output, and it retains contextual information.

      > 2. It is thus more likely to be a good source from which to generate newer formats, e.g., HTML, MathML, various ePub formats, etc. [...]

      Not that I disagree with the effort, and it surely is a unique challenge to convert, at scale, the Turing-complete macro language TeX to something other than PDF. At the same time, the task would be monumentally more difficult if only the generated PDFs were available. So both are right at the same time.

      [1] https://info.arxiv.org/help/faq/whytex.html#contextual

    • daemonologist 16 hours ago

      There are pretty often problems with figure size and with sections being too narrow or wide (for comfortable reading). The PDF versions are more consistently well-laid-out.

DominikPeters 17 hours ago

As an arXiv author who likes using complicated TeX constructions, the introduction of HTML conversion has increased my workload a lot: I now spend time writing fallback macros that render okay after conversion. The conversion is super slow and there is no way to faithfully simulate it locally. Still, I think it's a great thing to do.

notorandit 3 hours ago

The problem is the viewer, not the format. We are talking about accessibility and scientific papers, where fancy animations and transitions are not core features.

LaTeX and TeX are the de facto standard for this context and converting all existing documents is a lot of work and energy to be spent for basically little gain, if any.

ekjhgkejhgk 18 hours ago

I wish epub was more common for papers. I have no idea if there's any real difficulties with that, or just not enough demand.

  • mmooss 17 hours ago

    epub is html, under the hood

    Is there an epub reader that can format text approximately as usably and beautifully as pdf? What I've seen makes it noticeably harder to read longer texts, though I haven't looked around much.

    epub also lacks annotation, or at least annotation that will be readable across platforms and time.

  • hombre_fatal 17 hours ago

    What makes epub a format on top of HTML is just that someone QA'ed it and wrote the HTML/CSS with it in mind, especially for things like diagrams and tables.

    Not really what you want researchers to waste their time doing.

    But you can use any of the numerous html->epub packagers yourself.

  • pspeter3 17 hours ago

    Why epub? Isn’t it just HTML under the hood?

    • ekjhgkejhgk 17 hours ago

      Because I can open it on my ereader.

el3ctron 21 hours ago

Accessibility barriers in research are not new, but they are urgent. The message we have heard from our community is that arXiv can have the most impact in the shortest time by offering HTML papers alongside the existing PDF.

  • lalithaar 20 hours ago

    Hello, I was going through the HTML versions of my preprints on arXiv. Thank you for all that you do! Please let me know if the community could contribute through any means.

    • dginev 13 hours ago

      You can help make LaTeXML better, or you can simply report issues when you spot them during reading. Some we have collected automatically (any errors and missing packages), but others we can't: wrong colors, broken aspect ratios of figures, weirdly laid out author lists, etc.

Barbing 20 hours ago

>Did you know that 90% of submissions to arXiv are in TeX format, mostly LaTeX? That poses a unique accessibility challenge: to accurately convert from TeX—a very extensible language used in myriad unique ways by authors—to HTML, a language that is much more accessible to screen readers and text-to-speech software, screen magnifiers, and mobile devices.

Challenging. Good work!

leobg 18 hours ago

It must have been around 1998. I was editor of our school’s newspaper. We were using Corel Draw. At some point, I proposed that we start using HTML instead. In the end, we decided against it, and the reasons were the same that you can read here in the comments now.

constantcrying 2 hours ago

Reading this thread, many people do not seem to understand what the problem even is. What researchers writing papers want is a low-effort/high-flexibility way to write documents (nobody wants to write their paper in HTML). For a paper to be printed, it needs to be in a printable format, like PDF. To provide accessibility and accommodate the changing ways papers are read, which is increasingly online, HTML is also a desirable output.

What is really needed is a markup language that can natively target both PDF and HTML. This is something Typst is working on, but I am not aware of any other project that either comes close to the features of LaTeX or supports both target formats.

To me this is the only reasonable way to address the accessibility and usability issues around papers: one markup language, with sufficient accessibility features, that simultaneously targets HTML and PDF.

sega_sai 20 hours ago

Unfortunately I didn't see a recommendation there on what can be done for old papers. I checked, and only my papers from after 2022 have an HTML version. I wish they'd add some kind of 'try html' button for those.

  • sundarurfriend 19 hours ago

    Do the older papers work via [Ar5iv](https://ar5iv.labs.arxiv.org/) ?

    > View any arXiv article URL [in HTML] by changing the X to a 5

    The line

    > Sources upto the end of November 2025.

    sounds to me like this is indeed intended for older articles.

    • dginev 13 hours ago

      ar5iv tracks the arXiv collection with a one-month lag, exactly to signal that it is not the "official" arXiv rendering. It is also a showcase predating the arXiv /html/ route, but it largely uses the same technology. Nowadays it is maintained by the same people (hi!).

      There used to be another showcase, called arxiv-vanity. They captured what happened pretty well with their farewell post on their homepage:

      https://www.arxiv-vanity.com/

percentcer 16 hours ago

Dumb question but what stops browsers from rendering TeX directly (aside from the work to implement it)? I assume it's more than just the rendering

  • bo1024 15 hours ago

    You mean a display engine that works like an HTML renderer, except starting from TeX source instead of HTML source? I think you could get something that mostly works, but it would be a pain and at the end you wouldn't have CSS or javascript, so I don't think browser makers are interested.

  • pwdisswordfishy 15 hours ago

    For starters, TeX is Turing-complete, and the tokenizer is arbitrarily reprogrammable at runtime.

    • gbear605 15 hours ago

      Browsers already support JavaScript anyway, so why not add another Turing-complete language into the mix? (Not even accounting for CSS technically being Turing-complete, or WASM, or …)

    • ErroneousBosh 15 hours ago

      Okay then, what would stop you rendering TeX to SVG and embedding that?

      Edit: Genuine question, not rhetorical - I don't know how well it would work but it sounds like it should.

      • fooofw 15 hours ago

        That would (mostly if not always) work in the sense of reproducing the layout of the pages, but it would defeat the purpose of preserving the semantic information present in the TeX file (what is a heading, what is a reference and to what, which math environment is used, etc.), which AFAIK is already mostly dropped by the latex compiler on conversion to PDF.

jas39 19 hours ago

Pandoc can convert to SVG, which can then be inlined in HTML. It looks just like LaTeX, though copy/paste isn't very useful.

  • stephenlf 18 hours ago

    That doesn’t solve the accessibility issue, though. You need semantic tags.

chr15m 11 hours ago

Wish I could upvote this harder. Thank you arXiv!

nateroling 20 hours ago

Seeing the Gemini 3 capabilities, I can imagine a near future where file formats are effectively irrelevant.

  • qart 18 hours ago

    I have family members with health conditions that require periodic monitoring. For some tests, a phlebotomist comes home. For some tests, we go to a hospital. For some other tests, we go to a specialized testing center. They all give us PDFs in their own formats. I manually enter the data to my spreadsheet, for easy tracking. I use LLMs for some extraction, but they still miss a lot. At least for the foreseeable future, no LLM will ever guarantee that all the data has been extracted correctly. By "guarantee", I mean someone's life may depend on it. For now, doctors take up the responsibility of ensuring the data is correct and complete. But not having to deal with PDFs would make at least a part of their job (and our shared responsibilities) easier.

  • s0rce 18 hours ago

    Can you elaborate? Are you never reading papers directly but only using Gemini to reformat or combine/summarize?

    • nateroling 17 hours ago

      I mean that when a computer can visually understand a document and reformat and reinterpret it in any imaginable way, who cares how it’s stored? When a png or a pdf or a markdown doc can all be read and reinterpreted into an infographic or a database or an audiobook or an interactive infographic, the original format won’t matter.

  • DANmode 19 hours ago

    Files.

    Truth in general, if we aren't careful.

  • sansseriff 18 hours ago

    Seriously. More people need to wake up to this. Older generations can keep arguing over display formats if they want. Meanwhile, younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume. Why would research papers be any different?

    • JadeNB 18 hours ago

      > Meanwhile younger undergrad and grad students are getting more and more accustomed to LLMs forming the front end for any knowledge they consume.

      Well, that's terrifying. I mean, I knew it about undergrads, but I sure hoped people going into grad school would be aware of the dangers of making your main contact with research, where subtle details are important, through a known-distorting filter.

      (I mean, I'd still be kinda terrified if you said that grad students first encounter papers through LLMs. But if it is the front end for all knowledge they consume? Absolutely dystopian.)

      • sansseriff 17 hours ago

        I admit it has dystopian elements. It’s worth deciding what specifically is scary though. The potential fallibility or mistakes of the models? Check back in a few months. The fact they’re run by giant corps which will steal and train on your data? Then run local models. Their potential to incorporate bias or persuade via misalignment with the reader’s goals? Trickier to resolve, but various labs and nonprofits are working on it.

        In some ways I’m scared too. But that’s the way things are going because younger people far prefer the interface of chat and question answering to flipping through a textbook.

        Even if AI makes more mistakes or is more misaligned with the reader’s intentions than a random human reviewer (which is debatable in certain fields since the latest models came out), the behavior of young people requires us to improve the reputability of these systems (make sure they use citations, make sure they don’t hallucinate, etc). I think the technology is so much more user-friendly that fixing the engineering bugs will be easier than forcing new generations to use the older systems.

billconan 19 hours ago

I don't think HTML is the right approach. HTML is better than PDF, but it is still a format for displaying/rendering.

the actual paper content format should be separated from its rendering.

i.e. it should contain abstract, sections, equations, figures, citations etc. but it shouldn't have font sizes, layout etc.

the viewer platforms then should be able to style the content differently.

  • cluckindan 18 hours ago

    HTML alone is in fact not a format for displaying/rendering. Done properly, it is a structural representation of the content. (This is often called ”semantic HTML”.)

    They are converting to HTML to make the content more accessible. Accessibility in this context means a11y, in effect ”more accessible” equates to ”more compatible with screen readers”.

    While PDF documents can be made accessible, it is way easier to do it in HTML, where browsers build an actual AOM (accessibility object model) tree and expose it to screen readers.

    >it should contain abstract, sections, equations, figures, citations etc.

    So <article>, <section>, <math>, <figure>, <cite>, etc.
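    Put together, a paper skeleton using those elements might look like this (illustrative only; arXiv's actual generated markup differs):

```html
<article>
  <h1>Paper title</h1>
  <section aria-label="Abstract">
    <h2>Abstract</h2>
    <p>Abstract text …</p>
  </section>
  <section>
    <h2>Introduction</h2>
    <p>As shown by <cite>Knuth (1984)</cite> …</p>
    <figure>
      <math>
        <mi>E</mi><mo>=</mo><mi>m</mi><msup><mi>c</mi><mn>2</mn></msup>
      </math>
      <figcaption>Equation 1.</figcaption>
    </figure>
  </section>
</article>
```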

    • o11c 7 hours ago

      The hope for semantic HTML died the day they said "stop using <i>, use <em>", regardless of what the actual purpose of the italics was (it's usually not emphasis).

    • benatkin 18 hours ago

      Much of it is a structural representation of how to display the content.

    • Theodores 15 hours ago

      I like Arxiv and what they are doing, however, do the auto-generated HTML files contain nothing more than a sea of divs dressed with a billion classes?

      I would be delighted if they could do better than that, with figcaptions as well as figures, and sections 'scoped' with just one <h2-6> heading per section. They could specify how it really should be done, the HTML way, with a well defined way of doing the abstract and getting the cited sources to be in semantic markup yet not in some massive footer at the back.

      There should also be a print stylesheet so that the paper prints out elegantly on A4 paper. Yes, I know you can 'print to PDF' but you can get all the typesetting needed in modern CSS stylesheets.
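      For instance, a minimal print stylesheet along these lines (selectors and values here are illustrative, not anyone's actual CSS):

```css
/* Minimal print stylesheet sketch */
@page {
  size: A4;
  margin: 25mm 20mm;
}
@media print {
  nav, .toolbar { display: none; }     /* hide screen-only chrome */
  h2, h3 { break-after: avoid; }       /* keep headings with their text */
  figure { break-inside: avoid; }      /* don't split figures across pages */
  a[href]::after { content: " (" attr(href) ")"; }  /* show link targets */
}
```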

      Furthermore, they need to write a whole new HTML editor that discards WYSIWYG in favour of semantic markup. WYSIWYG has held us back by decades as it is useless for creating a semantic document. We haven't moved on from typewriters and the conventions needed to get those antiques to work, with word processors just emulating what people were used to at the time. What we really need is a means to evolve the written word, so that our thinking is 'semantic' when we come to put together documents, with a 'document structure first' approach.

      LaTeX is great, however, the last time I used it was decades ago, when the tools were 'vi' (so not even vim) and GhostScript, running on a Sun workstation with a mono screen. Since then I have done a few different jobs and never have I had the need to do anything in LaTeX or even open a LaTeX file. In the wild, LaTeX is rarer than hen's teeth. Yet we all read scientific papers from time to time, and arXiv was founded on the availability of TeX files.

      The lack of widespread adoption of semantic markup has been a huge bonus to Google and other gatekeepers that have the money to develop their own heuristics to make sense of 'seas of divs'. As it happens, Google have also been somewhat helpful with Chrome and advancing the web, even if it is for their gatekeeping purposes.

      The whole world of gatekeeping is also atrocious in academia. Knowledge wants to be free, but it is also big business to the likes of Springer, who are already losing badly to open publishing.

      As you say, in this instance, accessibility means screen readers, however, I hope that we can do better than that, to get back to the OG Tim Berners Lee vision of what the web should be like, as far as structuring information is concerned.

      • dginev 13 hours ago

        You will be delighted. Feel free to inspect some sources.

  • m-schuetz 17 hours ago

    That's a purist stance that's never going to work out in practice. Authors will always want to adjust the presentation of content, and HTML might be even better suited for that than LaTeX, which is bad at both.

  • dimal 19 hours ago

    Perfect is the enemy of good. HTML is good enough. Let’s get this done.

    And as another commenter has pointed out, HTML does exactly what you ask for. If it’s done correctly, it doesn’t contain font sizes or layout. Users can style HTML differently with custom CSS.

    • billconan 19 hours ago

      mixing rendering definitions with content (PDF) is something from the printer era, that is unsuitable for the digital era.

      HTML was a digital format, but it wanted to be a generic format for all document types, not just papers, so it contains a lot of extras that a paper format doesn't need.

      for research papers, since they share the same structure, we can further separate content from rendering.

      for example, if you want to later connect a paper with an AI, do you want to send <div class="abstract"> ... ?

      or do some nasty heuristic to extract the abstract, like document.getElementsByClassName("abstract")[0]?
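      That heuristic is easy enough to sketch with nothing but the standard library, which is sort of the point: the class name "abstract" here is an assumption about the markup, not a published contract:

```python
from html.parser import HTMLParser

class AbstractExtractor(HTMLParser):
    """Heuristically collect text inside the first element whose class
    attribute contains 'abstract' (the class name is an assumption)."""
    def __init__(self):
        super().__init__()
        self.depth = 0      # >0 while inside the abstract element
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "")
        if self.depth or "abstract" in classes.split():
            self.depth += 1

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.chunks.append(data)

html_doc = '<div class="abstract"><p>We study X.</p></div>'
p = AbstractExtractor()
p.feed(html_doc)
print("".join(p.chunks).strip())  # We study X.
```

      A format with an explicit [abstract] field would make this loop, and its fragility, unnecessary.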

      • simonw 18 hours ago

        All of the interesting LLMs can handle a full paper these days without any trouble at all. I don't think it's worth spending much time optimizing for that use-case any more - that was much more important two years ago when most models topped out at 4,000 or 8,000 tokens.

  • bob1029 19 hours ago

    > HTML is better than PDF

    I disagree. PDF is the most desirable format for printed media and its analogues. Any time I plan to seriously entertain a paper from Arxiv, I print it out first. I prefer to have the author's original intent in hand. Arbitrary page breaks and layout shifts that are a result of my specific hardware/software configuration are not desirable to me in this context of use.

    • ACCount37 18 hours ago

      I agree that PDF is best for things that are meant to be printed, no questions. But I wonder how common actually printing those papers is?

      In research and in embedded hardware both, I've met some people who had entire stacks of papers printed out - research papers or datasheets or application notes - but also people who had 3 monitors and 64GB of RAM and all the papers open as browser tabs.

      I'm far closer to the latter myself. Is this a "generational split" thing?

      • pfortuny 18 hours ago

        Possibly, but then again: when I need to study a paper, I print it; when I just need to skim it and use a result from it, it is more likely that I read it on a screen (tablet/monitor). That is the difference for me.

    • s0rce 18 hours ago

      I used to print papers, probably stopped about 10 years ago. I now read everything in Zotero where I can highlight and save my annotations and sync my library between devices. You can also seamlessly archive html and pdfs. I don't see people printing papers in my workplace that often unless you need to read them in a wet lab where the computer is not convenient.

  • afavour 19 hours ago

    Wouldn’t that be CSS?

    • billconan 19 hours ago

      no

          <div class="abstract-container">
            <div class="abstract">
              <pre><code> abstract text ... </code></pre>
            </div>
            <div class="author-list">
              <ol>
                <li>author one</li>
                <li>author two</li>
              </ol>
            </div>
          </div>

      should be just:

      [abstract]

      abstract text

      [authors]

      author one | email | affiliation

      author two | email | affiliation

      • afavour 19 hours ago

        Sounds like XML and XSL would be a great fit here. Shame it’s being deprecated.

        But you could still use HTML. Elements with a dash in are reserved for custom elements (that is, a new standardised element will never take that name) so you could do:

            <paper-author-list>
              <paper-author />
            </paper-author-list>
        
        And it would be valid HTML. Then you’d style it with CSS, with

            paper-author {
              display: list-item;
            }
        
        And so on.
        • bawolff 18 hours ago

          Nothing is stopping you from using server-side XSL. I personally don't think it's a great fit, but people need to stop acting like XSL has been wiped from the face of the earth.

          • afavour 18 hours ago

            Yes but we’re specifically talking about a display format here. Something requiring a server side transform before being viewable by a user is a clear step backwards.

            • bawolff 17 hours ago

              How so? I can't think of any advantage to having client side xsl over outputting two files, in this context.

              • afavour 17 hours ago

                The discussion is about the form in which you share papers. With HTML you just share the HTML file, it opens instantly on basically any device.

                If you distribute the paper as XML with an XSLT transform you need to run something that’ll perform that transform before you can read the paper. No matter whether that transform happens on the server or on the client it’s still an extra complication in the flow of sharing information.

        • xworld21 16 hours ago

          Indeed, LaTeXML (the software used by arXiv) converts LaTeX to a semantic XML document which is turned to HTML using primarily XSLT!

      • panzi 19 hours ago

        There is <article>, <section>, <figure>, <figcaption>, but yes, <abstract> and <authors> are missing as such. But there are meta tags for such things. Then there is RDF and schema.org's Thing. Not quite the same, I know, but it's not completely useless.

        • kevindamm 18 hours ago

          and you could shim these gaps with custom components, hypothetically

ashleyn 20 hours ago

Can't help but wonder if this was motivated in part by people feeding papers into LLMs for summary, search, or review. PDF is awful for LLMs. You're effectively pigeonholed into using (paying for) Adobe's proprietary app and models, which barely hold a candle to Gemini or Claude. There are PDF-to-text converters, but they often mangle the formatting.

  • jrk 20 hours ago

    Not sure when you last tried, but Gemini, Claude, and ChatGPT have all supported pretty effective PDF input for quite a while.

teddy-smith 18 hours ago

It's extremely easy to convert HTML/CSS to a PDF with the print to PDF feature of the browser.

All papers should be in HTML/CSS or Tex then just simply converted to PDF.

Why are we even talking about this?

  • tefkah 18 hours ago

    What are you talking about? No one’s writing their paper in HTML.

    The problem is having the submissions be in TeX and converting that to HTML, when the only output has been PDF for so long.

    The problem isn’t converting HTML to PDF, it’s making available a giant portion of TeX/pdf only papers in HTML.

    If you’re arguing that maybe TeX then shouldn’t be the source format for papers, then I agree, but other than Typst (which also isn’t perfect about HTML output yet) there aren’t that many widely accepted/used authoring formats for physics/math papers, which is what arXiv primarily hosts.

  • crazygringo 15 hours ago

    Have you ever written a paper for publication?

    HTML doesn't support the necessary features. Citations in various formats, footnotes, references to automatically numbered figures and tables, I could go on and on.

    HTML could certainly be extended to support those, but it hasn't been. That's why we're talking about this.

    • teddy-smith 12 hours ago

      Come on, are you serious? HTML/CSS is more powerful than TeX or PDF.

      https://csszengarden.com/

      • crazygringo 10 hours ago

        Did you fully read my comment? Please point me to where HTML/CSS provide the features I listed.

        It doesn't really matter if HTML/CSS is more powerful at a hundred other layout things, if it doesn't provide the absolute necessary features for papers.

  • ekjhgkejhgk 18 hours ago

    LOL what. You're either trolling, or you've never written a paper in your life.

    • teddy-smith 12 hours ago

      It sounds like you might not understand the power of modern HTML/CSS.

  • nkrisc 18 hours ago

    So, uh, where do the HTML versions of the papers come from?

  • benatkin 18 hours ago

    It's easy to convert PDF to HTML/CSS, with similar results.

    Either way it gets shoehorned.

  • carlosjobim 17 hours ago

    Except you can't have page breaks, three links in a row, anchor links.

    • teddy-smith 12 hours ago

      @media print { .page, .page-break { break-after: page; } }

      • carlosjobim 11 hours ago

        It doesn't function in real use, it's just theoretical.

_dain_ 17 hours ago

Wasn't the World Wide Web invented at CERN specifically for sharing scientific papers? Why are we still using PDFs at all?

  • fsh 16 hours ago

    No, it wasn't. Scientists at CERN used DVI and later PDF like everyone else. HTML has no provisions for typesetting equations and is therefore not suitable for physics papers (without much newer hacks such as MathML).

cubefox 18 hours ago

This is not new, the title should say (2023). They have shipped the HTML feature with "experimental" flag for two years now, but I don't know whether there is even any plan to move out of the experimental phase.

It's not much of an "experiment" if you don't plan to use some experimental data to improve things somehow.

lalithaar 20 hours ago

I was reading through this article too, glad to have found it on here

rootnod3 20 hours ago

Maybe unpopular, but papers should be in a markdown flavor (to be determined), just to make them more machine-readable.

  • xigoi 19 hours ago

    Compared to HTML, Markdown is very bad at being machine-readable.

vatsachak 19 hours ago

Why do we like HTML more than pdfs?

HTML rendering requires you to be connected to the internet, or to set up the images and MathJax locally. A PDF just works.

HTML obviously supports dynamic embedding, such as programs, much better but people just usually post a github.io page with the paper.

  • devnull3 18 hours ago

    > HTML rendering requires you to be connected to the internet

    Not really. One can always generate a self-contained html. Both CSS and JS (if needed) can be inline.
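    A crude sketch of such inlining (regex-based, so a real implementation would want a proper HTML parser and URL resolution; the file names are made up):

```python
import re
from pathlib import Path

def inline_assets(html: str, root: Path) -> str:
    """Replace <link rel="stylesheet" href=...> and <script src=...>
    with inline <style>/<script> blocks so the page works offline.
    Sketch only: ignores encodings, media queries, and remote URLs."""
    def css(m):
        return "<style>" + (root / m.group(1)).read_text() + "</style>"
    def js(m):
        return "<script>" + (root / m.group(1)).read_text() + "</script>"
    html = re.sub(r'<link[^>]*href="([^"]+\.css)"[^>]*>', css, html)
    html = re.sub(r'<script[^>]*src="([^"]+)"[^>]*></script>', js, html)
    return html
```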

    • vatsachak 17 hours ago

      True, but the webdev idiom is injecting things such as MathJax from a CDN. I guess one can pre-render the page and save that, but that's kind of like a PDF already.

  • nine_k 18 hours ago

    Try opening a PDF on a phone screen.

    • vatsachak 17 hours ago

      I do it all the time to read papers. It's easy

  • mmooss 17 hours ago

    epub 'just works' locally, and it's html under the hood.

  • recursive 18 hours ago

    Why would html rendering require a network connection? It doesn't seem to on my machine.

    • vatsachak 17 hours ago

      Things like LaTeX equation rendering are typically hosted on a CDN.

      • krapp 17 hours ago

        They can be but don't need to be. Any javascript can be localized like HTML and CSS.

        • vatsachak 16 hours ago

          That's fair, but imagine trying to get the average reader up to speed with something like npm.

          • krapp 14 hours ago

            You don't actually need npm either. You can literally just distribute everything - html, css, images and js in a zipped folder and open it locally.