magicalist 6 hours ago

The overhyped tweet from the Robinhood guy raising money for his AI startup is nicely put into perspective by Thomas Bloom (including the point that the solved statement of #124 is not the one posed in the cited paper, "Complete sequences of sets of integer powers" [BEGL96]):

> This is a nice solution, and impressive to be found by AI, although the proof is (in hindsight) very simple, and the surprising thing is that Erdos missed it. But there is definitely precedent for Erdos missing easy solutions!

> Also this is not the problem as posed in that paper

> That paper asks a harder version of this problem. The problem which has been solved was asked by Erdos in a couple of later papers.

> One also needs to be careful about saying things like 'open for 30 years'. This does not mean it has resisted 30 years of efforts to solve it! Many Erdos problems (including this one) have just been forgotten about, and nobody has seriously tried to solve it.[1]

And, indeed, Boris Alexeev (who ran the problem through Aristotle) agrees:

> My summary is that Aristotle solved "a" version of this problem (indeed, with an olympiad-style proof), but not "the" version.

> I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference.[2]

Not to rain on the parade out of spite; it's just that this is neat, but not unusually neat compared to the last few months.

[1] https://twitter.com/thomasfbloom/status/1995083348201586965

[2] https://www.erdosproblems.com/forum/thread/124#post-1899

  • NooneAtAll3 4 hours ago

    Reading the original paper and the Lean statement that got proven, it's kinda fascinating what exactly is considered interesting and hard in this problem.

    Roughly, what the Lean theorem (and the statement on the website) asks is this: take some numbers t_i, form all the powers t_i^j for each of them, then combine them all into a multiset T. Barring some necessary conditions, prove that you can pick a subset of T that sums to any number you want.

    What the Erdős problem in the paper asks is to add one more step: arbitrarily cut off the beginnings of the t_i^j power sequences before merging. Erdős and co. conjectured that only finitely many subset sums stop being possible.

    "subsets sum to any number" is an easy condition to check (that's why "olympiad level" gets mentioned in the discussion) - and it's the "arbitrarily cut off" that's the part that og question is all about, while "only finite amount disappear" is hard to grasp formulaically

    So... overhyped: yes. Not actually the Erdős problem proven: yes. Usual math-olympiad-level problems being solvable by the current level of AI, as shown by this year's IMO: also yes (just don't get caught by https://en.wikipedia.org/wiki/AI_effect on the backlash, since olympiads are haaard! really!).

  • NooneAtAll3 5 hours ago

    I was so happy for this result until I saw your mention of the Robinhood hype :/

  • chihuahua 4 hours ago

    So in short, it was an easy problem that had already been solved thousands of years ago and the proof was so simple that it doesn't really count, and the AI used too many em-dashes in its response and it totally sucks.

    • sebastianz 21 minutes ago

      > problem that had already been solved thousands of years ago

      If by this you refer to "Aristotle" in the parent post - it's not that Aristotle. This is "Aristotle AI" - the name of their product.

WhyOhWhyQ 6 hours ago

I predict that the way AI will be useful in science, from the perspective of mathematics, is by figuring out combinatorially complex solutions to problems that would otherwise not be interesting to (or far too complex to be comprehended by) humans. With such capabilities, one could imagine the AI being useful for designing super materials, doing fancy things with biology / medicine, and generally finding useful patterns in complex systems.

  • Someone 2 hours ago

    But abstract mathematics doesn’t care about solutions to problems; it cares about understanding problem spaces. I do not think current AI helps with that.

    Problems like the one discussed aren't interesting to applied mathematicians either, because of the lack of applications.

    But yes, if this kind of AI produces new materials, cures diseases, etc., it will be very useful. We wouldn't care whether it arrived at those solutions through valid reasoning, though. A sloppy AI that has better 'guesses/intuitions' than humans, or that can guess and check those 'guesses/intuitions' for correctness faster, would be very useful.

    • adastra22 an hour ago

      And engineers don’t care about abstract mathematics: we care about math that solves problems. Being able to solve more problems with fewer human-years of effort is a big win.

  • estebarb 4 hours ago

    However, they would at most be the heuristic function of a search mechanism. A good heuristic, but a heuristic nonetheless. For search, we need to identify when to abandon a path and which other entry point is promising. I'm not sure our current techniques are good for this kind of problem.

  • bluecalm 2 hours ago

    I think it will be even more helpful to know that a simple proof doesn't exist because AI has tried for long enough and didn't find one. Once people know there is no easy proof of, say, Collatz or the Twin Primes Conjecture, those will not be as alluring to waste your time on.

newmana 5 hours ago

"Recently, yet another category of low-hanging fruit has been identified as within reach of automated tools: problems which, due to a technical flaw in their description, are unexpectedly easy to resolve. Specifically, problem #124 https://www.erdosproblems.com/124 was a problem that was stated in three separate papers of Erdos, but in two of them he omitted a key hypothesis which made the problem a consequence of a known result (Brown's criterion). However, this fact was not noticed until Boris Alexeev applied the Aristotle tool to this problem, which autonomously located (and formalized in Lean) a solution to this weaker version of the problem within hours."

https://mathstodon.xyz/@tao/115639984077620023
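
For readers who, like me, had to look it up (stated from memory, so double-check a reference): Brown's criterion says that a nondecreasing sequence of positive integers $a_1 \le a_2 \le \dots$ represents every positive integer as a finite subset sum if and only if

    \[
      a_1 = 1 \quad\text{and}\quad a_{n+1} \le 1 + \sum_{i=1}^{n} a_i \quad\text{for all } n \ge 1.
    \]

If I'm reading the thread correctly, the weaker version Erdős stated allows 1 into the multiset of powers, which is exactly what lets a criterion of this shape apply directly.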

  • adastra22 an hour ago

    That doesn’t seem very fair. The problem was stated, and remained unsolved for all this time. You can’t take away that accomplishment just because the solution seems easy in hindsight.

    • zozbot234 an hour ago

      It's technically true that this version of the problem was "low-hanging fruit", so that's an entirely fair assessment. Systematically spotting low-hanging fruit that others had missed is an accomplishment, but it's quite different from solving a genuinely hard problem and we shouldn't conflate the two.

      • adastra22 an hour ago

        My point is stronger than that. Some things only appear to be low-hanging fruit in hindsight. My own field of physics is full of examples. Saying “oh, that should’ve been easy” is wrong more often than it is right.

        • zozbot234 an hour ago

          Sure, but unless all solvable problems can be said to "appear as low-hanging fruit in hindsight" this doesn't detract from Tao's assessment in any way. Solving a genuinely complex problem is a different matter than spotting simpler solutions that others had missed.

          In this case, the solution might have been missed before simply because the difference between the "easy" and "hard" formulations of the problem wasn't quite clear, perhaps even to Erdős, prior to it being restated (manually) as a Lean goal to be solved. So this is a success story for formalization as much as for AI.

          • menaerus 17 minutes ago

            One of the math academics on that thread says the following:

            > My point is that basic ideas reappear at many places; humans often fail to realize that they apply in a different setting, while a machine doesn't have this problem! I remember seeing this problem before and thinking about it briefly. I admit that I haven't noticed this connection, which is only now quite obvious to me!

            Doesn't this sound extremely familiar to all of us who were taught difficult/abstract topics? Looking at the problem, you don't have the slightest idea what it is about, but then someone comes along and explains the inner bits and it suddenly "clicks" for you.

            So, yeah, this is exactly what I think is happening here. The solution was there, and it was simple, but nobody discovered it up until now. And now that we have an explanation for it we say "oh, it was really simple".

            The bit that makes it very interesting is that this hadn't been discovered before, and now it has been discovered by an AI model.

            Tao challenges this by hypothesizing that it actually was done before but never "released" officially, which is why the model was able to solve the problem. However, there's no evidence (yet) for his hypothesis.

menaerus a day ago

This seems to be the second proof in a row from the same author using AI models. The first time, it was ChatGPT that wrote the formal Lean proof for Erdős Problem #340.

https://arxiv.org/html/2510.19804v1#Thmtheorem3

> In over a dozen papers, beginning in 1976 and spanning two decades, Paul Erdős repeatedly posed one of his “favourite” conjectures: every finite Sidon set can be extended to a finite perfect difference set. We establish that {1, 2, 4, 8, 13} is a counterexample to this conjecture.
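
As a quick sanity check on the quoted statement (my own toy script, not from the paper): {1, 2, 4, 8, 13} is indeed a Sidon set, i.e. all pairwise differences of distinct elements are distinct. The actual content of the result, that it cannot be extended to a finite perfect difference set, is of course not checked by this.

    from itertools import combinations

    def is_sidon(s):
        # A Sidon (B_2) set has all pairwise differences b - a (for a < b) distinct,
        # equivalently all pairwise sums of two elements distinct.
        diffs = [b - a for a, b in combinations(sorted(s), 2)]
        return len(diffs) == len(set(diffs))

    print(is_sidon({1, 2, 4, 8, 13}))  # True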

  • NooneAtAll3 6 hours ago

    That one was vibe-coded.

    The AI was given an already-found, step-by-step proof and asked to "please rewrite it in Lean".

    ---

    Here the AI did the proof itself.

mmmmbbbhb 15 hours ago

Meanwhile I have to use AI just to understand the problem statement.

  • lolive 4 hours ago

    Which makes you one of the top 1% of AI power users. [The other 99% are asking for the breast size of whatever starlet appeared on TV.]

wasmainiac a day ago

OK… has this been verified? I see no publication, or even an announcement, on Harmonic's webpage. If this were a big deal, you'd think it would be treated as one. Or is this just hype?

  • singularity2001 a day ago

    Verified by Lean, so 99.99% yes.

    • cluckindan 21 hours ago

      Lean verified a proof of a solution to a problem, but was it the same problem as Erdős problem #124?

      https://www.erdosproblems.com/forum/thread/124#post-1899

      • wasmainiac 15 hours ago

        > My summary is that Aristotle solved "a" version of this problem (indeed, with an olympiad-style proof), but not "the" version.

        > I agree that the [BEGL96] problem is still open (for now!), and your plan to keep this problem open by changing the statement is reasonable. Alternatively, one could add another problem and link them. I have no preference. — BorisAlexeev

        There we go, so there is hype to some degree.

    • aaomidi 14 hours ago

      Is there some good literature to read about Lean? This is the first time I’m hearing about it and it seems pretty cool.

BanditDefender a day ago

"Mathematicial superintelligence" is so obnoxious. Why exactly do they think it is called an Erdős problem when Erdős didn't find the solution? Because Erdős discovered the real mathematics: the conjecture!

These people treat math research as if it is a math homework assignment. There needs to be an honest discussion about what the LLM is doing here. When you bang your head against a math problem you blindly try a bunch of dumb ideas that don't actually work. It wastes a lot of paper. The LLM automates a lot of that.

It is actually pretty cool that modern AI can help speed this up and waste less paper. It is very similar to how classical AI (Symbolica) sped up math research and wasted less paper. But we need to have an honest discussion about how we are using the computer as a tool. Instead, malicious idiots like Vlad Tenev are making confident predictions about mathematical superintelligence. So depressing.

ComplexSystems a day ago

Are you kidding? This is an incredible result. Stuff like this is the most important stuff happening in AI right now. Automated theorem proving? It's not going too far to say that the entire point of the technology was to get us to this.

  • ares623 a day ago

    This is me being snarky and ignorant, but if it solved one problem and it is automated, what’s stopping it from solving all the others? That’s what’s ultimately being sold by the tweet, right?

    • ComplexSystems 15 hours ago

      Ultimately, the main things that will stop it from solving literally "all the others" are the impossibility of solving the halting problem, considerations like P ≠ NP, etc. But as we have just seen, despite these theoretical barriers, AI systems are still able to make substantive progress on solving important open real-world problems.

    • aeve890 20 hours ago

      >This is me being snarky and ignorant, but if it solved one problem and it is automated what’s stopping it from solving all the others?

      Yeah, that's the crux of the matter. How did the AI do it? Using already existing math. If we need new math to prove Collatz, Goldbach, or Riemann, LLMs are simply SOL. That's what's missing, and what the hype boys always avoid mentioning.

      • dwohnitmok 5 hours ago

        > How did the AI do it? Using already existing math. If we need new math to prove Collatz, Goldbach, or Riemann, LLMs are simply SOL.

        An unproved theorem now proved is by definition new math. Will LLMs get you to Collatz, Goldbach, or Riemann? Unclear.

        But it's not as if some magical "new math", entirely unrelated to existing math, was required to solve all the big conjectures of the past. They proceeded, as always, by proving new theorems one by one.

        • yorwba 2 hours ago

          Yes, "new math" is neither magical nor unrelated to existing math, but that doesn't mean any new theorem or proof is automatically "new math." I think the term is usually reserved for the definition of a new kind of mathematical object, about which you prove theorems relating it to existing math, which then allows you to construct qualitatively new proofs by transforming statements into the language of your new kind of object and back.

          I think eventually LLMs will also be used as part of systems that come up with new, broadly useful definitions, but we're not there yet.

    • adi_kurian 4 hours ago

      Sucks that fuckers dupe suckers. Personally, I find the news to be incredible.

  • BanditDefender 21 hours ago

    I think it is way too far to say that!

    We've had automated theorem proving since the 60s. What we need is automated theorem discovery. Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence. Instead intelligence is a little game AI plays with Lean.

    AI researchers keep trying to reduce intelligence into something tiny and approachable, like automated theorem proving. It's easy: you write the theorem you want proven and hope you get a proof. It works or it doesn't. Nice and benchmarkable.

    Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space? Or how to suggest an alternative to Turing machines / lambda calculus that expresses the same underlying idea?

    • ComplexSystems 15 hours ago

      > We've had automated theorem proving since the 60s.

      By that logic, we've had LLMs since the 60s!

      > What we need is automated theorem discovery.

      I don't see any reason you couldn't train a model to do this. You'd have to focus it on generating follow-up questions to ask after reading a corpus of literature, playing around with some toy examples in Python and making a conjecture out of it. This seems much easier than training it to actually complete an entire proof.

      > Erdős discovered these theorems even if he wasn't really able to prove them. Euler and Gauss discovered a ton of stuff they couldn't prove. It is weird that nobody considers this to be intelligence.

      Who says they don't? I wouldn't be surprised if HarmonicMath, DeepMind, etc have also thought about this kind of thing.

      > Automated axiom creation seems a lot harder. How is an LLM supposed to know that "between any two points there is a line" formalizes an important property of physical space?

      That's a good question! It would be interesting to see if this is an emergent property of multimodal LLMs trained specifically on this kind of thing. You would need mathematical reasoning, visual information and language encoded into some shared embedding space where similar things are mapped right next to each other geometrically.

    • kvemkon 20 hours ago

      > What we need is automated theorem discovery.

      I've been thinking that mathematicians have fun doing math, making discoveries, crafting proofs.

      Does the Tour de France & Co. make no sense now that small, lightweight, and powerful e-bicycles have appeared?

      Using the computer as a helper is like using a bicycle; using LLMs seems more like using an e-bicycle, which is something else entirely.

demirbey05 a day ago

This is the response from a mathematician: "This is quite something, congratulations to Boris and Aristotle!

On one hand, as the nice sketch provided below by tsaf confirms, the final proof is quite simple and elementary - indeed, if one was given this problem in a maths competition (so therefore expected a short simple solution existed) I'd guess that something like the below would be produced. On the other hand, if something like this worked, then surely the combined talents of Burr, Erdős, Graham, and Li would have spotted it.

Normally, this would make me suspicious of this short proof, in that there is overlooked subtlety. But (a) I can't see any and (b) the proof has been formalised in Lean, so clearly it just works!

Perhaps this shows what the real issue in the [BEGL96] conjecture is - namely the removal of 1 and the addition of the necessary gcd condition. (And perhaps at least some subset of the authors were aware of this argument for the easier version allowing 1, but this was overlooked later by Erdős in [Er97] and [Er97e], although if they were aware then one would hope they'd have included this in the paper as a remark.)

At the moment I'm minded to keep this as open, and add the gcd condition in the main statement, and note in the remarks that the easier (?) version allowing 1 and omitting the gcd condition, which was also asked independently by Erdős, has been solved."

The commentator is saying: "I can't believe this famous problem was solved so easily. I would have thought it was a fake proof, but the computer verified it. It turns out the solution works because it addresses a slightly different set of constraints (regarding the number 1) than what Erdős originally struggled with." (Generated by Gemini)

  • dang 7 hours ago

    > Generated by Gemini

    Please don't post generated comments here. We want HN to be a place for humans writing in their own voice.

ares623 a day ago

[flagged]

  • dang 7 hours ago

    "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

    "Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative."

    https://news.ycombinator.com/newsguidelines.html

  • bogdan a day ago

    Seriously, blame the investors.

    • wasmainiac a day ago

      Nah, everyone is to blame here in the bubble.

sheepscreek 6 hours ago

It’s crazy that the empathetic AI I sometimes share my life’s mundane problems with wrote this. It understands such high-level maths and speaks in a lexicon with words I can’t even pronounce. Incredible versatility. If this isn’t AGI (or even superintelligence), then what is?

Sure, it suffers from amnesia and sometimes can’t get basic things right, but the former is a design limitation that can be overcome and the latter is possibly a problem caused by training (we’re discovering that an overt focus during training on prompt adherence somewhat lobotomizes general competence).

vatsachak 3 hours ago

I know the problem the AI solved wasn't that hard, but one should consider math toast within the next 10-15 years.

Software engineering is more subtle since you actually care about the performance and semantics of a program. Unless you get really good at program specification, it would be difficult to fully automate.

But with math the only thing you care about is whether the moves you made were right.

  • emil-lp 2 hours ago

    > one should consider math toast within the next 10-15 years.

    I keep a list of ridiculously failed predictions, and this goes into it.

    Can I ask you to be more concrete? What does "math is toast" mean to you?