jasperry 26 minutes ago

To me, the greatest contribution of mediocre scientists is that they teach their field to the next generation. To keep science going forward, you need enough people who understand the field to generate a sufficient probability of someone putting together the pieces of the next major discovery. That's the sense in which the numbers game is more important than the genius factor.

Conversely, entire branches of knowledge can be lost if not enough people are working in the area to maintain a common ground of understanding.

ckemere 2 hours ago

I think citations are an insufficient metric to judge these things on. My experience in writing a paper is that I have formed a well-defined model of the world, such that when I write the introduction, I have a series of clear concepts that I use to ground the work. When it comes to the citations to back these ideas, I often associate a person rather than a particular paper, then search for an appropriate paper by that person to cite. That suggests that other means for creating that association - talks, posters, even just conversations - may have significant influence. That in turn suggests a variety of personality/community influences that might drive “scientific progress” as measured by citation.

  • derbOac an hour ago

    I agree completely.

    My own experience in watching citation patterns, not even with things that I've worked on, is that certain authors or groups attract attention for an idea or result for all kinds of weird reasons, and that drives citation patterns, even when they're not the originator of the results or ideas. This leads to weird patterns: the same results are ignored before a certain "popular" paper appears, even when the "popular" paper is incredibly incremental or even a replication of previous work; previous authors discussing the same exact idea, even well-known ones, are forgotten in favor of a newer, more charismatic author; various studies have shown that retracted zombie papers continue to be cited at high rates as if they were never retracted; and so on.

    I've kind of given up trying to figure out what accounts for this. Most of the time it's just a kind of recency availability bias, where people are basically lazy in their citations, or rushed for time, or whatever. Sometimes it's a combination of an older literature simply being forgotten, together with a more recent author with a lot of notoriety for whatever reason discussing the idea. Lots of times there's this weird cult-like buzz around a person, more about their personality or presentation than anything else — as in, a certain person gets a reputation as being a genius, and then people kind of assume whatever they say or show hasn't been said or shown before, leading to a kind of self-fulfilling prophecy in terms of patterns of citations. I don't even think it matters that what they say is valid, it just has to garner a lot of attention and agreement.

    In any event, in my field I don't attribute a lot to researchers being famous for any reason other than being famous. The Matthew effect is real, and can happen very rapidly, for all sorts of reasons. People also have a short attention span, and little memory for history.

    This is all especially true of more recent literature. Citation patterns pre-1995 or so, as is the case with those Wikipedia citations, are probably not representative of the current state.

aerhardt 18 minutes ago

Very cool to see Ortega on the frontpage. He was a fine thinker - phenomenally erudite and connected to his contemporary philosophers, but also eminently readable. He is not technical, rarely uses neologisms, and writes in an easy-to-digest "stream of thought" style which resembles a lecture (I believe he repackaged his writings into lectures, and vice versa).

I can recommend two of his works:

- The Revolt of the Masses (mentioned in the article), where he analyzes the problems of industrial mass societies, the loss of self and the ensuing threats to liberal democracies. He posits the concept of the "mass individual" (hombre masa), a man who is born into the industrial society, but takes for granted the progress - technical and political - that he enjoys, does not enquire about the origins of said progress or his relationship to it, and therefore becomes malleable to illiberal rhetoric. It was written in ~1930 and in many ways the book foresees the forces that would lead to WWII. The book was an international success in its day and it remains eerily current.

- His Meditations on Technics lays out a rather simple, albeit accurate, philosophy of technology. He talks about the history of technology development, from the accidental (eg, fire), to the artisanal, to the age of machines (where the technologist is effectively building technology that builds technology). He also explains the dual-stage cycle in which humans switch between self-absorption (ensimismamiento), in which they think about their discomforts, and alteration, in which they decide to transform the world as best they can. The ideas may not be life-changing but it's one of those books that neatly models and settles things you already intuited. Some of Ortega's reflections often come to mind when I'm looking for meaning in my projects. It might be of interest to other HNers!

alphazard 31 minutes ago

Now that the internet exists, it's harder to reason about how hard a breakthrough was to make. Before information was everywhere instantly, it was common for discoveries to be made concurrently, separated by years, but genuinely without either scientist knowing of the other's work.

That distance between when the two (or more) similar discoveries happened gives insight into how difficult it was. Separated by years, and it must have been very difficult. Separated by months or days, and it is likely an obvious conclusion from a previous discovery. Just a race to publish at that point.

lumost 40 minutes ago

I'm very curious if anyone has tried to control for the natural hierarchies which form in Academia. e.g. A researcher who rises to the top of a funding collaboration will have a disproportionate number of citations due to their influence on funding flows. Likewise, those who influence the acceptances/reviewers at major conferences will naturally attract more citations of their work either by featuring it over other work or correctly predicting where the field was heading based on the paper flows.

TrainedMonkey 2 hours ago

Some other hypotheses:

- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis

- Matthew effect - extends the "successful people are successful" observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their own labs and get their name on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...

If I were allowed to speculate I would make a couple of observations. The first is that resources play a huge role in research, so the overall direction of progress is influenced more by economics than by any group. For example, every component of a modern smartphone got hyper-optimized via massive capital injections. The second is that this is the real world and thus likely some kind of power law applies. I don't know the exact numbers, but my expectation is that the top 1% of researchers produce way more output than the bottom 25%.
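
To make that concrete, here's a toy simulation (a sketch only: the Pareto shape parameter alpha is made up, not estimated from real bibliometric data):

    # Toy sketch: researcher "output" drawn from an assumed Pareto
    # (power-law) distribution; alpha is a made-up shape parameter.
    import random

    random.seed(0)
    alpha = 1.5
    n = 100_000
    outputs = sorted(random.paretovariate(alpha) for _ in range(n))

    total = sum(outputs)
    bottom_25_share = sum(outputs[: n // 4]) / total    # least productive 25%
    top_1_share = sum(outputs[n - n // 100 :]) / total  # most productive 1%

    print(f"bottom 25% share of output: {bottom_25_share:.1%}")
    print(f"top 1% share of output:     {top_1_share:.1%}")

With alpha = 1.5 the top 1% alone out-produces the entire bottom 25%, consistent with that expectation.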

  • Qem 2 hours ago

    > Newton - predicts that most advances are made by standing on the shoulders of giants.

    Leibniz did the same, in the same timeframe. I think this lends credence to the Ortega hypothesis. We see the people that connect the dots as great scientists. But the dots must be there in the first place. The dots are the work of the myriad nameless scientists/scholars/scribes/artisans. Once the dots are in place, somebody always shows up to take the last hit and connect them. Sometimes multiple individuals at once.

    • oytis 2 minutes ago

      > The dots are the work of the myriad nameless scientists/scholars/scribes/artisans

      That is not plausible IMO. Nobody has the capacity to read the works of a myriad of nameless scientists, not even Isaac Newton. Even less likely that Newton and Leibniz were both familiar with the same works of minor scientists.

      What is much more likely is that the well-known works of other great mathematicians prepared the foundation for both to reach similar results.

    • matthewdgreen 2 hours ago

      Which raises the question of whether there are any results so surprising that it's unlikely that any other scientists would have stumbled onto them in a reasonable time frame.

      • interroboink an hour ago

        I've heard Einstein's General Relativity described that way.

        Special Relativity was not such a big breakthrough. Something like it was on the verge of being described by somebody in that timeframe — all the pieces were in place, and science was headed in that direction.

        But General Relativity really took everyone by surprise.

        At least, that's my understanding from half-remembered interviews from some decades ago (:

  • jvanderbot an hour ago

    As with all things - both are probably true.

    It might be that we attribute post hoc greatness to a small number of folks, but require a lot of very interested / ambitious folks to find the most useful threads to pull, run the labs, catalog data, etc.

    It's only after the fact that we go back and say "hey this was really useful". If only we knew ahead of time that Calculus and "tracking stars" would lead to so many useful discoveries!

  • ramesh31 an hour ago

    >Matthew effect - extends the "successful people are successful" observation to scientific publishing. Big names get more funding and easier journal publishing, which gets them more exposure, so they end up with their own labs and get their name on a lot of papers.

    There's a ton of this among historical figures in general. Almost any great person you can name throughout history was, with few exceptions, born to a wealthy, connected family that set them on their course. There are certainly exceptions of self-made people here and there, and they do tend to be much more interesting. But just about anyone you can easily name in the history of math/science/philosophy was a rich kid who was afforded the time and resources to develop themselves.

  • ajross 2 hours ago

    > Newton - predicts that most advances are made by standing on the shoulders of giants

    Giants can be wrong, though; so there's a "giants were standing on our shoulders" problem to be solved. The amyloid-beta hypothesis held up Alzheimer's work for decades based on a handful of seemingly-fraudulent-but-never-significantly-challenged results by the giants of the field.

    Kuhn's "paradigm shift" model speaks to this. Eventually the dam breaks, but when it does it's generally not by the sudden appearance of new giants but by the gradual erosion of support in the face of years and years of bland experimental work.

    See also astronomy right now, where a never-really-satisfying ΛCDM model is finally failing in the face of new data. And it turns out the data isn't only from Webb and new instruments! The older data never fit either, but no one cared.

    Continental drift had a similar trajectory, with literally hundreds of years of pretty convincing geology failing to challenge established assumptions until it all finally clicked in the '60s.

karmakaze 32 minutes ago

There are plenty of examples on both sides. There's no need for one to be true and the other false. Geniuses get recognition, so it makes sense for the smurfing contributors to also get a nod.

AlexNet, for example, was only possible because of previously developed algorithms, but also because of the availability of GPUs for highly parallel processing and, importantly, the labelled ImageNet data.

pavel_lishin 3 hours ago

> Even minor papers by the most eminent scientists are cited much more than papers by relatively unknown scientists

I wonder if this is because a paper citing an eminent scientist is likely to be taken more seriously than one citing a paper that might actually be more relevant.

  • observationist 3 hours ago

    It's a status game, primarily - they want credibility by association. Erdős numbers and those types of games are very significant in academia, and part of the underlying dysfunction in peer review. Bias towards "I know that name, it must be serious research" and assumptions like "Well, if it's based on a Schmidhuber paper, it must be legitimate research" make peer review a very psychological and social game, rather than a dispassionate, neutral assessment of hypotheses and results.

    There's also a monkey see, monkey do aspect, where "that's just the way things are properly done" comes into play.

    Peer review as it is practiced is the perfect example of Goodhart's law. It was a common practice in academia, but not formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules and just like SEO, people became incentivized to hack those rules without honoring the underlying intent.

    Now a significant share of research across all fields meets all the technical criteria for publishing, yet up to half of it in some fields cannot be reproduced, and there's a whole lot of outright fraud, used to swindle research dollars and grants.

    Informal good faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.

    • mattkrause an hour ago

      I'm sure status is part of it but I think it's almost certainly driven by "availability."

      Big names give more talks in more places and people follow their outputs specifically (e.g., author-based alerts on PubMed or Google Scholar), so people are more aware of their work. There are often a lot of papers one could cite to make the same point, and people tend to go with the ones that they've already got in mind....

rcpt 2 hours ago

> Ortega most likely would have disagreed with the hypothesis that has been named after him, as he held not that scientific progress is driven mainly by the accumulation of small works by mediocrities, but that scientific geniuses create a framework within which intellectually commonplace people can work successfully

This is hilarious

nyeah 35 minutes ago

This is interesting but how could we really determine the answer? It seems very difficult not to get pulled into my own opinions about how it "must work".

stronglikedan an hour ago

Smart people know how to aggregate and apply relevant data that others worked to bring to fruition.

unsupp0rted 2 hours ago

I am instantly skeptical of hypotheses that sound nice and egalitarian.

Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.

  • thmsths 2 hours ago

    It's not that everyone contributes equally. It's that everyone's contribution matters. And while small contributions are less impressive, they are also more numerous, much more numerous, which means that it's not out of the question that in aggregate they matter more, and that they should not be discounted. As Napoleon allegedly said, "quantity has a quality of its own".

    • jvanderbot 2 hours ago

      Moreover, the researchers are the contributing 20% (or more like 2%). It's probably fractal, but if you zoom out even a little, there's a long tail of not-much in any group.

  • captainbland an hour ago

    The Pareto principle gets "interesting" when you involve hierarchical categories. For instance, the category of "researchers" is arguably arbitrary. Why not research labs? Why not research universities? If we write off 80% of universities, 80% of labs in the top 20% of universities, and 80% of researchers within the top 20% of labs, then the number of impactful researchers would in fact be 0.2 * 0.2 * 0.2, or 0.8% of researchers, which seems extreme.
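
    A quick back-of-the-envelope of that compounding (hypothetical numbers: the independent 20% cut at each level is the assumption):

        # Apply an 80/20 cut at each level of an assumed hierarchy:
        # universities -> labs -> researchers.
        share = 1.0
        for level in ["universities", "labs", "researchers"]:
            share *= 0.2  # keep only the top 20% at this level
        print(f"impactful researchers: {share:.1%}")  # -> 0.8%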

    That said, if we accept that 20% of all working people are doing useful work, can you guarantee that not all research scientists are within that category?

    And indeed there are different fields and the distributions of effectiveness may be incomparable.

    I think the nature of scientific and mathematical research is interesting in that often "useless" findings can find surprising applications. Boolean algebra is an interesting example of this in that until computing came about, it seemed purely theoretical in nature and impactless almost by definition. Yet the implications of that work underpinned the design of computer processors and the information age as such.

    This creates a quandary: we can say perhaps only 20% of work is relevant, but we don't necessarily know which 20% in advance.

  • oytis 23 minutes ago

    Also the evidence for Newton hypothesis seems so much stronger. Like, how do you even measure the invisible influence of mediocre scientists?

  • umutisik 2 hours ago

    You still need the other 80% of the folks to get the remaining 20% of the work done :)

  • pryelluw 2 hours ago

    But without the 80%, how would the 20% exist?

    • unsupp0rted 19 minutes ago

      Conversely, without the 80% the 20% might be unencumbered.

      Imagine 2 Earths: one with 10 million researchers and the other with 2 million, but the latter is so cut-throat that the 2 million are Science Spartans.

    • hobs 2 hours ago

      And more specifically, if we knew which science to fund ahead of time, we'd never have anything but 100% successes. Science is often random, and huge parts of it are not obviously useful ahead of time, some of which later become enormously useful.

  • laidoffamazon 2 hours ago

    “Might as well not exist” - what should be done of the bottom 80% of society then? I’m sure this applies to SWEs too.

  • ihm 2 hours ago

    > Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.

    What does this even mean? Do you think in an ant colony only the queen is needed? Or in a wolf pack only the strongest wolf?

throwaway72610 an hour ago

Chronologies point toward a working theory of advancing science, which is the subject of Ortega's contention for mediocre scholars working on accumulating citations, footnotes, etc. For a proper understanding of technical pieces, Cal Newport's concept of deep work is essential.

qrian 2 hours ago

This sounds like the concept of ‘normal science’ in paradigm theory.

renewiltord 2 hours ago

It's probably like venture capital. There are many scientists who test many hypotheses. Many are bad at generating hypotheses or running tests. Some are good at one or the other. Some are good at both and just happen to pick the ones that don't work. Some are good at all.

But you can't tell ahead of time which one is which. Maybe you can shift the distribution, but often the pathological cases you exclude are precisely the ones you wanted not to exclude (your Karikos get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.

Like searching an unsorted list of n numbers for one in particular. You kind of need to test all the numbers till you find yours. The search cost is just the cost. You can't uncost it by just picking the right index. That's not a meaningful statement.
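
A minimal sketch of that analogy (hypothetical code, not a claim about any real selection process): with no ordering to exploit, every probe order has the same expected cost, so the search cost really is just the cost.

    # Linear search of an unsorted list: every strategy reduces to testing
    # elements one by one, so the expected cost is ~n/2 comparisons.
    def linear_search(items, target):
        for i, x in enumerate(items):  # one comparison per probe
            if x == target:
                return i
        return -1

    data = [7, 3, 9, 1, 4]         # unsorted: no structure to exploit
    print(linear_search(data, 1))  # -> 3, found only after probing others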

  • theurerjohn3 2 hours ago

    There is a blog post I read somewhere, which I cannot find at the moment, that discusses the idea of "doctor problems" vs "musician problems". Doctor problems are problems where low-quality solutions are deeply bad, so you should avoid them even if it involves producing fewer high-quality solutions, while musician problems are ones where high-quality solutions are very, very worth it, so you should encourage as many tries as possible so you get the super-high-quality wins. This seems a useful frame of reference, but not really the Ortega Hypothesis.

    It seems clear to me that the downside to society of having a bad scientist is relatively low, so long as there's a gap between low-quality science and politics [0], while the upside is huge.

    0. https://en.wikipedia.org/wiki/Trofim_Lysenko

NullHypothesist 3 hours ago

> the opposing "Newton hypothesis", which says that scientific progress is mostly the work of a relatively small number of great scientists (after Isaac Newton's statement that he "stood on the shoulders of giants")

I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"

...Not quite as majestic, but hey, if it gets the job done...

  • antognini 3 hours ago

    "If I have not seen as far as others, it is because giants have been standing on my shoulders." --Hal Abelson

  • otikik 3 hours ago

    It's still giants. Giant accumulated effort from many individuals.

btilly 42 minutes ago

I believe that this hypothesis is wrong.

More specifically, I believe that scientific research winds up dominated by groups who are all chasing the same circle of popular ideas. These groups start because some initial success produced results. This makes a small number of scientists achieve prominence, which makes their opinion important for the advancement of other scientists. Their goodwill and recommendations will help you get grants, tenure, and so on.

But once the initial ideas are played out, there is little prospect of further real progress. Indeed, that progress usually doesn't come until someone outside of the group pursues a new idea. At which point the work of those in the existing group will turn out to have had essentially no value.

As evidence for my belief, I point to https://www.chemistryworld.com/news/science-really-does-adva.... It documents that Planck's principle is real. Fairly regularly, people who become star researchers wind up holding back further progress until they die. After they die, new people can come into the field, pursuing new ideas, and progress resumes. And so it is that progress advances one funeral at a time.

As a practical example, look at the discovery of blue LEDs. There was a lot of work on this in the 70s and 80s. Everyone knew how important it would be. A lot of money went into the field. Armies of researchers were studying compounds like zinc selenide. The received wisdom was that gallium nitride was a dead end. What was the sum contribution of these armies of researchers to the invention of blue LEDs? To convince Shuji Nakamura that if zinc selenide was the right approach, he had no hope. So he went into gallium nitride instead. The rest is history, and the existing field's work is lost.

Let's take an example that is still going on. Physicists invented string theory around 50 years ago. The problems in the approach are summed up in a quote often attributed to Feynman: "String theorists don't make predictions, they make excuses." To date, string theory has yet to produce a single prediction that was verified by experiment. And yet there are thousands of physicists working in the field. As interesting as they find their research, it is unlikely that any of their work will wind up contributing anything to whatever future improved foundation is discovered for physics.

Here is a tragic example. Alzheimer's is a terrible disease. Very large amounts of money have gone into research for a treatment. The NIH by itself spends around $4 billion per year on this, on top of large investments from the pharmaceutical industry. Several decades ago, the amyloid beta hypothesis rose to prominence. There is indeed a strong correlation between amyloid beta plaques and Alzheimer's, and there are plausible mechanisms by which amyloid beta could cause brain damage.

Several decades of research and many failed drug trials support the following conclusion. There are many ways to prevent the buildup of amyloid beta plaques. These cure Alzheimer's in the mouse model that is widely used in research. These drugs produce no clinical improvement in human symptoms. (Yes, even Aduhelm, which was controversially approved by the FDA in 2021, produces no improvement in human symptoms.) The widespread desire for results has created fertile ground for fraudsters, like Marc Tessier-Lavigne, whose fraud propelled him to the presidency of Stanford in 2016.

After widespread criticism from outside of the field, there is now some research into alternate hypotheses about the root causes of Alzheimer's. I personally think that there is promise in research suggesting that it is caused by damage done by viruses that get into the brain, and the amyloid beta plaques are left by our immune response to those viruses. But regardless of what hypothesis eventually proves to be correct, it seems extremely unlikely to me that the amyloid beta hypothesis will prove correct in the long run. (Cognitive dissonance keeps those currently in the field from drawing that conclusion though...)

We have spent tens of billions of dollars over several decades on Alzheimer's research. What is the future scientific value of this research? My bet is that it is destined for the garbage, except as a cautionary tale about how much damage it can cause when a scientific field becomes unwilling to question its unproven opinions.

IshKebab an hour ago

> According to Ortega, science is mostly the work of geniuses, and geniuses mostly build on each other's work, but in some fields there is a real need for systematic laboratory work that could be done by almost anyone.

That seems correct to me. Imagine having a hypothesis named after you that a) you disagree with, and b) seems fairly doubtful at best!

Jegber 32 minutes ago

Thomas Kuhn has entered the chat

elicash 2 hours ago

I was disappointed to read he didn't name it after himself in an ironic display of humility.

("Ortega most likely would have disagreed with the hypothesis that has been named after him...")

_verandaguy an hour ago

Are Ortega and Newton mutually exclusive? Isn't it much more likely that both:

- Significant advances by individuals or small groups (the Newtons, Einsteins, or Gausses of the world) enable narrowly-specialized, incremental work by "average" scientists, which elaborates upon the Great Advancement...

- ... And then those small achievements form the body of work upon which the next Great Advancement can be built?

Our potential to contribute -- even if you're Gauss or Feynman or whomever -- is limited by our time on Earth. We have tools to cheat death a bit when it comes to knowledge, chief among which are writing systems, libraries of knowledge, and the compounding effects of decades or centuries of study.

A good example here might be Fermat's last theorem. Everyone who's dipped their toes in math even at an undergraduate level will have at least heard about it, and about Fermat. People interested in the problem might well know that it was proven by Andrew Wiles, who -- almost no matter what else he does in life -- will probably be remembered mainly as "that guy who proved Fermat's last theorem." He'll go down in history (though likely not as well-known as Fermat himself).

But who's going to remember all the people along the way who failed to prove Fermat? There have been hundreds of serious attempts over the three and a half centuries that the theorem had been around, and I'm certain Wiles referred to their work while working on his own proof, if only to figure out what doesn't work.

---

There's another part to this, and that's that as our understanding of the world grows, Great Advancements will be ever more specialized, and likely further and further removed from common knowledge.

We've gone from a great advancement being something as fundamental as positing a definition of pi, or the Pythagorean theorem in Classical Greece; to identifying the slightly more abstract, but still intuitive idea that white light is a combination of all other colours on the visible spectrum and that the right piece of glass can refract it back into its "components" during the Renaissance; to the fundamentally less intuitive but no less groundbreaking idea of atomic orbitals in the early 20th century.

The Great Advancements we're making now, I struggle to understand the implications of even as a technical person. What would a memristor really do? What do we do with the knowledge that gravity travels in waves? It's great to have solved n-higher-dimensional sphere packing for some two-digit n... but I'll have to take you at your word that it helps optimize cellular data network topology.

The amount of context it takes to understand these things requires a lifetime of dedicated, focused research, and that's to say nothing of what it takes to find applications for this knowledge. And when those discoveries are made and their applications are found, they're just so abstract, so far removed from the day-to-day life of most people outside of that specialization, that it's difficult to even explain why it matters, no matter how big a quantum leap it represents in a given field.

d--b 3 hours ago

People in the humanities still haven’t understood that pretty much everything in their fields is never all black or all white.

It’s a bizarre debate when it’s glaringly obvious that small contributions matter and big contributions matter as well.

But which contributes more, they ask? Who gives a shit, really?

  • oytis 21 minutes ago

    > it’s glaringly obvious that small contributions matter

    Not at all obvious to me. What were the small contributions to e.g. the theory of gravity?

  • eszed 39 minutes ago

    Humanities? Why'd you drag humanities into this?

    (I agree with your point, by the way.)

    • d--b 12 minutes ago

      The guy who coined the term Ortega hypothesis is a sociologist. Sociology is the branch of the humanities that studies behaviors in a scientific setting.