Ask HN: What happens when capability decouples from credentials?

10 points by falsework 3 days ago

Over the past 18 months, I've been collaborating with AI to build technical systems and conduct analytical work far outside my formal training. No CS degree, no background in the domains I'm working in, no institutional affiliation.

The work is rigorous. Someone with serious credentials has engaged and asked substantive questions. The systems function as designed. But I can't point to the traditional markers that would establish legitimacy—degrees, publications, years of experience in the field.

This isn't about whether AI "did the work." I made every decision, evaluated every output, iterated through hundreds of refinements. The AI was a tool that compressed what would have taken years of formal education into months of intensive, directed learning and execution.

Here's what interests me: We're entering a period where traditional signals of competence—credentials, institutional validation, experience markers—no longer reliably predict capability. Someone can now build sophisticated systems, conduct rigorous analysis, and produce novel insights without any of the credentials that historically signaled those abilities. The gap between "can do" and "should be trusted to do" is widening rapidly.

The old gatekeeping mechanisms are breaking down faster than new ones are forming. When credentials stop being reliable indicators of competence, what replaces them? How do we collectively establish legitimacy for knowledge and capability?

This isn't just theoretical—it's happening right now, at scale. Every day, more people are building things and doing work they have no formal qualification to do. And some of that work is genuinely good.

What frameworks should we use to evaluate competence when the traditional signals are becoming obsolete? How do we establish new language around expertise when terms like "expert," "rigorous," and "qualified" have been so diluted they've lost discriminatory power?

thenaturalist 3 days ago

Adversarial work (be it agent or human).

The one difference between "can do" and "should be trusted to do" is the ability to systematically prove that "can do" holds up across close to 100% of task instances and under adversarial conditions.

Hacking and pentesting are already scaling fully autonomously - and systematically.

For now, lower-level targets aren't yet attractive, as such scale requires sophisticated (state) actors, but that is going to change.

So building systems that white-hat prove your code is not only functional but competent is going to be critical if you don't want it ripped apart by black hats later on.
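
One low-tech version of this already exists: property-based testing - state an invariant, let the framework generate adversarial inputs, and have it shrink any failure to a minimal counterexample. A minimal sketch in Python with the hypothesis library (the toy run-length codec is purely illustrative):

    from hypothesis import given, strategies as st

    def run_length_encode(data: str) -> list[tuple[str, int]]:
        # Toy function under test: run-length encode a string.
        out: list[tuple[str, int]] = []
        for ch in data:
            if out and out[-1][0] == ch:
                out[-1] = (ch, out[-1][1] + 1)
            else:
                out.append((ch, 1))
        return out

    def run_length_decode(pairs: list[tuple[str, int]]) -> str:
        return "".join(ch * n for ch, n in pairs)

    @given(st.text())
    def test_roundtrip(s):
        # The invariant must hold for every generated input,
        # including the edge cases hypothesis shrinks toward.
        assert run_length_decode(run_length_encode(s)) == s

Run it under pytest and hypothesis will throw a few hundred generated strings at it, including the empty-string and odd-Unicode cases a human reviewer tends to skip. Not pentesting, but the same principle: the claim survives systematic attack or it doesn't.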

One example that applies this quite nicely is roborev [0] by the legendary Wes McKinney.

0: https://github.com/roborev-dev/roborev

  • falsework 3 days ago

    This is a good point. You're right that adversarial testing provides one form of validation that doesn't depend on credentials: if the system holds up under systematic attack, that's evidence of competence regardless of who built it.

    But I think there's a distinction worth making between technical robustness (does the code have vulnerabilities?) and epistemic legitimacy (should we trust the analysis/conclusions?).

    Pentesting and formal verification can tell us whether a system is secure or functions correctly. That's increasingly automatable and credential-independent because the code either survives adversarial conditions or it doesn't.

    But what about domains where validation is murkier? Cross-domain analysis, research synthesis, strategic thinking, design decisions? These require judgment calls where "correct" isn't binary. The work can be rigorous and well-reasoned without being formally provable.

    The roborev example is interesting because code review is somewhat amenable to systematic validation. But we're also seeing AI collaboration extend into domains where adversarial testing isn't cleanly applicable—policy analysis, theoretical frameworks, creative work with analytical components.

    I wonder if we need different validation frameworks for different types of work. Technical systems: adversarial testing and formal verification. Analytical/intellectual work: something else entirely. But what?

    The deeper question: when the barrier to producing superficially plausible work drops to near-zero, how do we distinguish genuinely rigorous thinking from sophisticated-sounding nonsense? Credentials were a (flawed) heuristic for that. What replaces them in domains where adversarial testing doesn't apply?

svilen_dobrev 2 days ago

> gatekeeping mechanisms

> people doing work they have no formal qualification to do

These can be bad and good. It depends on the domain, and on regulatory overstepping / guild protectionism. Let's take electricity at home. In some countries, you cannot change a lightbulb unless you're a "certified" electrician, repairing your own water heater is unthinkable, and redoing a house's electrical wiring-and-switches-and-contacts infrastructure is unimaginable. In other countries, no one gives a damn: if you know how and can do it, do it. And people seem wise enough to judge whether to play with that or not. I have not heard of any difference in injuries/deaths between these. So in this case, "credentials" are an artificial thing.

On the other extreme: any 5-year-old is capable of holding a gun and shooting someone from 2 meters. So in that case, guns had better not be available at all. And as a consequence, very few countries allow weaponry to proliferate - in this case, strong regulation is required, and credentials/can-be-trusted might matter.

psyklic 2 days ago

This happens with every new tech. When websites first appeared, many businesses trusted kids to build their website. The key in applied work is to build a portfolio that shows off your abilities.

The reality is that most people who need services don't know anyone who is traditionally qualified and available. So, a portfolio may convince them to take a chance on a newcomer over an overly expensive firm (that often also just hires newcomers).

moralestapia 3 days ago

>traditional signals of competence—credentials, institutional validation, experience markers—no longer reliably predict capability

They never did anyway.

(And I do have those things ...)

aristofun 2 days ago

Would you rather trust your life to a doctor or some AI expert?

What about a million dollars instead of a life? The fate of your startup?

Your point breaks pretty quickly when it comes to the real deal, not anecdotes and journalists’ stories.

jryb 2 days ago

I work in a field that has a strong glass ceiling, and the only way through is with a PhD (some employers are starting to recognize that, with several years of experience, people with a bachelor's or master's degree can attain the same level of independence and mastery that you'd demand from a PhD, but it's still kinda rare).

But even having a PhD isn't enough to establish credibility during the interview process, as there are plenty of PhDs who are incompetent. It's really more of a thing that gets your foot in the door - a signal that it's plausible that this candidate could have the juice.

The solution is that after the usual interview steps (resume, phone screen, etc.), the candidate gives an hour-long seminar on their research to 10-20 people (really 45 minutes of material + 15 minutes of questions). It's basically impossible to talk for that long about your research and the prior literature with a critical audience of experts and not reveal whether you actually know your stuff.

So in a sense, your vision has actually been achieved, but only within the group of people who have traditional credentials. The question of how to open this up to anyone regardless of credentials is not something I'm going to be able to answer, but I certainly hope it happens.

In terms of frameworks for evaluating competence, here are the questions I ask when deciding whether someone is an expert (with what I get out of each question in parentheses):

1) Has this person spent years doing something where they constantly discovered that they were wrong? (if you're never wrong, you're not learning, you're certainly not doing anything interesting, and you might be a crank)

2) Did they have a mentor or group of expert peers who helped them grow and critiqued their work? (this both helps them grow on a daily basis and also plugs gaps in their knowledge and skills, and gives them new ways of thinking)

3) Have they built or discovered something non-trivial, and in the LLM era, do they actually understand what they built? (You can't really be sure your knowledge and skills are meaningful until you apply them)

4) Can they hold their own when being grilled by other people with deep experience in the same field, or adjacent fields? (This assumes the experts are arguing in good faith, but if you can either answer critical questions or convince someone that their questions are flawed, that's a great sign)

5) Do they have both depth and breadth in their knowledge of their field? (I think this one might get downplayed as it smacks of gatekeeping, but it's so easy to make huge errors or reinvent the wheel when you don't know what other people have already done, and don't know how your contribution fits into the work of others)

6) Can they explain their work on multiple levels of complexity? (Filters out people who are just trying to hide their incompetence with jargon)

7) Are they willing to say they don't know, when asked a question they don't know the answer to? (Cranks will never admit this)