FloorEgg 8 hours ago

Anecdote:

Someone I know works for a municipality in digital transformation. They have a public facing website where local residents can report things to the municipality, such as potholes, water system issues, noise complaints, etc.

The UI has this huge taxonomy with like 200 categories with three levels of nesting that route to different departments. There are multiple "other" categories.

When a resident chooses "other," it becomes an employee's job to pick a different category. But about 30% of the categories employees choose are wrong (roughly the same rate as for residents).

It's a UX problem, a taxonomy problem, a training problem, a change management problem, a work routing problem, and all these contribute to longer resolution times.

My friend started tinkering with a small quantized local LLM, testing whether it could classify the reported issues more accurately than the public/staff. Of course it could. They're preparing to integrate it into production. Installing it will dramatically improve the UX for the residents, save staff time (at least hundreds of hours a year), improve resolution time, etc.

My friend's boss mentioned it at a conference, and apparently "no one else is doing this".

So yeah, we are just barely scratching the surface of the productivity opportunities LLMs offer. It's early, not because the technology isn't developed enough to help (it is), but because most people are still figuring out how it can help.

  • 1970-01-01 4 hours ago

    Navigating a decision tree and landing on a leaf from fuzzy context is one thing I've also noticed AI does well. It's basically what LLMs are designed to do. They can and should take these bullshit tasks away from humans.

  • caminante 8 hours ago

    Wait for the next admin to unplug the decision tool so they can re-add the jobs and backfill with their private staffing service.

    • FloorEgg 6 hours ago

      Repetitive negative thinking. I'm pretty sure someone posted a study just yesterday about how long-term RNT seems to correlate strongly with dementia.

      You know you can choose to not always assume the worst about everything, right?

      • caminante 2 hours ago

        Got a link? Sounds like armchair psychology.

        I suspect that permanently killing jobs for a local municipality is a valid, dare I say, wholesome concern. Seems arbitrary to police it here.

        • FloorEgg 9 minutes ago

          You are completely misrepresenting the concern conveyed in the comment I replied to.

          Here is a link to what I was referring to:

          https://share.google/OqtzHog4CXqJDqsoU

          Also, replacing manual labor with innovation is what makes us human, and we have been doing it at least as long as civilization itself. You wouldn't exist without it. Why would you demonize that?

          Also, how is it wholesome to keep people doing work they don't want to do and are bad at, while other people who depend on that work wait longer than they need to?

codyklimdev 10 hours ago

I think a lot of the reason for this is that AI helps provide a productivity boost to non-profitable sectors of most businesses, i.e. software development, finance, HR, etc. Since these departments do not directly drive profits, there's no visible bottom line to make meaningful observations on.

I do software for a retail company now and we've been having a similar debate: AI helps me and other departments work more efficiently, but me getting a feature out the door faster doesn't get more products off the shelves. So, to the shareholders and the C-suite, is AI doing anything for the company?

  • comte7092 10 hours ago

    > Since these departments do not directly drive profits, there's no visible bottom line to make meaningful observations on.

    “Bottom line” refers to net income, which captures costs as well as revenue; it doesn’t matter whether a department is a profit center. If AI is making these departments more efficient, it should show up in the bottom line.

    • credit_guy 10 hours ago

      AI does show up in the bottom line, but it’s not attributable to AI. How can you tell whether last month’s 20 new features, versus 10 in the same period a year ago, are due to AI or simply to your developers being smarter, working harder, returning to the office, etc.?

      • comte7092 9 hours ago

        I would pose that question back to you: if it isn’t measurable, how are you certain that it is affecting the bottom line?

  • JumpinJack_Cash 8 hours ago

    > I think a lot of the reason for this is that AI helps provide a productivity boost to non-profitable sectors of most businesses, i.e. software development, finance, HR, etc. Since these departments do not directly drive profits, there's no visible bottom line to make meaningful observations on.

    So the right place to look would be employees' free time, or the cognitive load/stress they are under.

    How is it possible to measure that?

    Anyway, Goldman might not be the right firm to measure it, because they are not interested in anything that isn't money.

    • codyklimdev 7 hours ago

      Very much agree on your last point.

      I think finding out whether or not AI is actually boosting productivity is a problem of measuring productivity, period, which at the very least my current company is pretty bad at. For a developer, is their productivity the lines of code produced, hours worked, project tasks completed per unit of time, agile points completed per unit of time, PRs reviewed, PRs submitted? In more human metrics, is it what their coworkers say about them, what leadership says about them, what customers say about them, testers, QA? The number of bugs they fix, the number of bugs they don't ship?

      Sorry for the ramble, but apply this productivity measurement conundrum to entire corporations and it's no wonder that no productivity boost is being recorded. I'd be surprised if semi-accurate productivity measurements were taken in the first place.

electric_muse 10 hours ago

It’s still too early to see real benefits.

Wrapping business processes around these LLMs is the same kind of hard organizational problem plaguing most internal IT projects. People are still the bottleneck.

You also run into the issue of compounding accuracy. In a multi-step AI flow the per-step success rates multiply, which dramatically increases the chance of a full-job failure. E.g. even at a 99% success rate for any single step, a 30-step process completes without errors only about 74% of the time. If you drop to 95% success per step, you're down to roughly that same 74% likelihood of flawless execution after just 6 steps.

So it’s also about getting those per step success rates way up.
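The compounding arithmetic is easy to check yourself; a quick sketch in plain Python, assuming only the per-step success rates quoted above and independence between steps:

```python
def flawless_run_probability(p_step: float, n_steps: int) -> float:
    """Probability that every one of n_steps independent steps succeeds."""
    return p_step ** n_steps

# 99% per-step success over a 30-step process
print(f"{flawless_run_probability(0.99, 30):.0%}")  # 74%
# 95% per-step success over just 6 steps
print(f"{flawless_run_probability(0.95, 6):.0%}")   # 74%
```

The independence assumption is generous to the AI: in practice an early misstep often poisons later steps, so real end-to-end success can be worse than p^n suggests.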

  • lenzm 9 hours ago

    It has been years, when will it not be too early?

    • 1970-01-01 4 hours ago

      Let's see what the MBAs are trying, what is sticking, and how long that takes:

      On-prem to cloud -- took about 10 years. De facto success.

      Blockchain to new age of finance -- 12 years later, not much success overall (Instant regret on anyone dumb enough to buy junk crypto and NFTs)

      Quantum computing -- 10ish years later, nothing major, but it might bear fruit long-term.

      LLMs -- Let's average all the above and say 11 years from 2022. So 2033 is when you can say it was a bust.

    • bluefirebrand 9 hours ago

      Something I'm noticing is that people in my network have lost some kind of concept of time passing

      I had a conversation with an acquaintance a few weeks ago who was adamant that ChatGPT only really showed up less than a year ago. They were absolutely mind blown when I pointed out that ChatGPT got crazy popular literally years ago, late 2022, early 2023. They were convinced this was still very new stuff, like 8 months ago or so

      I don't really blame people either. Personally I feel like the years since COVID have been a weird blur. People don't realize that lockdowns were half a decade ago already

      • thewebguyd 7 hours ago

        > People don't realize that lockdowns were half a decade ago already

        Myself included. My perception of time has been off ever since, and 2020 quite literally feels like it was just last year.

        Same for my peers. I'd really be curious to see a study done on this, and why this distorted perception of time exists now. I'm not sure I can attribute it to lockdowns necessarily: I was already fully remote pre-COVID, and neither my habits nor my work schedule changed during lockdowns, other than wearing a mask whenever I left the house.

        • etblg 4 hours ago

          I just assumed it was me getting old, but I do go "oh yeah, that time when" and then realize it was a literal decade ago. It all just bleeds together: COVID feels like it was just yesterday, 2015 feels like last year, and 2001 feels like a decade ago, but that was 24 years ago.

          I still just blame it on being old though.

blitzar 9 hours ago

If it isn't showing up, then they are probably depreciating the massive hardware spending over a long period, assuming the announced billions are actually occurring.

ericdotlee 10 hours ago

My guess is they don't really know how to price it in yet - but they also seem to massively reduce headcount in areas where you can loftily assume that "AI is boosting productivity".

Seems like the hand-waving and layoffs will have to stop before we get real data.

ksec 9 hours ago

Lots of companies are using AI already, which means there is no competitive edge over other companies.

It mainly helps with mundane tasks. I think it mostly means employees have a better life within the company, not having to do those stupid tasks, write another email, or take meeting notes.

  • xpe 8 hours ago

    Bad logic on at least two counts. Try again.

    • ksec 7 hours ago

      Thank you for your input. Maybe next time don't reply.

      • xpe 7 hours ago

        I’m not trying to hurt your pride; I’m trying to communicate a minimum comment quality.

        A not-uncommon reply at this point might be: “Why don’t you tell me my mistakes?” That would be a rather shallow criticism. If you don’t see them, run your comment through a modern LLM.

        You’ll learn more anyway by finding mistakes yourself than if I identify them for you.

        If a comment doesn’t survive a basic sanity check with a large language model, I suggest that it’s a waste of our time here.

        Keep in mind that a comment gets read more times than it is written. So, if you respect your audience, fix your logical errors.

        Finally, passive aggression is common in places but hardly something to strive for. You may not appreciate this now, but one can appreciate and benefit from sincere feedback of all kinds.

aj7 10 hours ago

I remember identical assertions about the PC. If you want to understand business trends, ask an accountant. In 7-10 years.

  • quickthrowman 8 hours ago

    The Altair 8800 came out in 1974 with no video output. The Commodore PET and Apple II were released in 1977 as non-kit computers with video output. VisiCalc came out in late 1979. We are approximately at the VisiCalc release date now with LLMs.