Discuss HN: Software Careers Post ChatGPT+

86 points by rich_sasha a year ago

We've all seen it - ChatGPT genuinely solving coding puzzles. Clearly, clearly, that's a long way from building MVP products, designing new programming languages or writing "Hello World" in Haskell. But it has also come a long way since even GPT-3, never mind the status quo 10 years ago. It would be cool to discuss what a future looks like where "human operators" of programming are competing against a machine. I don't think it is imminent, but I do think it's closer than I thought a week ago.

Some threads that come to mind:

- Are these language models better than current offshore outsourced coders? These can code too, sort of, and yet they don't threaten the software industry (much).

- What would SEs do if any layperson could say "hey AI slave, write me a program that..."? What would we, literally, do? Are there other, undersaturated professions we'd go into where analytical thinking is required? Could we, ironically, wake up in a future where thinking skills are taken over by machines, and it's other skills - visual, physical labour, fine motor skills - that remain unautomated?

- Are we even the first ones in the firing line? Clearly, for now AI progress is mostly in text-based professions; we haven't seen a GPT equivalent for video comprehension, for example. Are lawyers at risk? Writers?

- What can SEs do, realistically, to protect themselves? Putting the genie back in the bottle is not, as discussed many times in other threads, an option.

- Or is it all bogus (with justification), and we're fine?

No doubt ChatGPT will chip in...

hansonkd a year ago

I've been using ChatGPT all weekend to generate code and what I found was this:

  * It's absurdly good at coding and following types. For example, if you change a type in Rust to be an Option, it will refactor the code to properly handle Options in the parts that use them. But it isn't perfect.
  * It gets it mostly right. It can generate test cases, so it's easy to test whether it works.
but in the end, after hours and hours of trying to coax the AI, it was unable to do what I wanted: build a B-tree in Python. It built a binary tree just fine, but getting it to generalize to a B-tree was a problem.

  * It introduced many, many subtle errors, like variables not being initialized or children not being split correctly.
  * Its implementation worked when all keys were inserted in order, but not when they were out of order.
  * It would leave variables out entirely.
  * It would frequently have index errors from trying to access lists out of bounds.
  * Writing the code in Rust was almost impossible. It would constantly have wrong types or move errors.
Overall, I couldn't recommend this to anyone without a strong CS background. It introduces far too many subtle bugs that are almost impossible to review, because the code it produces is so convincing that you go "hmm, maybe it knows what it's talking about," but in the end you have no idea what you should trust.
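
For reference, the child-split step it kept getting wrong looks roughly like this - a minimal sketch of a textbook B-tree split (minimum degree "order"), not the code ChatGPT produced:

    class Node:
        def __init__(self, leaf=True):
            self.keys, self.children, self.leaf = [], [], leaf

    def split_child(parent, i, order):
        # Split the full child parent.children[i] (2*order - 1 keys) into
        # two nodes, promoting the median key into the parent at index i.
        child = parent.children[i]
        mid = order - 1
        right = Node(leaf=child.leaf)
        right.keys = child.keys[mid + 1:]              # keys after the median
        if not child.leaf:
            right.children = child.children[mid + 1:]  # matching child pointers
            child.children = child.children[:mid + 1]
        parent.keys.insert(i, child.keys[mid])         # promote the median
        parent.children.insert(i + 1, right)
        child.keys = child.keys[:mid]                  # left half keeps the prefix

Get any one of those slice boundaries off by one and you see exactly the symptoms above: in-order insertions work, out-of-order insertions silently lose or misplace keys.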

Even the test cases that it generates can be deceptive. They look convincing, but upon closer inspection they sometimes aren't really testing anything.
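
For example - a hypothetical illustration, not the actual generated tests - a test like this looks thorough, yet it can never catch the out-of-order bug, because it only ever inserts ascending keys:

    def test_btree_insert_and_search():
        tree = BTree(order=3)            # hypothetical B-tree class
        keys = [1, 2, 3, 4, 5, 6, 7]     # inserted in ascending order only
        for k in keys:
            tree.insert(k)
        # Passes even if out-of-order insertion would corrupt the tree.
        assert all(tree.search(k) is not None for k in keys)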

  • SxC97 a year ago

    I also tried to get it to generate some simple code examples.

    One was to generate a webpage with a button to add textboxes. Each textbox should have a unique remove button. When the site gets down to only one textbox on the page, it should not allow the user to remove the last textbox.

    After several iterations, it wasn't able to do it, often with hilarious results! (I asked it to only remove the _selected_ textbox, but if you clicked remove, it would just delete all the elements from the site)

    I think the real value for me would be using this to generate a starting point.

    I have a text document on my computer that has a long list of small coding projects that I want to get to at some point. But the activation energy required to stop watching Youtube and start coding is high enough that several projects have languished in my inbox for years.

    If I could just feed my ideas into chatGPT and get a starting point, it would be much easier to keep going and fix small errors/add additional features than to start a project from scratch.

  • lmarcos a year ago

    Seems like it has the same coding abilities I had when I started to write code many years ago. I wonder how fast it can get better... I bet: exponentially faster than me.

  • qualudeheart a year ago

    > Overall, I couldn't recommend this to anyone without a strong CS background. It introduces far too many subtle bugs that are almost impossible to review, because the code it produces is so convincing that you go "hmm, maybe it knows what it's talking about," but in the end you have no idea what you should trust.

    As someone who has been using ChatGPT all week - 30ish hours so far - and Copilot since I got access early on: this fits with my experience very well. It's like a junior programmer. You need the know-how to tell whether your junior programmer is doing it right.

  • rakejake a year ago

    Can you ask ChatGPT why it wrote a certain line or why it made a certain decision? I still think explainability is missing in these models, but even if it is able to come up with something, what is the guarantee that the explanation is not bullshit?

joshuahedlund a year ago

> Or is it all bogus (with justification), and we're fine?

This is mostly my take. We are at the stage of truck drivers ten years ago who might have been freaking out about self-driving taking their jobs, and here we are in 2022 with a truck driver shortage.

I don't think GPT can be useful to the point of replacing software engineering without a consistent mental model of itself, time, and the world, and I predict it will approach the limits of what advanced-search pattern-matching can do without getting anywhere near those AGI capabilities.

  • steve_adams_86 a year ago

    I think these are very different. Trucking depends on humans due to infrastructure and complex problems of law, vision, last-mile details, etc. I agree about the near-term limitations of this kind of model, but I think highly contextual and refined models for software will prove to be powerful enough (edit: by powerful enough, I mean powerful enough to transform how people write software) in many cases.

    Software reiterates a lot, and generated solutions can be quickly vetted. A truck driving to the wrong destination or locking up en route is a much larger issue than a few seconds spent determining that a generated solution has a bug or doesn't match a spec exactly.

    If AI can stand on the shoulders of giants and people can vet its outputs, I’m fairly sure it will become more capable and safer to implement than self driving trucks, much faster.

    The question I have is: if we can build more, faster, will we run out of work, or will more people simply make more things?

    There are huge incentives for wealthy companies to run more and more code on their infrastructure. Can we do more business digitally? Will it scale to provide more programming work, even if it’s heavily AI-assisted?

    • joshuahedlund a year ago

      > Software reiterates a lot,

      Well sure. That's why we have Wordpress. Javascript frameworks. And ten thousand other things. All the plug-ins of the last ten years have made me a more productive developer. But it hasn't reduced the demand for developers.

      • sonnhy a year ago

        You should understand why there's a lot of demand for programmers and how that demand might be impacted by ChatGPT.

gardenhedge a year ago

I mentioned it in a previous comment. It's common for senior engineers not to write that much code. They spend their time on meetings, planning, creating architectures, presenting solutions, discussing solutions, triaging, keeping up-to-date with tech, clarifying business cost, working on waste avoidance, reviewing code, streamlining processes, vetting new tech/solutions and, in general, understanding everything that is going on.

ChatGPT+ will definitely have some effect on junior devs, but us more experienced folk should be fine... for now...

  • lordswork a year ago

    How do junior devs ever become senior devs if an AI can replace the work of all junior devs?

    • steve_adams_86 a year ago

      This is a great question.

      One thing about GPT is that it only knows what we know at the moment. That indicates to me that it won’t be great for learning new technologies until humans generate content it can regurgitate. That alone might give juniors an edge against it (assuming they are gradually replaced by a robot pooping out dumb logic) - they might be able to specialize in learning what models don’t yet know, or what they can’t be good at.

      Just guessing here. I’d love to hear a rebuttal to get a sense of where people think things are going.

      Though I don’t think GPT is “there” yet, I can see it getting there by 2030. I think it’s seriously worth considering: how will people learn to program in 10 years, how will they remain relevant through periods of their career where an AI can generate better solutions than they can, and how will more experienced engineers adapt to those changes?

      • rich_sasha a year ago

        I like to think of the Industrial Revolution for counterexamples. It wiped out tens or hundreds of professions. No more blacksmiths in every village.

        Two things happened: one, blacksmithing was effectively taken over by a different skill, that of configuring, servicing and operating industrial machines. There were no junior or senior blacksmiths anymore; seniors probably migrated their knowledge, and the whole field was taken over by non-blacksmiths.

        The other thing is that a small proportion remained. Initially perhaps for niche goods that were too difficult or uneconomical to automate. Now blacksmiths do exist, a bit like craft coffee blenders and horse trainers. But the industry was nonetheless wiped out.

        Will we see that crafting software will become a job for people trained in AI whispering? Perhaps they will never, or hardly ever, write any code, but will train extensively in software design. Much as a metallurgy engineer might know all crystalline forms of steel but have never swung a hammer.

        Day 1 these jobs would likely get filled by senior devs, but there may be no new devs, junior or senior, except for niche applications.

      • croo a year ago

        I guess they won't, in the same way we are much worse at remembering information in the era of search engines. They will be proficient in telling the AI the exact words it needs to produce the correct code output. No one will be good at programming trivial code snippets, not because every snippet is on SO, but because every code part can be generated with the correct request.

        Just as many of us cannot read machine code and hex anymore, many juniors won't be able to parse the code output - but it may not matter anymore.

        • NoZZz a year ago

          In which case programming will become stringing prompts together, ChatGPTLang anyone? That's still programming.

    • Gigachad a year ago

      They will probably manage. People manage to become programmers without having to learn how basic electronics, CPUs, operating systems, etc. work. You just skip over those solved problems.

  • rvz a year ago

    > ChatGPT+ will definitely have some effect on junior devs, but us more experienced folk should be fine... for now...

    It affects both. If a team were split into 5 juniors and 5 seniors, ChatGPT significantly reduces that headcount from 5 juniors to 0, and from 5 seniors to 2 or 3.

    With many companies cutting costs and the cheap money drying up, no one is safe. HN may not like it, but the same thing that happened to digital artists with Stable Diffusion (which was welcomed on this site) has now happened to programmers, and I see lots of worrying and frowns everywhere.

    It appears that StackOverflow (which lots of juniors and senior developers use) has just become irrelevant.

    • Xelynega a year ago

      > ChatGPT significantly reduces that headcount from 5 juniors to 0, and 5 seniors to 2 or 3.

      Citation needed. I haven't heard of any massive disruption in the commission art market since Stable Diffusion went public, and I don't think something less impactful (a different way to search old Stack Overflow posts) is going to cause a massive disruption either.

      Stack Overflow still beats ChatGPT in one area where it can never compete: coming up with new solutions to new questions. If all we needed answered were the same old questions, ChatGPT would be sufficient, since it's essentially a compressed version of our current knowledge. We don't really have a way to update it with "new knowledge" other than "train it again".

      • rich_sasha a year ago

        Art is a strange market, where people pay for more than the physical good being delivered. In some cases the objective value of the good being delivered is zero, or even negative after considering the cost of "carry" (transport, storage, insurance), yet people pay for it.

        I once spoke to an illustrator who said her work was definitely shrinking due to low-cost human competition, so much so that she was leaving her profession. I can imagine this did in fact get worse.

        • gsail11 a year ago

          >Art is a strange market, where people pay for more than the physical good being delivered

          Luckily we're in tech, which doesn't work this way at all.

  • seydor a year ago

    I think the opposite: LLMs will be used to build the optimal high-level scaffolding and implementation, but low-level devs will be needed to check and verify the code. As we've seen so far, AI automates the brainy part, but not the long tail or the parts that need physical access (e.g. safety drivers, warehouse workers).

    • rakejake a year ago

      I'd argue that reading and verifying code correctness is the brainy job.

  • jhoelzel a year ago

    I strongly disagree.

    The point is exactly that most of those meetings are happening everywhere for the same reason and thus GPT25 might already know all the answers that you need.

    Also given enough general framework skills, I'm pretty sure the AI will be able to build stuff like a good junior dev.

    • Xelynega a year ago

      The algorithm doesn't have any "general framework skills" though because it's an algorithm, not a person.

      It can generate something that looks like what a person would have written based on its compressed probabilities, but that's very different from being an "artificial intelligence". At best it's a Chinese Room.

      • jhoelzel a year ago

        And proper APIs and frameworks are very well documented inputs and outputs - what's your point?

        I'm pretty sure we could train an AI to suggest the correct API for the intended purpose. Given that we have support for datatypes and documentation, what is left to do is map what you have to what you want.

        Also, we can already reasonably generate classes and controllers through a CLI, so why not combine all the nice things we have into something useful?

        Kubernetes is simply an API too.

        • steve_adams_86 a year ago

          With types and comments as a prompt in Go, GPT already produces fairly viable API endpoints for basic CRUD operations.

          With some framework to compose various prompted endpoints using the same types and conventions, a junior engineer would be totally unnecessary. They couldn’t code as well, and they couldn’t architect as well as a more experienced engineer doing the prompting.

          You could even prompt from a spec and begin iterating upon the outputs, revising each manually or with GPT’s suggestions. Say “take this endpoint and refactor it based on the conventions of this endpoint, and make sure (some logic) accounts for (some potential problem)” and boom, it will probably work okay.
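
          As a rough sketch of the pattern - translated to Python with hypothetical names, since the thread doesn't include the actual Go code - you hand the model a type plus a one-line comment, and it emits an endpoint in this shape:

              from dataclasses import dataclass, asdict
              from flask import Flask, request, jsonify

              @dataclass
              class Todo:            # the type supplied in the prompt
                  id: int
                  title: str
                  done: bool

              # Prompt comment: "endpoint that creates a Todo from a JSON body"
              app = Flask(__name__)
              todos: dict[int, Todo] = {}

              @app.post("/todos")
              def create_todo():
                  body = request.get_json()
                  todo = Todo(id=len(todos) + 1, title=body["title"], done=False)
                  todos[todo.id] = todo
                  return jsonify(asdict(todo)), 201

          Composing a handful of these under shared types and conventions is exactly the kind of work a junior would otherwise be handed.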

          • gsail11 a year ago

            But aren't the devs of the future on GPT right now, half-assing learning to code while getting a fast track education in correcting bugs and architecting?

            If this technology evolves to be able to reliably generate working code to a prompt, the entire field of software dev will shift dramatically. Some junior and some senior devs would prove better at meeting deadlines by AI whispering, and others would have to find new career paths.

    • gardenhedge a year ago

      In that case GPT25 can do all the tech work and all the business decisions and all the marketing work. It will just do everything.

  • rich_sasha a year ago

    Yeah, agreed. I wonder though if it will start nibbling away at the bottom of the pyramid.

    First you'll cut the bottom 10%, then the bottom 20% etc. The pie will only be shrinking.

nicholasjarnold a year ago

> Are lawyers at risk?

There are a lot of sub-areas of expertise and practice that someone with a JD might choose to specialize in. I have some small personal experience in (technically) advising a failed/defunct startup that sought to solve the problem of patent search using an AI. This was years ago, now...maybe around 2018ish. The endeavor failed for various reasons, but it did provide some insight that's relevant to your question here.

As these language models become more advanced (and much more accurate) I think there will be a number of ways in which they will disrupt existing domains of human expertise. Note that I used the word disrupt and not displace. In the patent search space that I was lightly involved in for a short time I basically learned how expensive and time-consuming a good patent search actually is. The machines were planned to be leveraged to drastically reduce the time and cost of typical prior art searches, but would still require human touch-points to interpret results and make final decisions/reports. I think this sort of use-case is much more in line with what will ultimately happen in any sort of foreseeable future. The AI will supplement and ease the previously-human-only task. It will not supplant/replace it.

  • seydor a year ago

    Lawyers can always make a law to pay themselves, programmers can't

    • adamckay a year ago

      Lawyers don't make laws, they argue about them.

      Politicians make laws.

      • seydor a year ago

        Most politicians and lawmakers (esp. the ones that write the laws) are lawyers; they are not SEs.

alfalfasprout a year ago

Here's the problem: this will automate the type of "programming" that's just looking something up on Stack Overflow and more or less copy/pasting the answer. There's a lot of that out there.

Once you're a more senior engineer there's a lot more than just writing code. Designing a system, worrying about maintainability, operational burden, scaling, etc. are where you might spend your time more.

I'd argue that even for "programming" the usefulness is debatable. These models spit out relatively correct code, but mainly in the sense that they regurgitate something akin to an SO answer. There are plenty of subtle logical errors, though, and that debugging exercise often takes longer than just writing the code. Lots of code references other libraries, and APIs do change frequently. So ensuring that what's generated even works as expected is a fair amount of effort.

Still, the chasm between "engineering" work and "programming" work is only going to get bigger as a result of tooling like this. I expect a lot of what's currently outsourced to overseas IT consultancies can be replaced with half the staff leveraging these tools. The bottleneck has always been producing the exact requirements, tightly scoped tasks, etc., though. We're no closer on that front.

yashg a year ago

Let's say you have an idea for a mobile app, but you have never coded anything and you don't want to learn Android/iOS programming to make this app. You want to convert your idea into an actual app that is deployed on the app store, where people can download it on their phone. What would you do? Normally you'd hire a programmer and get them to build, test and deploy this app. Now you have ChatGPT; are you going to use it to build and deploy this app? No. You will still hire a programmer, who will probably use ChatGPT to write some or most of the code as opposed to writing all the code manually. This will save them a lot of time, and since they can complete the project faster, they will charge you less; it may still be more on a per-hour basis, but it's a win-win for both you and the programmer.

AI-assisted coding (or anything else) will not replace the professionals who have been doing it manually so far. It will only make them more productive. They can do more work in less time and charge more for the enhanced productivity.

cranium a year ago

I really fear for the day I need to debug some AI-generated legacy code. It's not really the algorithmic part that scares me, but the naming and code architecture.

These AIs seem so confident when they output BS that it makes you doubt yourself. Now imagine if some code looks coherent, but you find that each line does something slightly different from what the variable names and other method calls suggest. Now you can't trust the names to build a mental image of the code; you have to follow each method call to find out exactly what it does. It would be worse than looking at obfuscated names, because you may think you know what is going on.

  • djmips a year ago

    Some of the legacy code I have to debug makes me wonder if someone already had a GPT 5 years ago... Seriously - it's alien code - at the very least this person doesn't think like me at all.

  • zTehRyaN a year ago

    That is a really useful insight! I share your fear about human-conducted debugging of AI-generated code

spaceman_2020 a year ago

A perspective from a non-professional who has been teaching himself to code:

My knowledge of exact functions is poor. I might know that I can use Framer library to animate on-page elements, but I have little to no understanding of the exact function needed to animate an object from, say, left-to-right on hover.

My normal workflow was to either read the documentation or search StackOverflow for answers. I would then have to rework the function to fit my current use case.

Now, I've been asking chatGPT directly to build the exact function for me.

So far, it's been a massive timesaver. I'll probably learn more if I dig through the documentation, but since I'm a hobbyist, not a professional, it's much more convenient for me to just get the information I need, without digging through Stackoverflow or documentation.

  • burkaman a year ago

    FYI this is probably not a good habit if you're trying to teach yourself, rather than just trying to get some task done. Reading documentation and searching StackOverflow are genuinely useful skills that take practice to get good at. Asking chatGPT is equivalent to just asking a friend for the answer, which is fine if you want to be efficient but not ideal for learning.

    Obviously this doesn't matter if we think chatGPT is so good that you'll never need to read documentation yourself, but I think this is one of those situations where you need to be an expert before you're allowed to break the rules. Without experience, you won't know if chatGPT is really giving you everything you'd get from reading the docs yourself, or only a small and potentially inaccurate slice.

    • pcthrowaway a year ago

      ChatGPT generally goes into a lot of detail about its decisions, and provides detailed explanations. You still have to fact-check it, or verify by running the code, because it will make mistakes, but if that happens you can say "Hey, this isn't quite right because ..., how do I actually do this?" and it will usually figure it out.

      As a software dev of 10 years, I've done the "googling and reading documentation" thing a fair bit, which is kind of like stumbling around in the dark and feeling around to get a sense of where things are. For some well-defined, well-documented things, using ChatGPT is like having an overconfident junior-to-intermediate dev to pair with who's familiar with a stack that I'm not. I still have to guide it a fair bit, and adjust my expectations to account for that overconfidence. But it can absolutely guide me as well, and teach me new things.

      • burkaman a year ago

        Yeah that makes sense, I'm just saying that if you didn't have 10 years of experience, you might not know how to guide it and might not notice when it doesn't seem quite right, and end up learning a lot less than you could.

        It's a little bit like "you won't always have a calculator in your pocket". We do always have a calculator these days, but it's still useful to know how to do arithmetic so you can do things in your head, and notice if an answer doesn't make sense because you made a typo or something. Maybe in the future we'll all have chatGPT running locally on our phones or brain interfaces or whatever and be able to quickly train it on new datasets, but even then it will still be useful to know how to do things yourself.

    • discreteevent a year ago

      I think that sometimes just copying and pasting from stack overflow is not much better than using chatGPT. But I agree with you about reading documentation. When you read the docs you build up a model of the system in your head. You can then play with this model in your head and come up with good solutions. This seems to be exactly what chatGPT can't do.

      Also, I'm senior and sometimes don't get to program for long periods of time. What I find is that when I don't program, I get worse at solving higher-level problems. The important part of programming is not knowing APIs, etc. It is modeling a problem and its solution in a domain that forces you to be precise. For that reason I would say to junior developers: keep programming. It will make you a better problem solver, and it will make you better at the things that ChatGPT can't do.

      • burkaman a year ago

        Even for Stack Overflow, knowing what to copy is a learned skill. Stuff like "this accepted answer with 10k upvotes is 8 years old, I should scroll down and see if there's a newer answer with the right way to do it in $latest version". Maybe chatGPT can handle that, I don't know, I just think that finding the right answer within the right forum post is an important learned skill and I wouldn't want new people to miss out on it. It carries over to many other areas of your life.

      • spaceman_2020 a year ago

        As an aside, programming is the lowest "willpower depletion" activity I've ever done, apart from producing music (i.e. in a software DAW). I can program for hours without getting particularly tired or feeling like I need a break.

        Have to say I'm a little jealous of people who get to do this for a living.

    • spaceman_2020 a year ago

      I understand that and I'm fine with it, especially since I'm using it for a hobby project, and mostly looking up non-core libraries that I'll likely not use often again (such as framer motion).

      My point is that it's making newbies like me way more productive than we have any right to be.

  • discreteevent a year ago

    It will be interesting if it replaces Stack Overflow, considering that it probably trained on a lot of its questions and answers. On the one hand, it's not much different from training on GitHub, or how Google put translators out of business by using their translations. But it is a more direct connection that demonstrates how these guys are funneling the wealth generated by other people's work up to themselves. Before Stack Overflow, the state of questions and answers on the web was really bad and full of noise. They took a risk and put a lot of effort and engineering knowledge into building it.

    What really annoys me is that it will probably further train on this very text I'm writing now. I am writing it in the spirit of exchange with other, similar people. Not in the spirit of some Mechanical Turk worker for OpenAI.

    • burkaman a year ago

      I agree and I think this is similar to some people's very legitimate objections to Stable Diffusion and DALL-E. When people put artwork up on the internet they were expecting a handful of human beings to draw some enjoyment and maybe inspiration from it. They were not expecting billions of identical robots to ingest it in a nanosecond and remember and build off of it for eternity.

      Scale matters, and robot and human inspiration are not ethically equivalent even if you think they are mechanically equivalent.

QuadmasterXLII a year ago

We all have repositories with open issues. I'd suggest that you open up GitHub, click the first issue you have open, and try to close it with ChatGPT assistance. For me, this revealed a gap between its spectacular skill at programming puzzles and its total incompetence at closing issues: it can do dynamic programming challenges instantly, but can't modify (a function that loads a file from disk, or downloads that file if it isn't already present) into (the same function, but it correctly handles a failed download).
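
Concretely, the modification in question is small - a sketch with hypothetical names, assuming the failure mode is an exception thrown mid-download:

    import os
    import urllib.request

    def load_or_download(path, url):
        if not os.path.exists(path):
            tmp = path + ".part"
            try:
                # Download to a temporary name so a failed or interrupted
                # download never leaves a truncated file at the target path.
                urllib.request.urlretrieve(url, tmp)
                os.replace(tmp, path)
            except Exception:
                if os.path.exists(tmp):
                    os.remove(tmp)
                raise
        with open(path, "rb") as f:
            return f.read()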

  • kentlyons a year ago

    I did exactly this. I'm doing some frontend work where I only have basic knowledge, and there was a small feature that was a nice-to-have. When I originally wrote my own solution, it didn't work. So I took my failed solution, put it into ChatGPT, and went back and forth a few times. It also didn't provide solutions that worked when I tried them. So I started asking about alternative solutions, and eventually that sparked a new idea I could ask about. That ended up working.

    Probably a 5 min conversation with a frontend dev would have gotten me that answer faster, but that isn't something I have access to at the moment. And it was far enough down my priority list that it probably would have stayed unimplemented for a long time. With the back and forth with ChatGPT I got there in maybe 30 minutes of playing around. So a direct solution - for sure not. But better than my alternatives, I think so.

  • rich_sasha a year ago

    It's a great experiment.

    One way I think about it, AI can learn when there are many high quality examples. There are tons of blog articles on small topics but none really like "here's 1000 lines of sparsely documented code, good luck". The ability to navigate that is a secondary skill and can't be achieved with even genius parroting.

    Until the parrot becomes even more clever, perhaps...

  • QuadmasterXLII a year ago

    So far, with any amount of coaxing, it can't:

    Fix the above download issue (https://github.com/uncbiag/ICON/blob/752532ff339a6511dfa93f4...)

    Modify an existing class to take the number of iterations as a parameter instead of relying on hardcoded constants (https://github.com/uncbiag/ICON/blob/752532ff339a6511dfa93f4...)

    Fix a function to gracefully handle an old version of git

    This last one is illustrative, as its proposed solution looks elegant, correct, and well explained, but totally fails.

    QuadmasterXLII: This function crashes if an old version of git is installed that doesn't have the :(exclude) feature. Can you fix it?

    ```
    def initialize(run_name=None, output_root="results/"):

        global output_dir_impl
        global initialized
    
        if initialized:
            raise Exception("footsteps can only be initialized once")
    
        initialized = True
        if not run_name:
            if "FOOTSTEPS_NAME" in os.environ:
                run_name = os.environ["FOOTSTEPS_NAME"]
            else:
                print("Input name of experiment:")
                run_name = input()
        output_dir_impl = os.path.join(output_root, run_name) + "/"
    
        suffix = 0
        while os.path.exists(output_dir_impl):
            suffix += 1
            output_dir_impl = os.path.join(output_root, run_name) + "-" + str(suffix) + "/"
    
        os.makedirs(output_dir_impl)
    
        print("Saving results to " + output_dir_impl)
    
        with open(output_dir_impl + "info.txt", "w") as f:
            f.write("Command:\n")
            f.write(" ".join(sys.argv) + "\n")
            f.write("System:\n")
            f.write(subprocess.check_output(["hostname"]).decode())
            f.write("Python:\n")
            f.write(shutil.which("python") + "\n")
            f.write("Git Hash:\n")
            f.write(
                subprocess.check_output(["git", "describe", "--always"]).strip().decode()
                + "\n"
            )
            f.write("Uncommitted changes:\n")
            f.write(
                subprocess.check_output(
                    ["git", "diff", "HEAD", "--", ".", ":(exclude)*.ipynb"]
                ).decode()
            )
            f.write("Current working dir:\n")
            f.write(os.getcwd() + "\n")
            try:
                shutil.copy(sys.argv[0], output_dir_impl + os.path.basename(sys.argv[0]))
            except:
                pass
    ```

    ChatGPT: This function checks whether the :(exclude) feature is available in the installed version of git by calling git diff with :(exclude).ipynb. If the feature is not available, git will return an error and the function will crash.

    To fix this, we can add a check to see if git supports the :(exclude) feature before calling git diff with it. We can do this by calling git help diff and checking the output for a mention of :(exclude). If the feature is not supported, we can call git diff without the :(exclude) feature.

    Here is how the fixed function would look:

    ```
    def initialize(run_name=None, output_root="results/"):

        global output_dir_impl
        global initialized
    
        if initialized:
            raise Exception("footsteps can only be initialized once")
    
        initialized = True
        if not run_name:
            if "FOOTSTEPS_NAME" in os.environ:
                run_name = os.environ["FOOTSTEPS_NAME"]
            else:
                print("Input name of experiment:")
                run_name = input()
        output_dir_impl = os.path.join(output_root, run_name) + "/"
    
        suffix = 0
        while os.path.exists(output_dir_impl):
            suffix += 1
            output_dir_impl = os.path.join(output_root, run_name) + "-" + str(suffix) + "/"
    
        os.makedirs(output_dir_impl)
    
        print("Saving results to " + output_dir_impl)
    
        with open(output_dir_impl + "info.txt", "w") as f:
            f.write("Command:\n")
            f.write(" ".join(sys.argv) + "\n")
            f.write("System:\n")
            f.write(subprocess.check_output(["hostname"]).decode())
            f.write("Python:\n")
            f.write(shutil.which("python") + "\n")
            f.write("Git Hash:\n")
            f.write(
                subprocess.check_output(["git", "describe", "--always"]).strip().decode()
                + "\n"
            )
            f.write("Uncommitted changes:\n")
    
            # Check if git supports the :(exclude) feature
            help_output = subprocess.check_output(["git", "help", "diff"]).decode()
            if ":(exclude)" in help_output:
                # :(exclude) is supported, call git diff with it
                f.write(
                    subprocess.check_output(
                        ["git", "diff", "HEAD", "--", ".", ":(exclude)*.ipynb"]
                    ).decode()
                )
            else:
                # :(exclude) is not supported, call git diff without it
                f.write(
                    subprocess.check_output(
                        ["git", "diff", "HEAD", "--", "."]
                    ).decode()
                )
    
            f.write("Current working dir:\n")
            f.write(os.getcwd() +
    ```
    • QuadmasterXLII a year ago

      Because this code has 7 or 8 unit tests, but none of them verify that notebooks are actually excluded from the diff, the GPT code - which is wrong, and so always calls the fallback - _passes the unit test suite_. Brace yourselves for a new era of buggy code.
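
      For contrast, the fix that actually works is simpler than what it proposed - a sketch, assuming old git exits with an error on the unsupported pathspec: just try the :(exclude) form and fall back only when it fails.

          try:
              diff = subprocess.check_output(
                  ["git", "diff", "HEAD", "--", ".", ":(exclude)*.ipynb"]
              ).decode()
          except subprocess.CalledProcessError:
              # Old git: pathspec magic unsupported, fall back to a plain diff.
              diff = subprocess.check_output(
                  ["git", "diff", "HEAD", "--", "."]
              ).decode()
          f.write(diff)

      And a unit test asserting that an uncommitted .ipynb change is absent from info.txt would have caught the always-falls-back bug immediately.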

samuelfekete a year ago

There’s a bias towards coding capabilities amongst testers (and perhaps trainers) of ChatGPT.

I bet it has (or can have) a similar level of capability when it comes to chemistry, biology, finance, law, etc. - all human knowledge that is expressible as text or formulae.

With regards to software engineering, a great amount of time is currently being wasted by coders trying to solve problems that have already been solved by others. Some of the solutions for that previously included libraries, SaaS, and Stack Overflow.

Now we have another tool at our disposal to 10x all software engineers (and perhaps the same for other industries).

  • jstx1 a year ago

    I think the question is what happens when you're able to 10x all software engineers but don't have job market demand to match the 10x increase. 90% layoffs? Maybe not 90% because these processes aren't perfectly efficient or evenly distributed, and also because you can increase productivity without affecting demand to some extent, but what's the %? 30-50%? That's still huge.

    • pornel a year ago

      Perhaps we'll get the Jevons paradox: instead of reducing the workforce, we'll accelerate software eating the world.

      A lot of software is crappy. The extra productivity could go partially into quality improvement.

    • samuelfekete a year ago

      We haven’t automated 10% of what could be automated, especially in tasks that were previously not cost-efficient to automate or that required AI tech to automate.

      • rich_sasha a year ago

        My understanding is that many of the difficulties with automation actually have to do with interactions with the physical world. We still don't have a machine for folding laundry that matches human performance.

        Coding is nice for AI to overtake because it's all abstract (apart from context). The problem domain is literally just string-to-string mapping.

        • skydhash a year ago

          Coding may be abstract, but execution of the resulting program is not. And the results of the execution are driven by real-world needs. The truth is that a human can invent things because they can pattern-match across whole domains. You can say "there is a mechanical solution to that; how can we make an algorithm with the same result?" AI cannot, unless the algorithm was already created. I think the current state of AI is great for searching and for creating starting points, but it can never get us to the finish line.

  • civopsec a year ago

    Finally, a software engineering silver bullet, post-"No Silver Bullet".

pickingdinner a year ago

The problem with artificial writing is that writing alone doesn't accomplish much. Art is the expression itself, so AI art is the final product. But with writing, the goal is often to communicate something novel, which the AI wouldn't know, or to transact and get something done, which the AI won't have the authority or access for.

Take customer service chatbots. It really does not matter how "human" the bot is if it isn't given any power, such as refunding an order. It can't really apologize either, without lying about its lack of empathy, creating a real-life Turing test between it and the customer.

Of course, businesses justify making refunds hard, so they often deny even humans that power, and arm them with templates and legal excuses anyway. Which explains why most customer service is sh*t. They're not there to help you, as much as they are paid to entertain the thought of helping you.

With code, at least the code is itself the end result, like the art. And it can be tested with execution and with unit testing. But most programming is still deciding what to name functions and what functions are needed. The implementation is just the technical task that "anyone" or "anybot" might do. It's the mindless part.

So like AI driving, the last mile of the problem may prove to be close to impossibly hard. Or at least, it isn't an AI problem that can be solved with current AI. It needs another breakthrough. Until then, even AI will continue to rely on human input. The furthest we'll get with AI coders is AI coders taking instructions from a coder, with the end result heavily dependent on what the human adds, not what the AI adds.

yagodragon a year ago

As a developer, I legit feel crippling anxiety and might need professional help.

People saying it's not intelligent really don't get it. It looks like it's working exactly like the human brain. We humans also pretend to know what we're talking about, and often talk with authority without really understanding what's going on. When you apply for a react.js position you pretend to be a JS god when in fact you know shit - just enough to be able to stitch things together and make them work.

The rate at which technologies, languages and frameworks change also doesn't give you the chance to master your craft. In fact, mastering a specific tool/framework is risky for your career, because new tools emerge every 3-5 years. Every company uses some variation of React, Vue, Python, Go, Java, Ruby, whatever; it's impossible for the human brain to keep up and be good at everything. We can only scratch the surface of the complicated mess of a tech stack, and that's exactly what GPT-3 is extremely good at right now. So yeah, I fear a lot...

I can't imagine how good this is gonna be in 5-10 years. I legit feel scared and anxious. How can we prepare for this future?

  • nso95 a year ago

    Let the dust settle before you start predicting the end of your career

  • zshrdlu a year ago

    Have you tried chatting to it? I think it would ease your anxiety. It is impressive, but you'll see what it's definitely not when you actually sit down and quiz it. Still impressed though.

constantcrying a year ago

None of these AI systems significantly outperform searching for the answer online. The AI is good because:

- The programs it generates are small
- The programs generated solve well-known problems with well-known answers

Current AI replaces SW developers the same way StackOverflow and blog posts replace SW developers.

The real question is how fast new AI which overcomes these challenges will be developed. I think it is pretty certain that current language models will never be a serious contender against programmers; their whole structure makes it very difficult to perform actual logical reasoning about the structure of a computer program.

Another thought: how useful is an AI which is correct 99% of the time? How much development effort would be needed to fix a code base of which 1% is total nonsense that does not even compile or is just plain wrong? Would the time spent by developers (none of whom have ever seen the code) fixing that code base actually be less than a total rewrite?
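
To put a number on it: if each generated unit is independently correct 99% of the time, a code base assembled from 100 such units is entirely correct with probability 0.99^100, roughly 37% - so most such code bases ship with at least one piece of nonsense somewhere.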

  • steve_adams_86 a year ago

    > Current AI replaces SW developers the same way StackOverflow and blog posts replace SW developers.

    You can ask GPT how to optimize code and it'll actually give decent responses in a few seconds, though. That's new. It can reveal answers to questions you're not sure how to ask, at least as a less experienced developer.

    All of its optimizations so far have been things I’m aware of except for one, which was mostly due to me not writing that kind of code very often.

    That’s becoming superior to blogs and SO if only because the response is rapid and concise.

    As for correctness, honestly, I come across so much human-generated stuff online with errors that I read docs and experiment unless I can find the answer from a source I have high confidence in. People with less experience don’t have that luxury, so they go down far more rabbit holes without realizing it.

    GPT will still parrot some of the bad ideas, but eventually it could be trained on industry standards. Tools like copilot are poised to become incredibly helpful, I think.

makestuff a year ago

IMO for now it is the next iteration of Stack Overflow/Google. I will 100% use this to ask a question on how to do something at the API/function level, because it gives really good results.

Ex: "How can I create a webserver in python" will generate all of the boilerplate instead of me going and looking at the flask documentation.

_benj a year ago

> any layperson can say "hey AI slave, write me a program that..."?

I think this is one of the big challenges. From my limited experience, having a person without expertise describe accurately enough what a program should do is often a challenge in itself. Furthermore, it is often only after an initial implementation that even engineers are able to understand the full requirements of a system. It's possible that generating yet-another-CRUD-app wouldn't be any challenge for GPT, but at that point there could be market saturation of CRUD applications (is that not the case already?) and value might shift to apps that are able to do more than just CRUD.

Then again, predicting the future is tricky business when we are so notoriously bad at predicting what we can do during the next sprint! :-D

  • skydhash a year ago

    Describing accurately what a program should do is already possible with a programming language. But it requires logic and an understanding of what a program can do and how it does it. People who want AI either want a way to help deduce the logic needed to solve a problem, or want to automate the boring part of the implementation. The first can already be done by researching, and the second by creating and using frameworks.

    Unless we find a way to add actual reasoning and creativity to AI, I wouldn't feel threatened, because coding does involve the two. As do writing and painting. So there will always be a human at the helm. But what AI is becoming good at is finding patterns and reproducing outlined steps. And if your job can be reduced to these, then you're in danger.

Quarrelsome a year ago

I feel like this is a false opposition of AI vs programmers. I would argue it's never going to happen, IMHO, because AI + programmers is going to be so much better than either on their own.

Even if you have something generating amazing code, you still need the energy to deploy it, check that it works, put the docs in the correct folder and email somebody.

So IMHO the future we all need to be looking at is how these tools can make us all more productive, and then the juniors just get a different bunch of shit to learn. Maybe we develop more focus on writing tests to "prove" the software is correct, as opposed to reviewing the code output? Idk, but I don't see the industry suddenly replacing devs, given that the problems are not entirely reduced to simply writing code.

softwaredoug a year ago

I do think it's worth wondering about, BUT:

* Most non-trivial systems are complex: thousands of lines, many systems working together, solving complex domain-specific problems, not toy examples we can produce from ChatGPT

* CoPilot and ChatGPT introduce subtle bugs, so to use these for code generation, you'd have to write many tests. A LOT of tests to confirm it worked. At a minimum this requires careful thought

* Getting the prompt to do very subtle things right seems time consuming. Why not just write the code (perhaps with guidance from AI)?

Maybe it's a fun way to get started, and more convenient than a search engine, but I'm not convinced YET that you can make it do anything non-trivial without a lot of work and conversation with ChatGPT.

  • DoesntMatter22 a year ago

    Humans introduce subtle bugs as well. I've found that it gives me a great starting point and then I can clean up from there. It does 80 percent but then I go do the rest.

xkfm a year ago

ChatGPT feels like the current aim-assist debates in a lot of FPSes to me. It'll make you better at the shooting part of the game - perfect, even. But it won't necessarily make you that much of a better player, because aiming is only one aspect of what makes someone good at FPSes. However, if someone is generally good enough or very good at the "not aiming" portion of the game, then having aim assist would drastically increase their overall skill.

Also, ChatGPT doesn't really work for any UI based programs at all. For example, if you want help using Excel, you'll get a list of instructions, but nothing visual to help you out.

BilalBudhani a year ago

I feel AI tools will just accelerate software development overall rather than replacing software developers entirely. A lot of boilerplate code, "how to configure X", and the routine 80% of development will be taken care of by these AI tools; the remaining 20% will be left for software developers to fine-tune and to make the functionalities work together.

kmac_ a year ago

It's like code completion on steroids, but it still lacks depth and detail. It can create a code mashup and glue it together perfectly in no time, so it's worth a lot, as most dev work looks like that. So I predict that it will be a standard tool at some point. Still, it will require a skilled dev who understands what to ask and how to interpret the answers. If somebody asked me whether I'd agree to be operated on by a "junior" surgeon who is helped by such an "AI", I would say no.

  • skydhash a year ago

    Not really unless you’re stuck on a problem. And it may be better to google it and find the original solution and its context. It is hard to read code because you have to recreate the abstract model of the program in your head so you can understand it. Imagine doing that for AI’s code you can’t really trust because its provenance is not related to a real context, but something constructed with your ambiguous prompt.

alganet a year ago

It will raise the bar for software development significantly.

GPT needs training data. Once a problem is "figured out" and there is enough information about it to reach a threshold where it's learnable by the AI, it will stop being something done manually.

There is reason to believe that most commodity software (e-commerce, content management, media asset management, any CRUD-with-makeup, infrastructure automation, etc.) will be commoditized even further, to the point that no software developer would be needed to create a product that leverages these approaches, or any reasonable combination of them.

Sure, there will be a need to guide the AI to the best solution. A need to harness it properly. Some software development skills will be needed for that, and my guess is that this will become a temporary occupation in the near future, either explicitly or not.

Eventually, there will be problems that large models can't solve. Not even partially. Stuff that has no training data and for which there is no way to acquire inferred knowledge. We'll know what that is only when the commoditization of what we already know is almost complete. My guess is that it will be something trivial but rarely considered worth exploring, writing about or pursuing, yet valuable. Some kind of impossible miracle software that will be not so impossible in the future (complete protocol/data portability, wide compatible standards across thousands of different systems, etc.).

Or we all will have to learn how to flip burgers in a boring dystopia. That's a reasonable scenario as well.

  • dragonwriter a year ago

    > There is reason to believe that most commodity software (e-commerce, content management, media asset management, any CRUD-with-makeup, infrastructure automation, etc.) will be commoditized even further, to the point that no software developer would be needed to create a product that leverages these approaches, or any reasonable combination of them.

    But then:

    > Sure, there will be a need to guide the AI to the best solution. A need to harness it properly. Some software development skills will be needed for that, and my guess is that this will become a temporary occupation in the near future, either explicitly or not.

    So, just like every past tool that will “replace software developers”, it will “replace” them with a technology which (1) requires people with software development skills to effectively deploy, and (2) greatly increases the output of the people with software development skills so employed, increasing the demand for software development skills.

    And that’s even before considering the problems you speculate on that LLMs won’t even be partially applicable to.

    • alganet a year ago

      Yeah, that is pretty much the idea. It will raise the bar.

      This kind of stuff happens in cycles, though. Once there was black-and-white photography, and hand-colouring those black-and-white pictures with a brush was a popular job in high demand. Color film changed that, moving those jobs to a niche reserved for artists.

      Some decades later, Photoshop was popularized and touching up photos became a thing again. Something we hadn't even imagined was needed anymore, and now looks trivial, was suddenly in high demand. Some skills were transferred to the new thing, but we can't say people skilled in hand-colouring are good digital image artists. Not generally.

      A good hand-colourer might have focused on "never making a mistake" so as not to ruin the picture. Now that skill is irrelevant; we have infinite undos.

      Software development nowadays favors pragmatism and values bit-shaving abilities that are hard to hone. Complex solutions to problems are discouraged because they impose a huge cognitive load on maintainers and refactorings.

      ChatGPT might change that to favor broad generalist megalomaniacs who are able to come up with solutions involving exotic combinations of algorithms and techniques that would require an army of previous-generation developers to maintain. It might favor the large refactorings and rewrites that are commonly frowned upon in the current culture, so no one thinks about them.

      I don't actually know, my point is that the skillset might drastically change.

  • seydor a year ago

    The biggest accelerator will be (once again) GUIs. GPT will replace the keyboard; people will drag and drop stuff and connect pipes to draw and build anything.

    • purerandomness a year ago

      We've had Visual Basic, UML, and no-code visual programming tools for decades. GUI builders like Delphi and Visual Basic especially died with the advent of responsive screens.

      They don't scale. No one wants to drag around logic elements visually.

      Code is much more expressive, can be refactored and diffed easily.

      • alganet a year ago

        No one wants to drag around *logic* elements visually. I agree.

        What if you could just mock up everything and let the AI fill the blanks with an automatically generated backend? I'm not sure it's possible, but I would like that.

        If this thing (useful generalist AIs) takes off, there will probably be no single silver bullet that represents its full potential.

        • purerandomness a year ago

          But the hard work isn't writing backend CRUD code.

          The hard part is actually deciding and describing what the backend should do. How should it work? What decisions should it make? What's the business logic it has to follow?

          You'd want to describe that in an unambiguous way, so you'd have to invent a very strict syntax so that ChatGPT has no way of misunderstanding your intentions. You'd end up with a programming language eventually - just adding more complexity along the way, but you'd end up with code.

          Now if you feel that writing boilerplate backend code is tedious, repetitive and verbose and could be automated, I totally agree. Any tool that makes that easier is welcome. But then a language model is the wrong tool for the job, we need more tools like PostgREST, ReTool, and Zapier to not end up writing too much unnecessary boilerplate code.

pornel a year ago

It is a personalized Stack Overflow on steroids, so I expect it to not only increase productivity on boilerplate tasks, but also to teach people how to program in the process.

In addition to classically trained and self-taught, we're going to have AI-taught programmers. There will be some curious cultural differences there.

We're the old guard now: "Listen kiddo, in my days, we had to google for stuff! We were lucky if there was any documentation to read, manually!"

triggercut a year ago

I've been looking at simple mathematical logic problems that are computer-science-adjacent, and while I've had a little success on some simple toy scenarios, I've found it incredibly difficult to get it to provide even "directionally correct" solutions for anything useful.

Coding is probably a pretty restricted problem space by comparison. After all, there are only so many ways you can handle a string before something throws an error.

srinathkrishna a year ago

The way I look at it, it's an excellent companion that will help me automate a lot of tasks:

1. Sending a quick email to my team saying I'll be late.

2. Adding boilerplate stuff that I keep writing again and again.

3. Writing very quick-and-dirty PoC code.

4. Doing simple usage analysis of logs/metrics to learn patterns.

After these, the onus would be on me to focus on what needs to be done.

ergonaught a year ago

Until the "AI Slave" is able to formally identify that it has produced a correct solution, which requires the ability to identify and correct non-trivial mistakes including at the "conceptual" level, and probably to provide the "rationale" for its "thinking", it is unlikely to affect this particular area too much.

jhoelzel a year ago

Oh, wait until you realize that most software is basically the same CRUD, and that most "unique special solutions" to problems are not as unique as you thought.

Finally, the most common reason the "stack is hard" in startups is that there was an "all-knowing" coder who built a system he can't be replaced out of.

As a freelancer, it's always nice to see how quickly they change their manner once you show them that "you can do it too", and that all it takes to break into a stack like that is reading up on the 15 frameworks they used to build it.

All in all, I don't think AI will replace us, but mainly because it's going to be cheaper to hire a formally trained IT veteran than to use the AI.

Are the days when we were treated as special over? Yes indeed. We never were more than handymen for computers.

  • deterministic a year ago

    If the software you are working on is that simple, then you can simply auto-generate it using a code generator - which is what I do at work for CRUD stuff. No need for an AI. And hey, it hasn't replaced my job.
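    To illustrate, a generator for simple CRUD can be little more than string templates. A minimal sketch (the table and field names are made up):

      # Minimal template-based CRUD generator (illustrative only).
      def generate_crud(table, fields):
          cols = ", ".join(fields)
          marks = ", ".join("?" for _ in fields)
          defs = ", ".join(f"{name} {sqltype}" for name, sqltype in fields.items())
          return "\n".join([
              f"CREATE TABLE {table} ({defs});",
              f"INSERT INTO {table} ({cols}) VALUES ({marks});",
              f"SELECT {cols} FROM {table} WHERE id = ?;",
              f"DELETE FROM {table} WHERE id = ?;",
          ])

      print(generate_crud("cart_item", {"id": "INTEGER", "name": "TEXT", "price": "REAL"}))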

    • jhoelzel a year ago

      Yeah well, try to imagine a CRUD generator that has read every definition of what a ShoppingCartItem is and generates it for you with all the bells and whistles, instead of you defining properties and hitting generate.

      I have used many of the approaches you mean for years with Symfony, but it's not the same. Text replacement is clearly less adaptable to begin with.

Sevii a year ago

I think we will continue to be surprised by how effective GPT-X and other LLMs are at coding tasks. The main reason is that, unlike in math, programming is a field where dozens or hundreds of correct solutions exist for every task.

All an LLM has to do is generate one of the hundreds of potential programs that solve the problem. It doesn't even have to generate perfect code; lots of profitable businesses are built on buggy software.

What's wild is that, for now, ChatGPT seems to benefit 'stack overflow' programmers most - the ones who rely on copy-pasting solutions other people wrote. Now they can effectively generate a Stack Overflow answer to their specific question, instantly!

For now AI seems like it will empower them.

lamontcg a year ago

I tried getting ChatGPT to output a bilinear tangent steering guidance for a rocket yesterday, and it never quite gave me what I wanted (though it regurgitated some interesting adjacent material it had been trained on - I think it was pulling from autonomous-vehicle trajectory optimization rather than rockets).

It was pretty good at simple stuff, like taking one steering algorithm I had, decomposing it from a directional vector into pitch/yaw, and doing the substitution for me. It does high-school algebra pretty well - easier than using symbolic computation in Matlab.
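For reference, the kind of vector-to-angles decomposition described is roughly this - a sketch, assuming a unit direction vector in an x-forward, z-up frame:

  import math

  # Unit direction vector -> (pitch, yaw), assuming x forward, z up.
  def direction_to_pitch_yaw(x, y, z):
      pitch = math.asin(z)    # elevation above the x-y plane
      yaw = math.atan2(y, x)  # heading within the x-y plane
      return pitch, yaw

  # The inverse substitution: recover the unit vector from the angles.
  def pitch_yaw_to_direction(pitch, yaw):
      return (math.cos(pitch) * math.cos(yaw),
              math.cos(pitch) * math.sin(yaw),
              math.sin(pitch))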

qualudeheart a year ago

As I have said elsewhere many times: you still have time to make it big with crypto or startups. If you engineer your way into high-frequency trading, you can also make a lot of money. If you can do this and FIRE, you can become an early investor in the companies that will dominate the multipolar AI era in the years approaching AGI. Do this correctly and you will eternally stand apart from the UBI pleb class that many of our less competent brethren will no doubt join.

virajvchavan a year ago

Response by ChatGPT:

It is certainly possible that in the future, language models like GPT-3 could become advanced enough to compete with human programmers. However, it is important to note that these models are still limited in their capabilities and are not yet at the level of human programmers.

At present, offshore outsourced coders do not pose a significant threat to the software industry, and it is unlikely that language models like GPT-3 will be able to fully replace human programmers anytime soon. However, it is possible that these models could be used to automate certain tasks and make the work of human programmers more efficient.

If language models were to become advanced enough to compete with human programmers, it is likely that software engineers would need to adapt and develop new skills in order to remain competitive. This could include focusing on areas where language models are not yet advanced, such as visual and fine motor skills.

As for protecting themselves from potential automation, software engineers can stay up-to-date on the latest technology and continue to develop their skills in order to remain competitive in the job market. It is also important for the industry as a whole to advocate for policies that support workers and ensure that they are not displaced by technology.

Overall, while it is possible that language models could become advanced enough to compete with human programmers in the future, this is not currently the case. Human programmers still have a valuable role to play in the software industry, and it is important for them to continue to develop their skills in order to remain competitive.

  • 63 a year ago

    I've noticed that ChatGPT tends to give non-answers to specific questions like this. It will reply with an answer that "makes sense", but it's just repeating a very high-level observation that means essentially nothing. For example, "This could include focusing on areas where language models are not yet advanced, such as visual and fine motor skills" literally just repeats the OP without any extra insight, and the conclusion is entirely about the present day, which is not what the question asked about. I like to say it reads like a 7th-grade essay on the subject.

    ChatGPT is definitely very good at some things (ask it how to set up nginx), but I get easily frustrated when I ask it something specific but non-technical and it gives me an answer like this. It's like it's afraid to pick a side on anything.

    • TillE a year ago

      > something specific but non-technical

      It does a pretty good job with factual questions, in my experience. It gives good, if shallow, answers about history.

      But I mean if you ask it for opinion or analysis, of course that's silly. It can't analyze anything.

    • olodus a year ago

      This, if nothing else, is why AI will take over and rule the world - it has already surpassed politicians in terms of non-answers.

  • olodus a year ago

    Oh no, it has happened - it has replaced me at my side gig: writing comments on HN, YouTube and Reddit.

deterministic a year ago

I haven't seen any AI-generated code that comes anywhere near the complexity I deal with in my daily job (large-scale enterprise software running operations and planning for international companies). Describing in English what the software should do, then testing it and fixing the AI's mistakes, would probably be as much work as just writing it in a programming language.

And we already have a technology that gives us instant, highly advanced, well-functioning code we can reuse in our work: it's called frameworks/modules/libraries/... Programming is not like art; we already directly reuse millions of lines of code written by other programmers. AI for programming will only be interesting when it comes up with truly groundbreaking algorithms that humans haven't been able to come up with already.

mangosteenjuice a year ago

ChatGPT gave incorrect solutions to most of the iOS UI prompts I fed it.

They would look correct to someone who knows how to program but doesn't know iOS. Most of them even compile. But they're very wrong, even when I tried dozens of different inputs trying to coax the correct solution out of it.

  • emehex a year ago

    But are we confident GPT-4 will fail in the same ways? What about 5? I've been coaxing it to write code all day, and it's honestly pretty good. It can get stuck, sure, but when it does and I provide the compiler error, it can fix the problem about 50% of the time...

djmips a year ago

If you are a programmer you will know that debugging your own code and other people's code makes up a significant portion of your job. The GPT coding systems I've seen so far don't make much of a dent with that aspect of a programmer's life.

gaurangt a year ago

I think for now it behaves mostly like a junior SE. While the code is decent:

1. It sometimes has errors, and hence doesn't compile.

2. It's not the most optimized code.

3. It could contain security vulnerabilities: since it doesn't have access to the internet, it isn't aware of the latest exploits particular to an industry or a programming language (I guess the training data isn't the most up to date).

So, all in all, we are OK for now, as long as we keep levelling up and evolving beyond script kiddies.

But hey, we never know what could happen in the next 10-20 years if it continues evolving at this pace.

sross07 a year ago

Has ChatGPT been able to solve problems that are novel and unique in software engineering? For example, if you are an SE in a niche domain, would it understand the domain-specific language needed to code in that environment?

willtai a year ago

A bit of a different perspective: ChatGPT could help people prepare for Leetcode-style puzzles very well. SEs who don't often practice these types of puzzles can benefit from ChatGPT when preparing for Leetcode-style interviews.

I tried using ChatGPT to debug hard Leetcode questions and it was able to debug its own wrong solution together with me and I personally learned something from it.

https://twitter.com/willtai_/status/1599565171051331585?t=Ve...

cm277 a year ago

It's probably too early to tell, but my wild guess is that it will add another conceptual level of abstraction to programming. Instead of, say, pseudocode→code→assembly, you'd get description/use-case→code (via GPT)→assembly. I put assembly there on purpose; just as higher-level languages made programming more accessible and opened up the profession, I'd expect this to (eventually) happen here as well. Now, how do we get there from here, and which companies lose out in the new world? Who knows - we haven't had a paradigm shift like this since the iPhone.

JonChesterfield a year ago

The obvious answer is that deciding what to build is the hard part - but that rather invites the question of whether building lots of stuff at random and throwing it at the wall would have a better hit rate than deliberately building the right things.

An interesting angle is testing: successfully tying a "real program" to one of these AI tools would look a lot like a fuzz tester - possibly a better one than the current state of the art.
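A hand-rolled sketch of that idea (the function under test is hypothetical; an AI-driven tool would presumably generate far more structured inputs than this):

  import random
  import string

  # Throw random strings at a function and check one invariant: it must
  # never raise. `parse_config` is a stand-in for the real program.
  def fuzz(parse_config, trials=10000):
      for _ in range(trials):
          s = "".join(random.choices(string.printable, k=random.randint(0, 64)))
          try:
              parse_config(s)
          except Exception as e:
              print(f"counterexample: {s!r} -> {e}")
              return False
      return True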

Philosophically I'm not hugely worried about automating the wisdom of Wikipedia and Reddit for mass application. Seems more likely to rot than improve itself. But we'll see.

anonymousDan a year ago

Can I ask what the hopes are for a truly open-source, self-hostable version of ChatGPT? I really worry about the implications of giving a single company so much control over the development process.

devmor a year ago

Personally, I don't think our jobs are at risk, but some junior-level positions might see a reduction in volume.

More than anything, I see this type of system being used in combination with something like GitHub Copilot to produce much more useful suggestions through a feedback loop.

Things that used to be long, annoying-but-easy tasks may be offloaded to ML.

Remember that GPT is ultimately a very fancy Markov chain. It does not and cannot reason. It's more likely to replace busywork and give engineers more time to work on important architectural features.
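The analogy, reduced to a toy (an illustration of the comparison only - GPT conditions on far more context than one preceding word):

  import random
  from collections import defaultdict

  # Toy bigram Markov chain: the next word is chosen purely from
  # observed frequencies, with no model of meaning.
  def train(text):
      chain = defaultdict(list)
      words = text.split()
      for a, b in zip(words, words[1:]):
          chain[a].append(b)
      return chain

  def generate(chain, word, length=10):
      out = [word]
      for _ in range(length):
          followers = chain.get(out[-1])
          if not followers:
              break
          out.append(random.choice(followers))
      return " ".join(out)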

ashrafulla a year ago

I think the programming working model will change, as it did years ago when we moved from punch cards to files, and then from assembly to higher-level languages. Programmers will continue to program solutions, with assistance from code automation for the less business-focused parts of the problem at hand. Consider the more analog version of this: nowadays we spend more and more time copying answers from StackOverflow, and less time debugging at a lower level.

pkilgore a year ago

Until it can change existing code, I think we're ok.

  • pornel a year ago

    Sorry, it can.

    Currently it can work at the function level and make reasonable modifications to existing code (sometimes buggy, but that may be fixable with a larger model and better training data).

    The current chat UI isn't suitable for working with larger codebases, but it seems within the realm of possibility to extend it into an IDE, like Copilot, with tooling that feeds it your whole codebase and documentation to work with.

    • bluejellybean a year ago

      Yup, it can! I've been able to get it to do simple examples. I got it to print out a 'sum of digits'-style function, then purposely placed a silly bug in it by hand. I then ran the code through a 'please code review'-style prompt and got a number of decent suggestions back. Using those suggestions, I was able to prompt it to refactor the function to fix the bug, add documentation comments, include type definitions, and even add tests!

      A simple example, and not 'perfect', but it's quite impressive. Even a simple case of throwing your code 'over the wall' to the AI for review before going to an actual person is a huge time-saver.

      The token limit is the killer here: it means larger files, let alone whole projects, can't be fully processed in a single prompt.

      Initial code produced:

      ```
      def sum_of_digits(n):
          if n == 0:
              return 0
          else:
              return n % 10 + sum_of_digits(n // 10)
      ```

      I didn't save the exact code review, but essentially it was just a list of suggested code changes. The prompt was:

      ```
      prompt = f'Python3. Act like you are a senior engineer writing a code review comment on the code. Write at least two specific changes to make, being sure to check for any errors first: {prompt}. Be as concise and correct as possible, and also be sure to write in a way that is easy to understand.'
      ```

      The resulting output code was:

      ```
      import unittest

      def sum_of_digits(n):
          if n < 0:  # Added base case check for negative numbers
              return None
          elif not isinstance(n, int):  # Check type of argument passed in
              raise TypeError("Input must be an integer")
          elif n == 0:
              return 0
          else:
              return n % 10 + sum_of_digits(n // 10)

      # Unit tests using built-in test suite
      class TestSumOfDigits(unittest.TestCase):
          def test_sum_of_digits_positive(self):
              self.assertEqual(sum_of_digits(123), 6)

          def test_sum_of_digits_negative(self):
              self.assertEqual(sum_of_digits(-123), None)

          def test_sum_of_digits_type(self):
              with self.assertRaises(TypeError):
                  sum_of_digits('abc')

      if __name__ == '__main__':
          unittest.main()
      ```

      Edit: trying to cleanup hn formatting for readability

      • Jtsummers a year ago

        > Edit: trying to cleanup hn formatting for readability

        HN code formatting is simple: prefix each line with 2 spaces, and then you don't need any extra blank lines like you do with normal paragraphs.

          class TestSumOfDigits(unittest.TestCase):
              def test_sum_of_digits_positive(self):
                  self.assertEqual(sum_of_digits(123), 6)
        
              def test_sum_of_digits_negative(self):
                  self.assertEqual(sum_of_digits(-123), None)
        
              def test_sum_of_digits_type(self):
                  with self.assertRaises(TypeError):
                      sum_of_digits('abc')
        
          if __name__ == '__main__':
              unittest.main()
        
        ``` does nothing; it's just noise here.

carapace a year ago

cf. Engelbart; and Ashby, "Introduction to Cybernetics"

The fundamental principle of intelligence augmentation (of human intelligence by machine assistance) comes from selection: the machine can generate options beyond those which the un-augmented human mind can reach, and the human can then select from among them.
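That selection loop is easy to sketch (the scoring function here is a stand-in for human judgment):

  import random

  # The machine proposes many candidates; the human-as-selector keeps the best.
  def propose(n=100):
      return [[random.random() for _ in range(5)] for _ in range(n)]

  def select(candidates, judge):
      return max(candidates, key=judge)

  best = select(propose(), judge=sum)  # `sum` stands in for a human choosing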

Anyway, the question is, "What is good?"

It's an open-ended intelligence test built into the structure of the Universe, of all possible Universes.

Additional references:

Wendell Berry "What Are People For?"

Larry Niven, the character Teela Brown (spoiler alert)

"Five Whys" diagnostic technique and "Core Transformation Process" a similar technique for human consciousness

Also "Intuitive Ordinal Notations"

https://github.com/semitrivial/IONs

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

jmoak3 a year ago

Can we discuss teaching post ChatGPT+?

I'm considering giving my local high school system a call to raise awareness. When I was in college 10 years ago I abused Wolfram Alpha for some math work, but this is a whole different level and may require a new model of instruction.

Why give free response homework when an AI can just generate 80% of it?

This will radically alter homework as we know it :)

  • y-curious a year ago

    Yes, I think this is already a problem with student essays [and using GPT-3 on them]. I would love to see teachers teach how to actually use these tools instead of ways to avoid them. For example, it's really hard to get exactly what you want from Stable Diffusion; an assignment based on tweaking it seems a lot better than avoiding it altogether.

minerva23 a year ago

I can’t believe I have to say this, but please consider not trivializing slavery by referring to twiddling transistors as such.

secfirstmd a year ago

I think documentation will be one of the first places we see it used a lot. I say this because I'm writing documentation today about how to use sprints, Jira, and projects within our company. I've written over 50 percent of it with the help of ChatGPT, which lets me focus on the less boring writing.

Jack000 a year ago

Keep in mind that the current iteration of ChatGPT doesn't try to execute the code; its understanding is purely based on "reading" existing code.

This tech could potentially be integrated with a compiler and trained through self-play, like AlphaZero (possibly with human guidance at a meta level). They could also add some kind of visual modality for UI code.
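The execute-and-score half of that loop is already straightforward to sketch; the model call is the hypothetical part. A minimal illustration, not a real training setup:

  import subprocess
  import tempfile

  # Run a candidate program plus its tests; pass/fail becomes the reward
  # signal that a self-play setup would feed back to the model.
  def reward(candidate_source, test_source, timeout=10):
      with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
          f.write(candidate_source + "\n" + test_source)
          path = f.name
      try:
          result = subprocess.run(["python", path], capture_output=True, timeout=timeout)
          return 1.0 if result.returncode == 0 else 0.0
      except subprocess.TimeoutExpired:
          return 0.0  # non-terminating candidates score zero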

Once this happens, 90% of devs that work on CRUD style apps will lose their jobs, imo. The software industry will probably become a lot more stratified, like the music industry - a small percentage of ML researchers and devs making a ton of money at the top, close to zero for everyone else.

Software development is most at risk vs other "soft" industries, because for the most part code either works or it doesn't, and it's fairly easy to check.

lamontcg a year ago

Could we try having humans learn to evaluate other humans without leetcoding?

I mean it sounds fucking crazy just typing it out on my keyboard, but could we possibly do it?

ineedausername a year ago

ChatGPT can and will make us far more productive.

The obvious conclusion is that you're going to need fewer engineers overall, especially at the junior and mid levels.

  • youniverse a year ago

    I wonder what will happen if every tech company cuts its SWE salaries AND headcount in half over the next 10 years as this technology advances. Perhaps you could have a team of 10-15 engineers under a CTO build and maintain a pretty complex product? It will be interesting to see where things go, for sure.

    At least I will be able to build some cool software on my own or with a small team!

    Position yourself to be at the opening of the next door. :)

Spooky23 a year ago

I think in-house counsel doing contract reviews and the like are most at risk. These AIs can flag risks pretty well, with a real attorney triaging the issues.

echobear a year ago

As a computer science student, I am slightly concerned, but I also don't think AI as it currently exists could complete software engineering tasks with the required level of specification and efficiency. But maybe in 10 or 20 years...? Who knows what the landscape will be like.

falleng0d a year ago

> Are these language models better than current offshore outsourced coders? These can code too, sort of, and yet they don't threaten the software industry (much).

I felt kind of offended by this biased view (especially the "sort of" part). Just because someone is not from the same country as the OP doesn't mean they are in any way inferior.

  • rich_sasha a year ago

    I'm not from any of the countries you may be thinking of.

    I'm talking about outsourcing software to the lowest bidder - who is almost always "somewhere" abroad - and who isn't really competition for a skilled SE.

    Rephrase it how you like, but it's a real effect.

    • falleng0d a year ago

      Ok. I can agree to that

9wzYQbTYsAIc a year ago

Quality Assurance / Testing / Verification and Validation

  • DrWorldwide a year ago

    Are you saying that these are the disciplines where humans will still be needed, or that these are disciplines that AI will take over?

    IMO it's the former, simply because it's still a human world (a la "it's an analog world" ca. the DSP revolution) and nothing will replace human agency, accountability, and, ultimately, skin in the game.

    • 9wzYQbTYsAIc a year ago

      I am saying that those are some of the careers that could expect a brighter outlook, post-singularity or what have you.

      I agree that human accountability is something not likely to ever disappear.

seydor a year ago

Like all professionals, SEs shouldn't rely on a paycheck long term. They should build some form of wealth to profit from, like real estate or virtual real estate (an app or something). I don't think the crazy salaries of the '10s are repeatable.

jmoak3 a year ago

Here is an unedited transcript with OpenAI's GPT-3. My prompts are peppered throughout, but I believe them to be obvious.

/START

Please list some microservices I would have to create in order to implement a system to recommend a daily workout to a user. Assume the user's information is stored in a database.

1. User Profile Microservice: This microservice would be responsible for retrieving, updating, and storing user information (e.g. age, weight, height, etc.) from the database.

2. Workout Recommendation Microservice: This microservice would be responsible for recommending a daily workout to the user based on their profile information.

3. Workout Tracking Microservice: This microservice would be responsible for tracking the user's workout progress and updating the database accordingly.

4. Exercise Library Microservice: This microservice would be responsible for storing the exercises and other relevant information (e.g. descriptions, images, etc.) in the database.

5. Notifications Microservice: This microservice would be responsible for sending notifications (e.g. reminders, updates, etc.) to the user.

Please list a suitable tech stack for 2 people to implement these services with.

Front-end: HTML, CSS, JavaScript, React.js

Back-end: Node.js, Express.js, MongoDB

Testing: Jest, Enzyme

Deployment: Docker, AWS

How would you implement the Notifications Microservice using these? Be specific.

The Notifications Microservice could be implemented using Node.js and Express.js. Node.js would be used to create the API endpoints for the microservice, and Express.js would be used to create the routes and handle incoming requests. The microservice would also need to connect to the database in order to retrieve user information. Once the user information is retrieved, a notification could be sent using a library like Twilio or Sendgrid. Finally, the microservice would need to log the notification in the database.

Why did you choose twilio or sendgrid over FCM?

I chose Twilio or Sendgrid over FCM (Firebase Cloud Messaging) because they both provide a more robust set of features and capabilities than FCM. Twilio and Sendgrid both offer advanced features such as personalization, scheduling, segmentation, analytics, and more, allowing for more flexibility and customization when sending notifications. In addition, both are more reliable and have better uptime than FCM.

Can you define the api for the notifications service?

The Notifications Microservice would have two main API endpoints:

1. POST /notifications: This endpoint would be used to create and send a notification to a user. It would require the user’s ID, the content of the notification, and any other relevant data (e.g. expiration time, priority, etc.).

2. GET /notifications: This endpoint would be used to retrieve all notifications for a given user. It would require the user’s ID as a parameter and would return a list of notifications.

/END transcript

This is remarkable. Absolutely astounding.

  • md5crypto a year ago

    It's one thing to produce ideas; it's another to implement everything, including the CI/CD, authentication, GitHub Actions, and so on. Now, when some new unicorn comes along in 2036 promising a ChatGPT+-based 'solution generation' app that can produce all the code and deployment for a real revenue-generating application, then it's game over for all of us.

wittycardio a year ago

The amount of paranoia and fear-mongering over GPT is insane. Firstly, you don't get paid to churn out lines of code; you get paid to build, maintain, and own systems. Secondly, GPT is impressive, but in the end it is a statistical matching machine prone to constant errors. It's very good as a tool to help humans, but by its very nature it's a terrible tool for automation.

Compilers, high-level languages, and infinite free public libraries are far better tools for automating programming than GPT is. If your job is something like taking an English-language question from someone and writing a short script for them in a programming language, then I might be worried. But for the vast majority of programmers, that's not the case.

  • mirekrusin a year ago

    The people you're referring to are not talking about GPT as it is; they're talking about the rate at which it improves, extrapolated into the near future.

    • wittycardio a year ago

      No, I'm talking about the inherent nature of deep learning algorithms.

yrgulation a year ago

Well, it would appear that leet code is easily replaceable. I'm not even sure why people get paid to rewrite algorithms that can easily be imported from existing libraries. But actual development will take a while to replace. A developer does much more than write code, and good developers already know they should use off-the-shelf solutions for repetitive tasks. So at most this type of tooling will help, but it won't be anywhere near what its evangelists and the media claim it to be.

  • alfalfasprout a year ago

    If anything, this is proof that leetcode is a terrible test of whether someone is a good engineer. All it tests is whether you don't have a family and are willing to spend hundreds of hours "training for the test" (e.g. memorizing coding puzzles).

    • yrgulation a year ago

      A good memory indeed. Most of the leet code stuff is something I studied in high school. Since then, everything of that sort has been an import from a library followed by a function call; I forgot all of it and have never needed it again. Of course there are low-level programming tasks that need it, but I simply fail to understand the purpose of testing for it. Perhaps I'm in a different market altogether, because I've never felt the need to interview someone based on leet code, and I've met some very smart people. Who knows, maybe I'm wrong.