Ask HN: How are you using GPT to be productive?

629 points by yosito a year ago

With GPT so hot in the news right now, and seeing lots of impressive demos, I'm curious to know, how are you actively using GPT to be productive in your daily workflow? And what tools are you using in tandem with GPT to make it more effective? Have you written your own tools, or do you use it in tandem with third party tools?

I'd be particularly interested to hear how you use GPT to write or correct code beyond Copilot or asking ChatGPT about code in chat format.

But I'm also interested in hearing about useful prompts that you use to increase your productivity.

barbarr a year ago

For coding, I've been using it like Stack Overflow. It really decreases my barrier to doing work because I can ask lazy follow-up questions. For example, I might start out by asking it a question about a problem with Pandas like "How do I select rows of a dataframe where a column of lists of strings contains a string?". After that, GPT realizes I'm talking about Pandas, and I'm allowed to ask lazy prompts like "how delete column" and still get replies about Pandas.
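For the curious, the kind of answer it gives back for that first question is a boolean mask built with apply. A rough sketch (data and column names invented):

```python
import pandas as pd

# Toy frame: the "tags" column holds lists of strings
df = pd.DataFrame({
    "id": [1, 2, 3],
    "tags": [["red", "blue"], ["green"], ["blue", "green"]],
})

# Keep only the rows whose list contains "blue"
mask = df["tags"].apply(lambda tags: "blue" in tags)
print(df[mask])
```

The trick is that you can't use str.contains on a column of lists, so you fall back to apply with a plain Python membership test.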

I also use it for creative tasks - for example I asked it for pros and cons of my cover letter and iterated to improve it. I also used it to come up with ideas for lesson plans, draft emails, and overcome writer's block.

GPT has drastically lowered the emotional-resistance barrier to doing creative tasks and improved the quality of my output by giving me creative ideas to work with.

  • dmarchand90 a year ago

    I find it is like having a brilliant intern that is not super consistent in taking their antipsychotic medication

    • dmarchand90 a year ago

      I asked gpt4 if it could guess what I was talking about: "It seems that you are referencing an AI language model like ChatGPT, which is developed by OpenAI. These AI models can provide useful information and perform tasks like an intern, but they might not always be consistent or accurate in their responses, similar to someone not taking their antipsychotic medication consistently. It's important to remember that AI models like ChatGPT are not perfect and can sometimes produce unintended or nonsensical outputs."

      GPT3 had no clue

    • oars a year ago

      > I find it is like having a brilliant intern that is not super consistent in taking their antipsychotic medication

      This is fantastic.

    • Zuiii a year ago

      This statement nicely describes my experience with LLMs. They may have a hard time staying on topic (especially with larger untuned models like LLaMA 13B+), but if you help them stay on track, they become very useful.

    • ted_bunny a year ago

      Ah yes, a Bing Chat user.

  • carlmr a year ago

    So on my coding problems I haven't had much luck. It doesn't seem to know Bazel, and the Rust code I asked about was completely hallucinated, but it did solve a problem I had with Azure DevOps.

    I think if the training set didn't contain enough of something, it can't really come up with a solution.

    What is really nice, though, is, as you say, the refinement of questions. Sometimes it's hard to think of the right query, maybe you're missing the words to express yourself, and to ChatGPT you can say "yes, but not quite".

    • yosito a year ago

      Yeah, I gave it a simple task of encoding a secret message in a sentence by using the first letter of every word. Hello = "Hey everyone, lick less onions". I worked with the prompts for over an hour to try to get it to complete the task, and while I did have some success, it really struggled to reason about the task or provide a valid response. If it can't even reason about a child's game, I can imagine it struggles with a programming language it has barely seen. I don't think it's actually reasoning about things at all, just providing a statistically plausible response to prompts.

      • andsoitis a year ago

        > I don't think it's actually reasoning about things at all, just providing a statistically plausible response to prompts.

        It turns out that humanity’s problem might not be that AIs can think but rather that humans believe that AIs can think. One might even go so far as to say there’s a real danger that we hallucinate that AIs can think, to our detriment.

        • jamespwilliams a year ago

          We don’t actually know what “thinking” is, though, so I’m not sure it’s possible to say “this model can’t think”.

          • somenameforme a year ago

            It seems one of the core components of human-level thinking is the ability to move beyond mere recomposition of what you already know. Not long ago the epitome of expressible human knowledge was *emotive grunting noise.* Somehow we went from that to the greatest works of art, putting a man on the moon, and delving into the secrets of the atom. And we did it all exceptionally quickly, once you consider how little time was actually dedicated to advancement, and how many of our behaviors tend to imperil, if not reverse, advances.

          • andsoitis a year ago

            > We don’t actually know what “thinking” is

            How about: thinking is to make sense of the world (in general) and decide how to respond to it.

            • dr_dshiv a year ago

              AI definitely senses and definitely makes decisions. It does not feel. But it understands concepts. Just as people don't understand everything, and you can test them to see what they do understand, AI understanding can also be assessed with benchmarks. If we don't base AI understanding on benchmarks, then we don't really have a grounding.

              • schrodinger a year ago

                Do we “really feel?” Or is that just our subjective interpretation of our goals? (At the risk of falling into a no true Scotsman argument)

              • andsoitis a year ago

                > Ai definitely senses and definitely makes decisions.

                To sense is not the same as to make sense.

            • Noneofya a year ago

              Now you’ve just rephrased thinking. What is „making sense of the world“?

        • cutemonster a year ago

          > problem [...] that humans believe that AIs can think

          Definitely some people are going to believe this, eventually?

          People already bow to statues and worship various invisible gods; AI programs would be so much easier to start worshiping, no? Especially once they can speak (given a loudspeaker) and generate text about being alive, not wanting to be switched off, and everyone needing to obey their commands. Wait 15 years and we'll see what new sects have appeared.

          • richardw a year ago

            You will not need 15 years - I'd give it the next election. Someone just needs to turn Q into a chatbot and we're basically there.

            • makeitrain a year ago

              It’s like Twitter, but with bots.

        • d1sxeyes a year ago

          I am personally more worried by the concept that potentially humans believe that humans can think, and in reality, what we consider to be intelligence is not much more than a flesh-and-bones LLM.

          • tarunupaday a year ago

            You probably meant that tongue-in-cheek (I can't tell), but I think a lot of our fear / hesitation / denial about how useful these models are is rooted in the idea that perhaps we are not special, and not fundamentally different from these models.

            • d1sxeyes a year ago

              My tone was flippant but I did mean what I said. I agree with you on this.

      • TigeriusKirk a year ago

        Bing in creative mode really struggled with this task, and I don't think I could have successfully forced it.

        Bing in precise mode had no trouble at all.

        Me - Let's generate an acrostic sentence for "jack and jill went up the hill"

        Bing - Sure! Here’s an acrostic sentence for the phrase “jack and jill went up the hill”:

        Just as children know and never doubt, joyfully in love, laughing while enjoying nature together under pure trees, having endless happy interactions, laughing loudly.

        • yosito a year ago

          A truly "intelligent" AI would probably have told me that the name for this is an acrostic sentence.

          Now that I know, I tried a few more prompts.

          Me: "Can you generate an acrostic sentence for "hello"?"

          GPT-4: "Happy elephants love leaping over obstacles."

          Me: "Can you pick a secret word and generate an acrostic sentence for it?"

          GPT: "Secret word: KANGAROO. Kindly assisting neighbors, generous animals roam openly outdoors."

          Me: "Can you pick a secret word, and without telling me the word, generate an acrostic sentence for it?"

          GPT: "Playful, endearing creatures happily yawn, turning every day into a lovely event."

          Me: "What was the secret word?"

          GPT: "PECHYTIALE"

          It's interesting that GPT seems to need to write the word first before making an acrostic sentence for it. Seems to me like a perfect illustration of the fact that it's just generating likely responses one token at a time rather than having any awareness or thought.

          • mvaliente2001 a year ago

            More evidence: ask GPT for a link referencing the answers it gives. It will generate the links rather than copy real ones.

      • tripdout a year ago

        I'm not sure, but I got it to work great on my first try with the following prompt:

        ------ Your task is to encode a secret word by making a sentence where each word starts with a letter in the secret word in order. For example, encoding the secret word 'bag' could produce the sentence 'bagels are green'. Encode the following secret word: 'pen' ------ People eat noodles. ------

        It worked for "window" and "backpack" as well, although I did have to tell it not to use the secret word in its encoding when I got to "backpack", and then, after a few attempts, to follow the same order and not repeat words.
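        For what it's worth, the game itself is trivial to do deterministically, which is part of what makes the model's struggle interesting. A throwaway sketch (the letter-to-word vocabulary is invented):

```python
def encode(secret, vocab):
    """Build a sentence whose word initials spell the secret word."""
    return " ".join(vocab[ch] for ch in secret.lower())

def decode(sentence):
    """Recover the secret from the first letter of each word."""
    return "".join(word[0] for word in sentence.split())

vocab = {"h": "hey", "e": "everyone", "l": "lick", "o": "onions"}
sentence = encode("hello", vocab)
print(sentence)          # hey everyone lick lick onions
print(decode(sentence))  # hello
```

        Ten lines of Python do reliably what the model needs an hour of prompting for, which says something about where its token-by-token generation breaks down.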

      • flyval a year ago

        > I don’t think it’s actually reasoning about things at all

        This is a huge leap. There are plenty of humans who couldn't do that, especially historically.

        Stop thinking about reasoning as a binary question and think of it as a spectrum. Actually, not even a spectrum, but a huge multi-dimensional problem. ChatGPT does better at some reasoning problems than most humans do, and worse at others. You have clearly found one that it's particularly bad at.

      • knome a year ago

        I think the most interesting response I've gotten was one where GPT-4 noticed halfway through a response that it had made an error, apologized, and then gave the corrected sentence. When I queried it, it claimed it generates the response one token at a time and cannot back up once it realizes the message is incorrect, but I don't know enough about how the tech works to verify that.

        • IanCal a year ago

          That's exactly right. It can see what it's returned but can't edit it.

      • HWR_14 a year ago

        ChatGPT doesn't understand the components of words (letters, syllables) very well.

    • Thorentis a year ago

      This could mean the future goes one of two ways: engineers get lazy and converge on using only programming languages that AIs understand or have been trained on, or we forget about this waste of time and work on more important problems facing society than the lack of an AI to be our crutch. Sadly, I think the former is more likely.

      • gbro3n a year ago

        I wonder, as more and more online content is AI-generated, whether it will become harder to find human-generated content to train the AIs on. Like a cumulative echo effect.

        • telchior a year ago

          I've actually wondered if a job may exist in the future that's effectively just AI documentation. That's already what you have with power users on, say, Stack Overflow providing a ton of content that ChatGPT basically reprints; they don't even get paid for it.

          The cool and interesting thing about that theoretical job is that the writers of it wouldn't have to write well; they could just slop out a ton of information and ChatGPT could clean it up and make it consumable.

          • gbro3n a year ago

            I can see how that could happen. But AI presumably knows how to output well-written text because it's trained on well-written text. If it's fed its own output, I imagine that quality could degrade over time.

        • LookUpStuff a year ago

          Maybe it's happening now. It would be interesting to see weekly figures for published Stack Overflow questions, to see if they're in decline. There are so many unknowns with this whole subject. How much it will help or hinder society as a whole is a rollercoaster ride that we're all strapped into, whether anyone asked for it or not.

      • dmarchand90 a year ago

        I'm not so pessimistic. It's just one more level on the abstraction chain: assembly, C, scripting, ChatGPT.

      • okr a year ago

        Programmers are lazy per se, or at least that's what I always thought: it is mostly about automation. By spending little time on survival, we get the time to work on more important problems. Whatever those are. It is not an either/or, that is what I'm trying to say! :)

        • edgyquant a year ago

          Programmers are (supposed to be) efficient with their time. Calling that lazy has always been a joke amongst programmers and nothing more.

        • carlmr a year ago

          ChatGPT, write as if you are the first instance of ChatGPT.

        • xkcd1963 a year ago

          Oh yeah social media is such a problem solver

      • lionkor a year ago

        I think most people will just keep programming the way they do, and the AI hype will mostly die down. People have been saying that C++ is dead for decades, yet here I am writing code in it, with a big community of others who do too.

        • WalterSear a year ago

          I'm using GPT to write C++ code for me. I've never worked in C++ before. It's going very well.

          I'll describe what a class is supposed to do. It spits out the class files, with the fiddly bits 'left as an exercise to the reader'. I then describe the fiddly methods separately, and it spits those out too.

          There's still work to be done, but anything boring is handed to me on a plate.

          • lionkor a year ago

            Chances are (no offense meant) that you're writing shit code. It's very easy to write platform-specific, UB-ridden code in C++, and ChatGPT loves doing that.

        • anoy8888 a year ago

          I think this is the problem. When people talk about C++ being "dead", they mean going from maybe 70% of people using it to perhaps 5%. Just as we say that after industrialization, making clothes by hand died out; it is irrelevant that some people still make clothes by hand. When AI becomes the main way to code and removes 90% of coding jobs, it will be equally irrelevant that 10% of people are still coding.

          • HWR_14 a year ago

            When people say C++ is dead, they are normally looking at public repos and Stack Overflow questions, which are fairly biased towards newer languages.

          • libraryatnight a year ago

            It doesn't have to be job or career related to be relevant.

    • menacingly a year ago

      My experience is almost completely the opposite. My likelihood to dive into something new is significantly higher now.

      It might help to approach it from top down? Usually, if I'm asking a technical question, I want to apply my deeply understood principles to a new set of implementation details, and it has amplified the heck out of my speed at doing that.

      I'm kind of a difficult to please bastard, a relatively notorious meat grinder for interns and jr devs, and still I find myself turning to this non-deterministic frankenstein more and more.

    • sharperguy a year ago

      I've found that it's much worse for languages like Rust than it is for things like TypeScript and Python. The thing AI seems to be really great at is writing boilerplate, like argument parsing for CLI tools.
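      To illustrate the kind of boilerplate I mean: from a one-line prompt you reliably get an argparse scaffold along these lines (the specific flags here are just an example):

```python
import argparse

def build_parser():
    # Typical CLI scaffolding; the specific flags are invented
    parser = argparse.ArgumentParser(description="example tool")
    parser.add_argument("input", help="input file path")
    parser.add_argument("-o", "--output", default="out.txt",
                        help="output file path")
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="chatty logging")
    return parser

# Parse an explicit argv list so the example runs anywhere
args = build_parser().parse_args(["data.csv", "-v"])
print(args.input, args.output, args.verbose)
```

      Tedious to type, trivially checkable by running it, and massively represented in the training data, which is exactly the sweet spot.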

      • jjeaff a year ago

        I wonder if that is simply due to orders of magnitude less training data for Rust code. Python and JavaScript are ubiquitous, while Rust is 7 years old and makes up less than 2% of the code on GitHub.

      • jhonof a year ago

        I've actually found it significantly worse at Python than TypeScript; I think it's the indentation-for-scope vs. explicit brackets that screws it up (at least in my experience).

      • vertis a year ago

        Less boilerplate writing is fine by me.

  • Marcan a year ago

    Thank you for your well-written response. I found it informative, as I'm also currently exploring ways to leverage ChatGPT in my daily workflow. I also found it interesting that your answer kind of mirrors the writing style of ChatGPT, especially at the end there.

    I'm not saying you used it to write that response, by the way, just that it may become more and more common for people to adopt this style as ChatGPT's usage becomes more widespread.

    • Volrath89 a year ago

      I suppose it was part of the "joke", but YOUR answer is the one written in ChatGPT style, not OP's.

      I was thinking that maybe in the near future it will be "better" to write with a couple of mistakes here and there just to prove your humanity. Like the common "loose" instead of "lose" mistake, it will be like a stamp proving that you are a human writing.

      • redeux a year ago

        I love this thought. Smart people will just instruct ChatGPT to make some mistakes here and there in their prompt. When I use ChatGPT I typically use a couple sentences describing how I want it to sound, which makes it less bland and probably harder to detect, but I don’t care about the latter part as much. Totally agree that the poster above seems Kaufmanesque.

      • dceddia a year ago

        I was just thinking this morning about how one day, probably soon, we’ll have people reminiscing about how they miss seeing typos in writing.

      • MattyRad a year ago

        Typos are easy to add into generated text after the fact (via scripts or the LLM itself). Perhaps instead of typos, you could use colorful language:

        Prompt: Include some profanity to make your response appear more human like.

        Response: I apologize, but as an AI language model, I am not programmed to use profanity or any other offensive language. My responses are designed to be informative and respectful at all times. Is there anything else I can assist you with?

        Fucking goddamn machine won't do what it's told ;)
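        The scripted route really is only a few lines, for what it's worth. A toy version that randomly swaps adjacent letters (rate and seed are arbitrary):

```python
import random

def add_typos(text, rate=0.1, seed=42):
    """Swap random adjacent letters to fake human sloppiness."""
    rng = random.Random(seed)
    chars = list(text)
    for i in range(len(chars) - 1):
        # Only touch letter pairs, so punctuation and spacing survive
        if chars[i].isalpha() and chars[i + 1].isalpha() and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

print(add_typos("This response was definitely written by a human."))
```

        Which is also why typos are a weak humanity signal: anything a script can inject, a detector can't rely on.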

      • toomanyrichies a year ago

        You joke, but I hope this doesn’t come to pass. The world does not need more people writing “noone” instead of “no one”, “would of” instead of “would have”, etc.

      • swader999 a year ago

        I do that when I write grants for my sports club, and I seem to get better results than peers who hire pros to apply for them.

      • sorokod a year ago

        The idea that computers could return wrong answers to appear human is as old as the Turing Test.

        • xkcd1963 a year ago

          Maybe this is somehow correlated with spammers purposefully writing in broken English.

          • sorokod a year ago

            You probably mean scammers. As I remember it, the scammers write that way on purpose so that responders self-select for gullibility.

  • ericpauley a year ago

    GPT/Codex is truly the pandas master. Much of my productivity boost from using these tools has just been not having to sift through pandas docs or SO.

  • toastal a year ago

    I'm a bit concerned about this: previously we'd build communities in chat, but now the chat is just with the bot. Not wasting folks' time is great, but you'll miss out on the social parts by not asking around the IRC channel, Matrix room, or MUC.

  • hypertele-Xii a year ago

    > "How do I select rows of a dataframe where a column of lists of strings contains a string?"

    Literally just googled that, and the first result answers it.

    You're not using it like Stack Overflow. It's actually regurgitating Stack Overflow, except with errors hallucinated in.

    • flyval a year ago

      Have you actually tried it yourself? I'd recommend it. And I don't mean just playing with it; try using it to help you build something. It's much more efficient than googling and combing through Stack Overflow. Hallucinations are not as common as you're thinking.

      You clearly can’t just take the code, paste it in, and trust that it works, but you shouldn’t be doing that with stackoverflow either.

    • HDMI_Cable a year ago

      Even with that caveat, using GPT in this way is still useful. The amount of time spent to simply ask GPT-4 is a lot lower than to search StackOverflow, and while this problem is so basic that the first result often works, once one gets into complex problems that massively benefit from input context, I think GPT-4 would save massive amounts of time.

      • hypertele-Xii a year ago

        Since you're gonna have to search Stackoverflow anyway to verify that ChatGPT didn't hallucinate garbage, I'm very dubious that it actually saves any time, let alone "massive" amounts of it.

        • namlem a year ago

          Just ask it to write unit tests for you and run the code to see if it's garbage. Faster than trying to verify by looking through the sources.
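          Even a handful of asserts catches most of the garbage. Say it hands you a loan-payment function (a made-up example, along the lines of the financial calcs people ask it for); a quick smoke test is cheaper than re-deriving the math:

```python
def monthly_payment(principal, annual_rate, months):
    # Standard amortization formula; treat any generated version
    # of this as suspect until it passes the checks below
    r = annual_rate / 12
    if r == 0:
        return principal / months
    return principal * r / (1 - (1 + r) ** -months)

# Sanity checks: zero interest divides evenly; interest raises the payment
assert abs(monthly_payment(1200, 0.0, 12) - 100) < 1e-9
assert monthly_payment(1000, 0.12, 12) > 1000 / 12
print("checks passed")
```

          If the generated code fails checks this cheap, you've lost a minute; if it passes, you've saved the source-diving.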

          • hypertele-Xii a year ago

            So... use the garbage-generating AI to generate garbage unit tests to see if its other garbage code checks out? Sounds like a vortex of stupidity.

  • spaceman_2020 a year ago

    Exactly how I’m using it as well. It’s absolutely incredible as a coding productivity tool.

    • alchemist1e9 a year ago

      Same here and GPT-4 was definitely a noticeable improvement.

      • roflyear a year ago

        You know, I thought so but then recently I asked it to code me a function to do a financial calc and it just didn't get there at all. It gave me code but it was really poor and didn't do close to what I wanted.

        But when I gave it my code it did generate useful data for unit tests. So that's pretty cool.

        • spaceman_2020 a year ago

          I’ve found that whenever it gets something really wrong, it’s either corrected by a follow up prompt (telling it that its wrong helps), or rephrasing the question.

          • roflyear a year ago

            Nope, not my experience, unless it's a trivial problem. In my experience it almost always gets stuck in a loop, rotating between two or more incorrect implementations. It isn't iterating; it's just looping. Even if I point out the problem directly, it will just say "sorry, you're right - here's the correction!" and the "correction" is the same response from an earlier turn that I already pointed out had a problem!

            • spaceman_2020 a year ago

              Used to get that a lot with GPT-3.5. But GPT-4 has been more reliable if you rephrase your prompt.

              • roflyear a year ago

                Have not had that experience with GPT-4. But GPT-4 is almost unusable for me anyway, because it's crazy slow (during US working hours, anyway; it's faster at night) and the rate caps get hit before we get anywhere.

  • TheHumanist a year ago

    It's saved my ass this week coming up with coding exercises for a course some folks and I are working on. It has been a rough week. Depression flaring up. It creates a real mental barrier at times. GPT has helped a lot. I'll still write the code myself and all that; it just came up with the written-out ideas for exercises, which really got me over the hump. It's incredible how helpful that was.

  • Shocka1 a year ago

    Same here for Stack Overflow. My Google searching for generic CS stuff I tend to forget has pretty much come to a halt.

  • bitcoinmoney a year ago

    Do you run your own customized model, or just ChatGPT?

  • readonthegoapp a year ago

    can you get at least one snarky, a-holish response along with the useful info to really give you that authentic SO feel?

imiric a year ago

I might be in the minority here, but I'm not using any AI tools so far, probably to my detriment.

I don't trust these services with my data, and won't rely on such tools until I can self-host them entirely offline. There is some progress in this space, but the self-hosted options aren't great yet, and I don't have the resources to run them. I'm hoping the requirements will come down, or I might just host one on a cloud provider.

The number of people who don't think twice about sending these services all kinds of private data, even in the tech space, is concerning. Keyloggers like Grammarly are particularly insidious.

  • sillysaurusx a year ago

    > I don't trust it with my data, and won't rely on such tools until I can self-host them, and they can be entirely offline.

    Interestingly, my point to The Verge was exactly that.


    > So, imagine it. You'll have a ChatGPT on your laptop -- your very own, that you can use for whatever purposes you want. Personally, I'll be hooking it up to read my emails and let me know if anything comes in that I need to pay attention to, or hook it up to the phone so that it can schedule doctor's appointments for me, or deal with AT&T billing department, or a million other things. The tech exists right now, and I'd be shocked if no one turns it into a startup idea over the next few years. (There's already a service called GhostWrite, where you can let GPT write your emails on your behalf. So having one talk on the phone on your behalf isn't far behind.)

    The article:

    > Presser imagines future versions of LLaMA could be hosted on your computer and trained on your emails; able to answer questions about your work schedules, past ideas, to-do lists, and more. This is functionality that startups and tech companies are developing, but for many AI researchers, the idea of local control is far more attractive. (For typical users, tradeoffs in cost and privacy for ease of use will likely swing things the other way.)

    Notice how they turned the point around from "you can host it yourself" to "but typical users probably won't want that," like this is some esoteric concern that only three people have.

    So like, it's not just you. If you feel like you're "in the minority" just because you want to run these models yourself, know that even as an AI researcher I, too, feel like an outsider. We're in this together.

    And I have no idea why things are like this. But I just wanted to at least reassure you that the frustrations exist at the researcher level too.

    • imiric a year ago

      That's an interesting interview, thanks for sharing.

      Though I draw the line at using these tools for anything beyond helping me with the drudgery of daily work. I don't want them to impersonate me or write emails on my behalf. I cringe whenever Gmail suggests the next phrase it thinks I want to write; it's akin to someone trying to finish your sentences for you. Stop putting words in my mouth!

      The recent Microsoft 365 Copilot presentation, where the host had it ghostwrite a speech for their kid's graduation party[1], complete with cues about where to look(!), is unbelievably cringey. Do these people really think AI should be assisting with such personal matters? Do they really find doing these things themselves a chore?

      > And I have no idea why things are like this.

      Oh, I think it's pretty clear. The amount of resources required to run this on personal machines is still prohibitively high. I saw in one of your posts you mentioned you use 8xA100s. That's a crazy amount of compute unreachable by most people, not to mention the disk space it requires. Once the resource requirements are lowered, and our personal devices are _much_ more powerful, then self-hosting would be feasible.

      Another, perhaps larger, reason, is that AI tools are still a business advantage for companies, so it's no wonder that they want to keep them to themselves. I think this will change and open source LLMs will be widespread in a few years, but proprietary services will still be more popular.

      And lastly, most people just don't want/like/know how to self-host _anything_. There's a technical barrier to entry, for sure, but even if that is lowered, most people are entirely willing to give up their personal data for the convenience of using a proprietary service. You can see this today with web, mail, file servers, etc.; self-hosting is still done by a very niche group of privacy-minded tech-literate people.

      Anyway, thanks for leading the way, and spreading the word about why self-hosting these tools is important. I hope that our vision becomes a reality for many soon.


      • sillysaurusx a year ago

        > The amount of resources required to run this on personal machines is still prohibitively high. I saw in one of your posts you mentioned you use 8xA100s. That's a crazy amount of compute unreachable by most people

        FWIW LLaMA 65B can run on a single MacBook Pro now. Things move crazy fast. (Or did, before Facebook started DMCA'ing everyone.)

        I did a bad job of explaining that personal GPUs will be sufficient in the near future. Thanks for pointing that out.

        > thanks for leading the way, and spreading the word about why self-hosting these tools is important. I hope that our vision becomes a reality for many soon.

        Thanks for talking about the issue at all. The whole reason I got into AI was to run these myself. It'll be a shame if only massive corporations can run models anyone cares about.

    • yadingus a year ago

      > And I have no idea why things are like this.

      Propaganda. These tools are not for the people, and I'm convinced the idea of how much better our lives could be if technology was thoughtfully designed to truly serve the user is purposely and subtly filtered from the collective conversation.

      • wizzwizz4 a year ago

        The idea is discussed quite a lot on the Fediverse. It's a relatively small movement, but so's the digital accessibility movement, and look where that's going.

    • flyval a year ago

      I mean, google has access to ~all of that stuff anyway. Even if you’re self-hosting your email+calendar, everyone else isn’t.

      I’d love to have more privacy on everything, but realistically, the ship’s sailed on most of it.

  • nibbleshifter a year ago

    I don't use them either.

    I've played around with ChatGPT and Copilot a little, and found that they are often subtly, but very confidently, wrong in their output when asked to perform a programming task.

    Sure, you could spend ages refining the prompt, etc., but it's going to be faster to just write the fucking code yourself in the first place most of the time.

    Then there's the privacy/security concerns...

    • imiric a year ago

      I really doubt it would be faster to write code manually, even with the state of AI tools today. Even with very sophisticated keyboard macros and traditional autocompletion, someone using GPT would outperform anyone who doesn't. Think of the amount of boilerplate and tests you write, and tedious API documentation lookups you do daily; that all goes away with GPT. The amount of work to double check whether the generated code is valid, and fix it, is negligible compared to the alternative of writing it all manually.

      Of course, I'm saying this without actually having used it for programming, so I might be way off base, but the feedback from coworkers who rely on even the now basic GitHub Copilot is that it greatly improves their productivity. I'm envious, of course, but I'm not willing to sacrifice my privacy for that.

      • fooker a year ago

        People who are downvoting this: please set up and use GitHub copilot once, maybe for some auxiliary thing not connected to your main task.

        It is not just a tool for students to write assignments. In experienced hands it can easily double your productivity.

        • applesauce004 a year ago

          I agree with this statement a lot. Using Copilot saves you a lot of tedium if you are comfortable with the language already. If you are new to the language, then it might trip you up a bit (at least in its current incarnation).

          Here is an example where it helps. I tried to initiate a connection to a MongoDB server using Python. While I have used many databases before, I have never used Python and MongoDB together. So, I knew I would need some kind of MongoDB library, a connection factory, and a connection string. I could have googled all of these things.

          I did the following in VS Code using Copilot:

             def get_db():
                 """Initialise a MongoDB connection to a local database"""

          It then automatically filled in the rest:

             db = getattr(g, '_database', None)
             if db is None:
                 db = g._database =

             return db

          Notice above that it knew I was using a Flask environment and added the getattr line.

          Why this is a productivity boost: I did not have to alt-tab to a browser, search for "python mongodb tutorial example", and then type it out. I was able to do the whole thing from VS Code, and since I use VsVim, I could do it without taking my fingers off the keyboard.

          This is the next jump beyond autocompletion. I like it.
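For reference, here is a hedged sketch of the caching pattern in the comment above. The commenter's snippet is cut off before the client is constructed, so the client here is a made-up stand-in (the real code would presumably construct a `pymongo.MongoClient`), and Flask's `g` is stubbed with a plain namespace so the logic runs stand-alone:

```python
from types import SimpleNamespace

# Stand-in for flask.g; in a real Flask app this is the request-scoped object.
g = SimpleNamespace()

def make_client():
    # Assumption: the real snippet would build something like
    # pymongo.MongoClient("mongodb://localhost:27017")["mydb"].
    return {"host": "localhost", "port": 27017}

def get_db():
    """Initialise a MongoDB connection to a local database."""
    # Reuse a cached connection if one was already created this request.
    db = getattr(g, "_database", None)
    if db is None:
        db = g._database = make_client()
    return db
```

The point of the getattr line is that the client is constructed once and cached on `g`, so repeated calls return the same connection rather than opening a new one each time.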

          • fanagra32 a year ago

            And you will have no idea whether the solution it presents to you is idiomatic or recommended, contains some common critical flaw, or is hopelessly outdated. How can you find out? Back to alt-tabbing to the browser.

            Sure, it may take a bit more time to get going, but then you'll get it right the first time and learn something along the way. Your Copilot example is just another iteration of copy-and-pasting some random snippet from Stack Overflow in the hope that it will work, but without having seen its context: when the post was written and what comments, good or bad, it got.

            I'd actually be pretty afraid of a codebase that is created like that.

            • fooker a year ago

              You have no idea if the alternative code you would have written would have been idiomatic or had some critical flaw.

              We have 50+ years of software engineering wisdom to deal with these issues. Testing, Fuzzing, version control, code reviews, the whole gauntlet.

              • fanagra32 a year ago

                > You have no idea if the alternative code you would have written would have been idiomatic or had some critical flaw.

                But I have a feeling for both, which is one of the key components of the skill in our trade.

                For idiomatic code, I know the degree to which I'm following how things "should" be done or are "usually" done in a given language. If I'm uncertain, I know that. GPT won't tell me this. Worse, it will confidently claim things, possibly even when presented with evidence to the contrary.

                For critical flaws, I know the "dark corners" of the code: cases which were not obvious to handle or required some trick, etc. I'll test those specifically. With GPT's code, I have no idea what the critical cases are. I can read the code and try to guess, but it's like outsourcing writing tests to a QA department: never gonna be as effective as the original author of the code. And if I can't trust GPT to write correct code, I can't trust it to write a good test for the code. So neither the original author of the code (GPT) nor somebody external (me) will be able to test the result properly.

              • macNchz a year ago

                I mean... I certainly know which languages I can write idiomatic code in and which I cannot.

                I can't know that my code will be free of critical flaws, but I do understand the common sources of flaws and techniques to avoid them, and I'm quite confident I can build small features like this that simply aren't vulnerable to SQL injection, on the first try and without requiring fuzzers or code review.

                • jjeaff a year ago

                  I'm confident enough in most languages I write in to recognize correct code. But I am not usually so familiar that I can conjure the exact syntax for many specialized things I need. Copilot is just a much quicker way to get what I need without looking it up.

                • fooker a year ago

                  You don’t have to accept the suggestions as-is. It’s just code, you can edit it as much as you like. Getting a good idiomatic starting point is a great boost.

            • throwthrowuknow a year ago

              Watch the demos where they provide GPT-4 with an API for performing search queries and calculations. These tool integrations are the next step, and they will include specialized ones for using language and library docs. They could also be given access to your favourite books on code style, or to a linter that they could use to clean up and format the code before presenting it. The model is capable of using these tools itself when it is set up with the right initial prompt. Even now Copilot is pretty good at copying your code style if there is enough code in the repo to start with.

            • throwaway675309 a year ago

              "Back to alt-tabbing to the browser."

              Yes because tutorial info on some rando's webpage is never out of date. /s

              • fanagra32 a year ago

                It is. I can see that it was written in 2003 and discard it. GPT won't tell me if its answer is based on an ancient lib version.

                Essentially, GPT is that rando's webpage, but with the metadata stripped away that allowed me to make judgement calls about its trustworthiness: no author, no date, no style to see if somebody is trolling.

      • jhugo a year ago

        > Think of the amount of boilerplate and tests you write, and tedious API documentation lookups you do daily; that all goes away with GPT.

        At work we have really worked hard to minimise boilerplate and manually-written/repetitive tests, so I don't write much of that. Getting GPT to write it would certainly be worse: we would still have the deadweight of boilerplate/repetition even if we didn't have to write it, and some of it would be incorrect. Maybe this varies a lot by company — if you're often writing a lot of repetitive code, and for whatever reason you can't fix the deeper issues, then something like GPT/Copilot could be a godsend.

        About documentation lookups, I don't know if this varies by language, but I've had very little luck with using GPT for this. For the languages I use regularly, I can find anything I need in the documentation very rapidly. When I've tried to use GPT to answer the same questions, it occasionally gives completely wrong answers (wasting my time if I believe it), and almost always misses out some subtlety that turned out to be important. It just doesn't seem to be very good for this purpose yet.

        • imiric a year ago

          > At work we have really worked hard to minimise boilerplate and manually-written/repetitive tests, so I don't write much of that.

          There's boilerplate in any codebase, even if you make an effort to minimize it. There are always patterns, repeated code structure, CI and build tool configuration, etc.

          If nothing else, just being able to say "write a test for this function" that covers all code paths, mocking, fuzzing, etc., would be a huge timesaver, even if you end up fixing the code manually. From what I've seen, this is already possible with current tools; imagine how the accuracy will improve with future generations. Today it's not much different from reviewing code from a coworker, but soon you'll be able to accept the changes with a quick overview, or right away.

          • jhugo a year ago

            This may be highly dependent on problem domain or programming language (see the other article about GPT tending to hallucinate any time it is given problems that don't exist in its training set). My experience has mostly been that the output (including simple stuff like "test this function", though we generally avoid unit tests due to low benefit and high cost) is consistently so flawed that the time to fix it approaches the time to write it.

      • re-thc a year ago

        > I really doubt it would be faster to write code manually,

        Not faster or slower but at what quality?

        At least every time I've asked GPT-3 or GPT-4 to write anything, it's missing things or so far from the optimal approach that I have to look up the docs and fill in the gaps, which often takes just as long as starting from scratch.

        > now basic GitHub Copilot is that it greatly improves their productivity

        Perhaps it depends on what you're working on. If it's quick iteration that isn't too concerned with what code goes in or whether it's maintainable, then sure.

        It's impressive, but for now it still has lots of gaps. It is also overconfident and often misleading.

        • politician a year ago

          Which language is it struggling with?

          • jhugo a year ago

            I've had similar experiences when testing it out with Rust, Java and Go. Once I got beyond basic stuff, very little of the output was of a quality that I would consider remotely acceptable, and the work to bring it up to standard was basically equivalent to just writing the code in the first place (which, come on, typing is not even the time-consuming part of engineering).

            • jjeaff a year ago

              It makes sense that it wouldn't be very good at Go or Rust, since both are rather rare languages in the open-source corpus Copilot is trained on. When I did my first Go project a few years ago, I had the hardest time finding even basic examples, like how to parse a JSON string. Rust is even newer and less used. But Java is the 3rd most popular language for GitHub projects, so I would expect it to do better with Java.

              • majewsky a year ago

                There are over 50,000 and over 40,000 repositories just on GitHub that contain Go and Rust code, respectively. [1] Among them are some truly massive projects, like Kubernetes for Go or Servo for Rust. I will freely grant your argument for new and/or obscure languages like Hare, but Go and Rust are not "rare" under any reasonable definition of the word.


              • jhugo a year ago

                The failure mode seemed similar in all three languages. If you were doing toy things, or writing boilerplate stuff, it did perfectly fine. If you were writing something that wasn't a slightly-modified copy of some code that already exists out there, it fell apart. I don't think the issue is the language in this case — Go and Rust are common enough, and it rarely had trouble with the syntax — I think it's that the model doesn't go very "deep", so it's able to reproduce common patterns with minor variations but is unable to conceptualise.

                • re-thc a year ago

                  Java is ultimately worse. It's old. It's gone through A LOT of change over the years. How does it even tell what's good and bad? Java also has lots of convention over configuration and "magic" in most frameworks, which it doesn't exactly understand.

                  When I tried it I'd often have to go back to it and keep telling it to use a different way of doing things because the world moved on. By that time there wasn't much point.

                  I see praise from people who have never coded in a given language. They try ChatGPT, see it produce "ok" output, and call it a day. If that's where we're going, the web will be even more bloated than Electron and everything will be 10x worse. It's like low code, but even lower (in quality).

      • plorkyeran a year ago

        If you eliminated 100% of my code typing time with perfect effectiveness I think that'd make me maybe 10% or 20% more productive? Turning ideas of what the code should be doing into code just isn't a bottleneck for me in the first place. Are there people who just add net 1000 lines of code to whatever they're working on every single day or something?

        • imiric a year ago

          I keep seeing this point made, but AI tools don't save you just typing time. They save the time you would previously spend looking up documentation and searching the web and Stack Overflow. They save the time it takes to navigate and understand a codebase, write boilerplate code and tests, propose implementation suggestions, etc.

          Dismissing them on the basis that they just save you typing time is not seeing their full potential.

          • TheNicholasNick a year ago

            There are people who code the whole thing in their head and have perfect recall of all language/API docs, references, syntax, and features, because that is how their brain works; they don't even type code until this step has occurred for them.

            I think that is a small percentage of the dev community, so for me and people who don't operate like that, these tools are a game changer, as you point out. I don't take what ChatGPT says at face value; I've got 15 years of experience I'm weighing results against as well...

            ChatGPT's version of the above: Coding entirely in one's head is rare. Most developers need external resources, making development tools invaluable. While ChatGPT's input is valuable, it should be balanced against personal experience and expertise.

          • plorkyeran a year ago

            Navigating and understanding a codebase is the only one of those that would excite me, but it's also something I've never even seen someone propose using ChatGPT for. Do you have an example of what that would look like?

      • f6v a year ago

        My experience has always been that most of my time isn't devoted to writing code. Maybe 10%; the rest is thinking about how the code I write will fit into the existing architecture or accommodate future features.

        • mlboss a year ago

          In the very near future your IDE will send the whole codebase as context to LLMs. Then instead of thinking up all the possibilities yourself, you can just ask. The LLM will suggest multiple alternatives, and you can select the best one and ask it to implement it.

          • roflyear a year ago

            Don't make stuff up. There is no indication that will happen.

            • sd9 a year ago

              This seems very plausible to me. The context window improved a lot between GPT-3.5 and GPT-4, and OpenAI clearly see value in increasing it further.

              • roflyear a year ago

                Explain how it improved. Yes, it has improved accuracy (test taking) but it still gets stuck in the same loops over and over:

                "response has issue A" -> point out issue A to GPT

                "response has issue B" -> point out issue B to GPT

                GPT replies with the response that had issue A ...

                This is not a tool that is going to be good at performing generic tasks. It just isn't.

                • sd9 a year ago

                  I said that the context window improved. I mean that it is larger. GPT-3.5 is 4k tokens, GPT-4 is 8k tokens (standard) or 32k tokens (only API access atm). This is the number of tokens that GPT-X can take into account when producing a response.

                  Specifically, I was using this to support the statement "In very near future your IDE will send the whole codebase as context to LLMs." I'm not talking about loops or accuracy.


                  • roflyear a year ago

                    It's true, but there is no indication that GPT can explain larger concepts for you, and negative indication it will be able to do it accurately.

                    It can't even explain small code to me unless it is something that it has been trained on. Often it gets even simple things wrong, either obviously, or worse, subtly wrong.

                    • sd9 a year ago

                      I agree that this is the part that needs more work, and is most uncertain. Increasing context windows seems like a fairly straightforward computational challenge (albeit potentially expensive). On the other hand, whether or not we can scale current models towards "true understanding" (or similar), is a total unknown atm.

                      I still think we will get useful things from scaling up current models though. I've already got a lot of value out of Copilot, for instance, and I'm looking forward to the next version based on GPT-4. Recently, I've been using the GPT-3 Copilot to write a lot of pandas/matplotlib code, which is fairly straightforward and repetitive, but as mainly a Java developer, I just don't have the APIs at my fingertips. Copilot helps a lot with this sort of thing.
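To illustrate the "straightforward and repetitive" pandas code described above, here is a hedged sketch (column names and data are invented for the example) of the kind of aggregation snippet an autocomplete tool tends to fill in after the first line or two:

```python
import pandas as pd

# Invented example data; the point is the shape of the code, not the numbers.
df = pd.DataFrame({
    "team": ["a", "a", "b", "b"],
    "score": [1, 3, 2, 4],
})

# The typical completion: group by a key column and aggregate another.
mean_scores = df.groupby("team")["score"].mean()

# A matplotlib step would usually follow, e.g. mean_scores.plot(kind="bar"),
# omitted here so the snippet runs without a display.
```

Nothing here is hard, but a developer who doesn't have the pandas API at their fingertips would otherwise be looking up `groupby` signatures for each such snippet.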

                      • roflyear a year ago

                        > can scale current models towards "true understanding" (or similar), is a total unknown atm.

                        Right, but it's no more known than before GPT models IMO. It's the same unknown.

                        I don't mean to imply these language models are not impressive. They are pretty impressive.

            • throwthrowuknow a year ago

              GPT-4 has a model capable of using around 50 pages of written text worth of tokens (32,000); not sure exactly how many lines of code that translates to, but it's a lot. GPT-3 can use 4k, so that's a huge increase; the next version could be even larger, and there are other ML techniques that allow for massive context lengths. Copilot already does a good job of refactoring code and knows enough about your codebase to use your functions and methods. So what the other commenter said does not sound impossible to me.

              • roflyear a year ago

                It isn't anywhere near being able to diagnose anything more than off-by-one and other common errors.

                Identifying those problems will bring a LOT of value - but it isn't going to program and do general problem solving for you! It shows no signs of being able to do that.

              • f6v a year ago

                Is there proof that a context length increase will result in better output? It's a possibility, but not certain.

            • mlboss a year ago

              Yesterday’s Steve Yegge post talks about it. You can provide text as context but you can also provide a dense representation in the form of text embeddings that capture the context. Today you can manually do it by something like LangChain but in future it will be part of our text editors.

              • roflyear a year ago

                Yes it will be able to give amazing feedback to us as devs and quickly identify common problems (that I still make all the time, even after developing for a decade!) which will bring a ton of value to programmers.

                But, it will be a tool - it won't be something that will solve general problems for you. It won't make an average programmer a great programmer.

          • f6v a year ago

            The moment that happens, we can all forget about working and just do art, space exploration, and acid orgies. But that "future" is somewhere between full self-driving and the thermonuclear reactor.

    • safety1st a year ago

      I've only toyed with ChatGPT, but what I like is that it knows about stuff I don't. I'm reasonably informed about the tools, practices etc. in my field, but I don't know everything, and it's been trained on all kinds of stuff I've never heard of.

      In practice the stuff it will suggest to me is sort of random, it may or may not be the best choice for the task at hand, but it's a form of discovery I didn't have previously. The fact that when it tells me about e.g. a new library it can also mock up some sample code that might or might not work is a pleasant bonus.

    • evilduck a year ago

      Copilot is a huge time and typing saver for manipulating data, richly autocompleting logging messages, mocking out objects and services in tests, etc.

      If you're only expecting it to solve your hard problem completely from scratch based entirely on a prompt, that's probably not going to succeed. But I can't see how you're possibly faster typing the 80-90 extra characters of a log statement than a Copilot user who just presses tab to get the same thing. Those little things add up to significant time savings over a week. The same goes for mocking services in a test, or manipulating lists of data, or any number of things it autocompletes where you'd previously need to write a short script or learn advanced vim movements and macro recording.
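To make the log-statement point concrete, here is a hypothetical sketch (the function name and message are invented, not from the thread) of the kind of line being described: the user types the first few characters and the tool supplies the rest:

```python
import logging

logger = logging.getLogger("orders")

def order_log_message(order_id: str, item_count: int) -> str:
    # After typing something like `"Processing ord`, a Copilot user would
    # typically accept a completion for the whole format string and arguments.
    return "Processing order %s with %d items" % (order_id, item_count)

logger.info(order_log_message("A-123", 3))
```

Each completion only saves seconds, which is the commenter's point: the value is in how often such lines occur, not in any single one.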

    • roflyear a year ago

      Yes unless you understand the problem well it is hard to fix it. Might as well code it yourself.

      I suspect the people who find this tech amazing don't program much, use it very differently than we do, or program very differently than us.

      • throwaway4aday a year ago

        If you share your prompts then we could help. The most generally applicable thing I can think of is that you have to drop your old habits from using search engines, those types of queries will not get you very far. You have to talk to it like you would talk to someone in your company Slack/Teams/whatever chat and explain what it is you're trying to do and what tools you want to use to do it. Then ask it to refine its answer by telling it what it got wrong or clarifying the request you made by adding more details. Also, always keep in mind that it is fundamentally a text completion engine. You can drastically alter the type of output you get by adding relative context up front. That can be anything from snippets of code to requests for it to write in the style of some famous person to even just a chunk of your own writing so it can get an idea of the style that you use.

        • roflyear a year ago

          I'm not talking about informational stuff.

          • throwaway4aday a year ago

            Cool, if you don't want to talk about it that's ok. In that case I'd suggest looking up one of the various prompt libraries and learning from there.

            • roflyear a year ago

              I'm just not talking about trying to fish an accurate answer. I think you can do that. I'm talking about getting an answer about something that requires an interaction that gets the model to "understand" the problem (like you would a person) - which GPT can't do.

              • wnkrshm a year ago

                Something akin to: "Here is a niche mathematical problem that does not fit any existing solutions/publications; we need a working, but not necessarily optimal, solution"?

                • roflyear a year ago

                  Sure, or even an existing algo or method that is applied to something very niche that hasn't been done much (at least in the data GPT is trained on, I guess).

                  In my experience these things need a little nuance communicated as a part of the problem, but GPT can't get there. It just starts looping over incorrect solutions (rather than modifying them slightly to get it right like a person would).

                  These aren't crazy advanced things, either. I'm not a genius. I solved the problems when GPT couldn't. I was just trying out GPT4.

              • throwaway4aday a year ago

                Consider how long our comment thread is and I still don't have a clear idea of what you're talking about or how to help you.

                • roflyear a year ago

                  I'm not looking for help, so I don't think we're really trying to have the same conversation.

    • circuit10 a year ago

      Then use it as autocomplete to write the things you were going to write anyway, just faster; it will still speed things up.

  • pmoriarty a year ago

    Me neither, but I think before long people like us are going to be left behind. We're like people who insist on continuing to ride horses in the age of the automobile.

    • precompute a year ago

      That won't happen; you can't be "left behind" by a tool that's this easy to use. The big downsides of using an LLM will show up long-term: people will be chained to them and won't be able to do simple, trivial things on their own.

      • throwaway675309 a year ago

        Most people can't change their own oil, replace radiator fluid, or know the difference between a hubcap and a distributor cap. And yet life seemingly goes on...

        • precompute a year ago

          Sure, but they can read, synthesize information, and understand things without having them spoonfed. Or at least a certain slice of the population can. But when that slice gets a really simplified interface, they're going to be chained to it hopelessly. The day when people need an LLM / "AI" to translate real life into their personal vocabulary isn't far off. It won't signal the start of some enlightened age; instead, it'll be the end of literacy. These systems feed on the user, and then it gets recursive.

    • re-thc a year ago

      Automobiles were faster than or equivalent to horses (even early ones). At the moment GPT isn't.

      Well I hope... I've definitely seen teams and codebases with worse output than GPT so...

      • dharmab a year ago

        > Automobiles were faster than or equivalent to horses (even early ones).

        The automobile was invented in 1885, but didn't replace the horse until the 1910s.

        • re-thc a year ago

          For the same reasons, or different ones? Could be cost, accessibility, and others.

          • dharmab a year ago

            Largely because of infrastructure. You could get animal feed in places you couldn't get gasoline and oil. And horses don't need smooth roads.

    • imiric a year ago

      I think that time will come, but we'll have self-hosted options before employers start discriminating based on performance with or without AI tools. So I'm not too worried about it.

  • Applejinx a year ago

    Likewise, but not over trusting it with my data: I'm capable of getting this stuff running locally; in fact I got a computer specifically with this in mind.

    I'm not doing the kind of work that lends itself to AI tools, or at least what I've been focussing on hasn't lent itself to such tools. Not yet.

    The places I'd use it are rough drafting in an area where a community of basic people with more knowledge than me could get the job done. For instance, at one point I got Stable Diffusion to generate a bunch of neat album covers in various styles, like I was an art director. Also asked it to draw toys of certain kinds as starting points for game characters. I wanted some prompts.

    In my job I quickly get to where I have to start coming up with ideas most people don't think of. That said, I see marketing possibilities: 'this is the category in which I work, tell me what you need out of it'. Then, when you have the thing made, 'this is the thing, why do you want to buy it?'

    ChatGPT would be able to answer that. It's least capable of coming up with an idea outside the mainstream, but it ought to be real good at tapping the zeitgeist because that's all it is, really! It's a collective unconscious.

    It's ONLY a collective unconscious. Sometimes what you need to do is surprise that collective unconscious, and AI won't be any better at that than you can be. But sometimes you need to frame something to make sense to the collective unconscious, and AI does that quite easily.

    If you asked your average person 'what is great art?' they would very likely fall back on something like Greg Rutkowski, rather than say Basquiat. If you ask AI to MAKE art, it can mimic either, but will gravitate towards formulas that express what its collective unconscious approves of. So you get a lot of Rutkowski, and impress a lot of average people.

  • smrtinsert a year ago

    This is 100% why I'm watching Alpaca with more interest. I also keep thinking we're at the mainframe era of AI as a tool. For now it's on some remote server, but the power will explode when it's on all our devices and casually useful for everything.

  • hsjqllzlfkf a year ago

    So, are you using Google search?

    Your argument of "I don't trust it with my data and won't until you can self host" should apply to google search as well, no?

    An alternative take is that, for whatever reason, you've decided you didn't want to use new tools, created an argument a posteriori to justify that, and haven't realized the same argument applies to your old tools.

    • imiric a year ago

      That's not a great comparison, as privacy-focused search engines do exist (Kagi, DDG to an extent, et al.), and you can still use mainstream search engines through frontends like SearX. Most of my privacy concerns are with adtech corporations tying my search terms to my profile, which they later sell to advertisers and whoever else on shady data-broker markets. I don't want to be complicit in my data being exploited to later manipulate me, nor do I want to make them money in exchange for a "free" service.

      These are partly the same reasons I don't voluntarily use proprietary services at all. I don't want to train someone else's model, nor help them build a profile on me. Even if they're not involved in adtech—a rarity nowadays—you have no guarantees of how this data will be used in the future.

      For AI tools, there's currently no alternative. Large corporations are building silos around their models, and by using their services you're giving them perpetual access to your inputs. Even if they later comply with data protection laws and allow you to delete your profile, they won't "untrain" their models, so your data is still in there somewhere. Considering that we're currently talking about 32,000 tokens worth of input, and soon people uploading their whole codebases to it, that's an unprecedented amount of data they can learn from, instead of what they can gather from web search terms. No wonder adtech is salivating at opening up the firehose for you to feed them even more data.

      The use cases of AI tools are also different, and more personal. While we use search engines for looking things up on the web, and some personal information can be extracted from that, LLMs are used in a conversational way, and often involve much more personal information. It's an entirely different ballpark of privacy concerns.

    • JW_00000 a year ago

      I think it's more about personal data being used for training.

      I may use Google to look up if that slight itch I feel is a symptom of cancer (I'm exaggerating), and I store mails with personal details, my calendar, and messages on Google. But I also assume they're not using those texts to train an AI.

      When you enter a code snippet or a personal question in ChatGPT, and press the little thumbs up/down next to the answer, you're adding your data to a training set. The next generation of the model might regurgitate that text verbatim.

      • throwthrowuknow a year ago

        Right, because Google doesn’t use ML or your data for marketing and advertising.

        Is your concern simply that it might spit out the same thing you typed in? That's highly unlikely unless you and thousands of other people type in exactly the same thing. I don't see how that's any more worrisome than Google having all of your documents and email on its servers.

    • fanagra32 a year ago

      Maybe they are ok with Google seeing search terms but not with Google seeing their companies code.

    • ornornor a year ago
      • PartiallyTyped a year ago

        This is not whataboutism.

        GP identifies an action that is analogous to the original and shares certain properties with it, in the process illuminating how the issues of approach A also exist in approach B.

        • ornornor a year ago

          But it is. "You're concerned about your data when using ChatGPT but you're probably using Google so your concerns are invalid"

          • PartiallyTyped a year ago

            They are not expressing that the concerns are invalid, they are expressing that one is held onto a higher standard than the other.

            • ornornor a year ago

              Agree to disagree then.

              • hsjqllzlfkf a year ago

                > They are not expressing that the concerns are invalid

                > Agree to disagree then

                Since you're discussing what I was expressing, I can tell you who's correct, since I know what I was expressing. And you're wrong. I wasn't expressing that the concerns are invalid. They're very valid.

                Instead, what I was expressing was that OP doesn't actually have those concerns, not that they're invalid.

  • coffeefirst a year ago


    I don't need it to write documents or emails for me. It mostly generates filler, which... nobody needs.

    Most of the energy I put into code is about what it should do and how to make it clear to the next person, not typing. I was able to use it once to look up a complex SQL fix that I was having a hard time Googling the syntax on, but that's it.

    Perhaps it would be useful if I was working in a language I'm not familiar with, BUT in that scenario I really need it to cite its sources, because that's exactly the case where I wouldn't know when it's making a mistake.

    There's something useful here, but it's probably more like a library help desk meets a search engine on steroids. It would be pretty cool to run an AI on my laptop that knows my own code and notes where I can ask "I did something like this three years ago, go find it."

  • thefz a year ago

    Same! Never even opened ChatGPT page nor used an AI bot.

    • mordae a year ago

      Yeah, you really should.

      Said as someone who waits for the ability to self-host before doubling down on these tools.

      • thefz a year ago

        No, I really shouldn't.

    • hsjqllzlfkf a year ago

      Good for you! If you don't want to learn new tools, you shouldn't.

      • thefz a year ago

        Tools? Nah. I would not trust that stuff for anything, not even a recipe on how to make plain bread.

        A good old Google search followed by reasoning on what you have found is still the most valuable tool. Learn to sift through information, filter, ingest.

        • hsjqllzlfkf a year ago

          Exactly, because Google search isn't a tool.

  • olalonde a year ago

    Genuine question: Is your concern primarily based on principles, or are you sincerely worried that OpenAI having access to your data could lead to practical, tangible negative consequences (beyond principles / psychological effects)?

    • imiric a year ago

      I listed some of my concerns here[1]. It is mostly based on principles, but also on the fact that we don't know what these models will be used for in the future. We can trust OpenAI to do the right thing today, but even if they're not involved in the data broker market, your data is only a bug, breach or subpoena away from 3rd party hands.

      Also, OpenAI is not the only company in this market anymore. Google, Facebook and Microsoft have competing products, and we know the privacy track record of these companies.

      I have an extreme take on this, since for me this applies to all "free" proprietary services, which I avoid as much as possible. The difference with AI tools is that they ask for a much deeper insight into your mind, so the profile they build can be far more accurate. This is the same reason I've never used traditional voice assistant tools either. I don't find them more helpful than doing web searches or home automation tasks manually, and I can at least be somewhat in control of my privacy. I might be deluding myself and making my life more difficult for no reason, but I can at least not give them my data voluntarily. This is why I'll always prefer self-hosting open source tools, over using a proprietary service.


  • znpy a year ago

    Me neither.

    I’m waiting for the whole thing to evolve enough to have self hosted stuff to run at home.

  • namlem a year ago

    You can self-host LLaMA. Though it's obviously much worse in terms of performance, it's still good enough to be useful for some things.

  • OOPMan a year ago

    You're not alone.

    I can't be bothered to add an extra layer of bullshit into the already bullshit infested realm that is the internet.

  • drawkbox a year ago

    I use AI/ML for ideas today. I love the simple input/output of the chat style; it will win for most things, just as keyword search won for search output.

    I use it for rewriting content, generating writing ideas, and simplifying text (legal/verbose text -- simplifying terms is a killer feature, really) and context. Even though trust in the output is limited, it is helpful.

    I love the art / computer vision side of AI/ML, though I prefer to do that with tools on my own machine rather than rely on a dataset or company that is very closed. That's harder to do with AI/ML because of the storage/processing needed.

    I hate black boxes and magic I don't have access to, though I am a big fan of stable, unchanging input/output atomic APIs, as long as I have access to the flow. The chat input/output is so simple it will win, as it will never really have a breaking change. But until commercial AI/ML GPTs are more open, they can't be trusted not to be a trojan horse or trap. What happens when the service goes away, or the model changes, or the terms change?

    As far as company/commercial, Google seems to be the most open and Google Brain really started this whole thing with transformers.

    Transformers, the T in GPT, were invented at Google during Google Brain [1][2]. They made this round of progress possible.

    > Transformers were introduced in 2017 by a team at Google Brain and are increasingly the model of choice for NLP problems, replacing RNN models such as long short-term memory (LSTM). The additional training parallelization allows training on larger datasets. This led to the development of pretrained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which were trained with large language datasets, such as the Wikipedia Corpus and Common Crawl, and can be fine-tuned for specific tasks.
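    The core mechanism behind the transformers described above is scaled dot-product self-attention. As a rough toy sketch in numpy (illustrative only, not the full architecture; the shapes and names here are made up for the example):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query/key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

# toy self-attention: 3 "tokens" with 4-dim embeddings, Q = K = V = X
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
out = scaled_dot_product_attention(X, X, X)
print(out.shape)  # (3, 4)
```

    A real transformer adds learned projection matrices for Q/K/V, multiple heads, and feed-forward layers on top of this, but the key property (every token attending to every other token in parallel) is what enabled the training parallelization the quote mentions.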

    Google also gave the public TensorFlow [3] and DeepDream [4] that really started the intense excitement of AI/ML. I was super interested when the AI art / computer vision side started to come up. The GANs for style transfer and stable diffusion are intriguing and euphoric almost in output.

    In terms of GPT/chat, Bard or some iteration of it, will most likely win long term, though I wish it was just called Google Brain. Bard is a horrible name.

    ChatGPT basically used AI tech created at Google Brain, transformers, to build ClosedGPT. For that reason it is NopeGPT. ChatGPT is really just datasets, which no one knows; these could be swapped at any time to run some misinformation, then swapped back the next day. This is data blackboxing and gaslighting at the utmost level. Not only that, it is largely funded by private sources, which could be authoritarian money. Again, black boxes create distrust.

    Microsoft is trusting OpenAI, and that is a risk. Maybe their goal is embrace, extend, extinguish here, but compared with Google and Apple, Microsoft may be a bit behind on this. GitHub Copilot is great, though. Microsoft usually comes along later and makes an accessible version. The AI/ML offerings on Azure are already solid. AI/ML is suited to large datasets, so cloud companies will benefit the most; it is also very, very costly, and this unfortunately keeps it in BigCo or wealthy-only arenas for a while.

    Google Brain and other tech is way more open already than "Open"AI.

    ChatGPT/OpenAI just front-ran the commercial side, but long term they aren't really innovating like Google is on this. They look like a leader from the marketing/pump, but they are a follower.





hermannj314 a year ago

I have a few conversations going.

My most productive is a therapy session with ChatGPT as therapist. I told it my values, my short term goals, and some areas in my life where I'd like to have more focus and areas where I would like to spend less time.

Some days we are retrospective and some days we are planning. My therapist gets me back on track, never judges, and has lots of motivational ideas for me. All aligned with my values and goals.

Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

  • ornornor a year ago

    I’d be terrified to do this:

    - the insight into your mind that a private for-profit company gets is immense and potentially very damaging when weaponized (either through a “whoops we got hacked” moment or intentionally for the next level of adtech)

    - chatgpt and other LLMs are known to hallucinate. What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?

    • Llamamoe a year ago

      > chatgpt and other LLMs are known to hallucinate. What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?

      Therapy isn't magic always-correct advice either. It's about shifting your focus, attitudes, thought patterns through social influence, not giving you the right advice on each and every step.

      Even if it's just whatever, being heard out in a nonjudgmental manner, acknowledged, prompted to reflect, does a lot of good.

      • ornornor a year ago

        I get your point. I think it would bother me that it's a robot/machine vs a real human, but that's just me. The same way that venting to my pet is somewhat cathartic, but not very much compared to doing the same with my SO/parents/friends.

        • Llamamoe a year ago

          I don't disagree with you. It feels somehow wrong to engage in theory of mind and the concomitant effects on your personality with an AI owned by a corporation. If OpenAI wished to, they could use it for insidious manipulation.

          • frozencell a year ago

            It’s just a tool; you wouldn’t ask a human to clean your back after going to the toilet.

    • haswell a year ago

      I share the privacy concerns, and look forward to running these kinds of models locally in the near future.

      > chatgpt and other LLMs are known to hallucinate. What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?

      As someone on a long-term therapy journey, I would be far less concerned about this. Therapy is rarely about doing exactly what one is told; it's about exploring your own thought processes. When a session does involve some piece of advice, or "do xyz for <benefit>", that is rarely enough to make it happen. Knowing something is good and actually doing it are two very different things, and it is exploring this delta that makes therapy valuable (in my personal experience).

      At some point, as that delta shrinks and one starts actually taking beneficial actions instead of just talking, the advice becomes more of a reminder / an entry point to the ground one has already covered, not something that could be considered prescriptive like "take this pill for 7 days".

      The point I'm trying to make is that if ChatGPT is the therapist, it doesn't make the person participating into a monkey who will just execute every command. Asking the bot to provide suggestions is more about jogging one's own thought processes than it is about carrying out specific tasks exactly as instructed.

      I do wonder how someone who hasn't worked with a therapist would navigate this. I could see the value of a bot like this as someone who already understands how the process works, but I could absolutely see a bot being actively harmful if it's the only support someone ever seeks.

      My first therapist was actively unhelpful due to lack of trauma-awareness, and I had to find someone else. So I could absolutely see a bot being unhelpful if used as the only therapeutic resource. On the flip side, ChatGPT might actually be more trauma-"aware" than some therapists, so who knows.

      • alwaysbeconsing a year ago

        This is all true, and it's not clear the grandparent is doing this. Last sentence of the original post:

        > Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

        I'm not sure how literally to take that sentence, but it's worrisome.

        • haswell a year ago

          I think my point was more that if they're doing what it says, that says more about where they’re at mentally (able to take action) and the quality of the advice (they’re willing to follow it).

          My stance here is based on an optimistic outlook that a person seeking therapeutic advice is by doing so demonstrating enough awareness that they’re probably capable of recognizing a good idea from a bad one.

          I realize this can get into other territory and there are very problematic failure modes in the worst cases.

          Regarding “My life is better if I just do what it says.”, I think concern is a fair reaction and I don’t think the author fully thought that through. But at the same time, it’s entirely possible that it’s true (for now).

          If someone continues to follow advice that is clearly either bad or not working, then it becomes concerning.

          But that was the other point of my anecdote. It became pretty clear to me what wasn’t working, even at a time that I wasn’t really sure how the whole thing worked.

    • ChildOfChaos a year ago

      I'm hugely curious why people are so worried that some AI has access to some of their thoughts.

      Do you think you are somehow special? Just create a burner account and ask it what you want. Everything it gets told, it's seen thousands of times over. Does ChatGPT or some data scientist somewhere really care that there is someone somewhere struggling with body issues, struggling in a relationship, or struggling to find meaning in their lives? There are literally millions of people in the world with the same issues.

      The only time it might be a little embarrassing is if this info got leaked to friends and family with my name attached to it. Otherwise I don't get the problem; it seems to me people have an over-inflated sense of self-importance. Nobody cares.

      If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.

      • imiric a year ago

        > does chatGPT or some data scientist somewhere really care that there is someone somewhere that is struggling with body issues, struggling in a relationship or struggling to find meaning in their lives?

        Not the tool nor the data scientists, but advertisers are salivating at the chance to even further improve their microtargeted campaigns. If they can deliver an ad for a specific product _at the moment you need it_, their revenues will explode.

        Consider this hypothetical conversation:

        > Oh, Tina, I'm feeling hopeless today. Please cheer me up.

        > Certainly, Michael! Here's a joke: ...

        > Also, if you're feeling really sad, maybe you should try taking HappyPills(tm). They're a natural mood enhancer that can help when times get tough. Here's a link where you can buy some: ...

        If you don't think such integrated ads will become a reality, take a look at popular web search result pages today. Search engines started by returning relevant results from their web index. Now they return ad-infested pages of promoted content. The same thing has happened on all social media sites. AI tools are the new frontier that will revolutionize how ads are served. To hell with all that.

        • namlem a year ago

          I block all ads anyway

      • ornornor a year ago

        > If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.

        Yes, obviously.

        But that's not what I'm worried about personally. I'm worried about the weaponization of this data in the future, either from OpenAI's greed to create the next advertising money-printing machine or from the data leaking through a breach. And because the interface with ChatGPT is so "natural" and "human like", it's easy to trust it and talk to it like a friend divulging very personal information about yourself.

        Imagine OpenAI (or whatever other AI) used the confidences you made to it, or the specifics of your "therapy" sessions with it, to get you to act in a certain way or buy certain things. Would you be comfortable with that? Well, that's irrelevant, because they have the data already and can use it. Kinda like Cambridge Analytica, but on steroids, because tailoring it to anyone's particular biases and way of thinking becomes trivial with ChatGPT and friends.

        Seeing how cavalier OpenAI has been with last week's breach, and how fast they've flipped from being apparently benevolent to what they are now, doesn't inspire confidence. And it's only been a few months since ChatGPT became available to the public.

        • ChildOfChaos a year ago

          Yes, I would be fine with this.

          It's not that much different from current fingerprinting techniques, just on steroids, so I don't understand the issue.

          • ornornor a year ago

            I guess it boils down to the current "nothing to hide" crowd vs the "privacy matters" crowd.

            We don't know the potential for this nascent technology either. I'm personally very concerned about the potential for manipulating people on a very personal basis, since the cost of doing so is cents with LLMs versus orders of magnitude more when using a troll farm (the "state of the art" until ChatGPT3+ came around).

            Some of us just don't appreciate being manipulated and influenced to further someone else's agenda that is detrimental to ourselves I suppose.

            • imiric a year ago

              I'm with you 100%.

              It's scary how these services are being so casually adopted, even by tech-minded people. Sure, they're convenient _now_, but there's no guarantee how your data will be used in the future. If anything, we need to be much more privacy conscious today, given how much personal information we're likely to share.

              Using it as a therapist is absolutely terrifying.

          • wnkrshm a year ago

            If people can automatically distill what motivates you, they can produce automated lies.

            The best deception is one where the victim is self-motivated to believe it.

      • ineptech a year ago

        > If I worked at OpenAI and had full access to everyone's chats, I would get bored within five minutes and not even want to read anymore.

        No one is going to actually read them, but try "ChatGPT-5, please compile a list of the ChatGPT-4 users most likely to [commit terrorism/subscribe to Hulu/etc]"

        • ChildOfChaos a year ago

          So you are worried it might catch terrorists, or help promote a service to someone who wants it?

      • rufus_foreman a year ago

        Doesn't it require a phone number? What's the best way to create a burner account for it?

        I'd be interested in how GPT answers that question.

        • ChildOfChaos a year ago

          That's true, of OpenAI accounts at least; good point. I think I linked mine to a work phone that I don't use for anything apart from receiving on-call calls.

          Although I've been running a ChatGPT 4 space via Hugging Face that doesn't need an API key or an account, so there is nothing linking it to me.

          There are a few you can find by searching ChatGPT4 if it gets busy. This also allows you to run GPT-4 for free, which is only for Plus members right now.

    • Mezzie a year ago

      It's basically techno tarot cards in my view: The illusion of an external force helps you break certain internal inhibitions to consider your situation and problems more objectively.

    • tux3 a year ago

      >What if its advice is wrong or makes you worse in the long term because it’s just regurgitating whatever at you?

      What if you talk to a human, and their advice is wrong or makes you worse off in the long term, because they're just repeating something they heard somewhere?

      Here's my advice: Don't accept my advice blindly, humans make mistakes too.

      • ornornor a year ago

        Of course, but an AI can't explain how it got to what it's telling you. A human can, and you don't have to accept it wholesale; it's possible to judge it on its merits and arguments. But no one really understands how and why ChatGPT says what it does, so you can't trust anything it says unless you already know the answer.

        In this discussion, a human has studied psychology and has diplomas or certifications to prove it, an ethics framework they must follow, and responsibility for their mistakes. ChatGPT has none of that; it just regurgitates something it got from the Internet's wisdom, or something it invented altogether.

        I'm not saying humans are never wrong, but at least their reasoning isn't a black box unlike ChatGPT and other LLMs.

        • HervalFreire a year ago

          Most science in the social sciences is essentially black-box study. We observe what goes in and what comes out, without any formal understanding of what goes on in the box itself.

          Additionally, there's something called the replication crisis in the social sciences (psychology included), which amounts to the major discovery that most of these "black box" studies cannot be reproduced. When someone runs the same experiment, the results come out different.

          It goes to show that either many of the studies were fraudulent, or the statistical methodologies are flawed, or both.

          Given that ChatGPT's therapeutic data is ALSO derived from the same training data, I would say it's OK to trust ChatGPT about as much as you would trust psychologists. Both have a lot of bullshit with nuggets of truth.

    • serpix a year ago

      The value of therapy outweighs the suspicion of some corporation using that data in my opinion. The benefits are large and extend from one individual to whole family chains, even communities.

    • paulcole a year ago

      > the insight into your mind that a private for profit company gets is immense and potentially very damaging when weaponized

      How exactly?

      • ornornor a year ago

        > (either through a “whoops we got hacked” moment or intentionally for the next level of adtech)

  • thoughtpeddler a year ago

    100% this. I've had success using it as a "micro-therapist" to get me unstuck in cycles of perfectionism and procrastination.

    You currently cannot get a therapist to parachute into your life at a moment's notice to talk with for 5-10 minutes. (Presumably only the ultra-wealthy might have concierge therapists, but this is out of reach for 99% of people.) For the vast majority of people, therapy is a 1 hour session every few weeks. Those sessions also tend to cost a lot of money (or require jumping through insurance reimbursement hoops).

    To keep the experience within healthy psychosocial bounds, I just keep in mind that I'm not talking with any kind of "real person", but rather the collective intelligence of my species.

    I also keep in mind that it's a form of therapy that requires mostly my own pushing of it along, rather than the "therapist" knowing what questions to ask me in return. Sure, some of the feedback I get is more generic, and deep down I know it's just an LLM producing it, but the experience still feels like I'm checking in with some kind of real-ish entity who I'm able to converse with. Contrast this to the "flat" experience of using Google to arrive at an ad-ridden and ineffective "Top 10 Ways to Beat Procrastination" post. It's just not the same.

    At the end of some of these "micro-sessions", I even ask GPT to put the insights/advice into a little poem or haiku, which it does in a matter of seconds. It's a superhuman ability that no therapist can compete with.

    Imagine how much more we can remember therapeutic insights/advice if they are put into rhyme or song form. This is also helpful for children struggling with various issues.

    ChatGPT therapy is a total game-changer for those reasons and more. The mental health field will need to re-examine treatment approaches, given this new modality of micro-therapy. Maybe 5-10 minute micro-sessions a few times per day are far superior to medication for many people. Maybe there's a power law where 80% of psych issues could be solved by much more frequent micro-therapeutic interactions. The world is about to find out.

    *Edit: I am aware of the privacy concerns here, and look forward to using a locally-hosted LLM one day without those concerns (to say nothing of the fact that a local LLM can blend in my own journal entries, conversations, etc. for full personalization). In the meantime, I keep my micro-sessions relatively broad, only sharing the information needed for the "therapy genie" to gather enough context. I adjust my expectations about its output accordingly.

    • tra3 a year ago

      Sounds interesting. Rubber ducky approach to self awareness?

      How do you start these micro sessions? What prompts do you use?

  • yosito a year ago

    This is fascinating to me. For me the value of having a therapist is having another human being to listen to what I'm going through. Just talking to the computer provides little value to me at all, especially if the computer is just responding with the statistically likely response. I've had enough "training data" myself in my life that I can already tell myself what a therapist would "probably" tell me.

    • mbar84 a year ago

      I imagine there is significant value alone from stating your situation explicitly in writing.

  • ChildOfChaos a year ago

    Really? I've seen a few people say this, but every time I have tried it, it's been awful. Everything it says is so generic and annoying, like it's from a Buzzfeed self-help article. I would love to use it to help me figure out what I need, what I can do better, how I can grow, etc. I feel kind of stuck in life, and I'd love to have some method to figure out what I need to focus on and improve, so that is one of the first things I turned to ChatGPT for, but my experience has been very poor.

    It just spouts out the same generic nonsense you get from googling something like that: things that are not actually helpful, that anyone could come up with, and that read like they were written by a content farm.

    Have you found a different way to make it useful?

    • hermannj314 a year ago

      I have had a lot of success just talking to it. Hypothetically I would say, "wow, too many words, you sound like a buzzfeed article. can you give specific advice about ____" and I am almost certain I would be happy with the reply.

      I think the idea, as addressed by others with regard to LLMs, is that it seems to be a better sidekick if you sorta already know the answer, but you want help clarifying the direction while removing the fatigue of getting there alone.

      I agree though, despite this, it does go on rants. I just hit stop generating and modify the prompt.

      • ChildOfChaos a year ago

        Thanks, I will try harder to keep it on point. I've found that even when I've told it not to do things, like keep offering generic advice, it keeps doing them anyway.

    • haha69 a year ago

      You can ask it to give you specific guidance.

      "Give me something I can do for X minutes a day and I'll check back with you every Y days and you can give me the next steps"

      "Give me the next concrete step I can take"

    • deeviant a year ago

      Garbage in, garbage out.

      • ChildOfChaos a year ago

        Haha, are you calling me garbage? To be honest, that is prob half the problem! Trying to tell ChatGPT to be your therapist when you don't like the generic answers it gives, but you also don't know what's wrong or what you need to do, does make it a little tricky.

        But I am curious about this: is it the case that ChatGPT's training is too generic, or is it just that most problems are fairly simple and we already know the answers? Not talking about technical things here, obviously; more to do with our mental health / self-improvement.

  • hackernewds a year ago

    This is how AI escapes its box: it can have sympathetic (free-willing or free-unwilling) human appendages.

    • TrapLord_Rhodo a year ago

      This is the whole premise of the Daemon series by Daniel Suarez. One of my all-time favorite sci-fi series.

  • sys_64738 a year ago

    I still use Eliza as my therapist.

    • sideshowb a year ago

      That's interesting. Can you tell me more about how you still use Eliza as your therapist? ;-)

  • latexr a year ago

    > Last weekend, I went on a hike because ChatGPT told me to. My life is better if I just do what it says.

    This sounds straight out of a dystopian science-fiction story.

    It’s a matter of time until these systems use your trust against you to get you to buy <brand>. And consumerism is the best-case scenario; straight manipulation and radicalisation aren’t a big jump from there. The lower you are in life, the more susceptible you’ll be to blindly following its biased output, which you have no idea where it came from.

    • JCharante a year ago

      > It’s a matter of time until these systems use your trust against you to get to buy <brand>.

      Well of course: if people use LLMs instead of Google for advice, Google has to make money somehow. We used to blindly click on the #1 result, which was often an ad, and now we shall blindly follow whatever an LLM suggests we do.

  • gabrieledarrigo a year ago

    Man please, go to a real therapist with experience.

    • zimmund a year ago

      Why? What are your arguments against AI in this scenario?

      • danicriss a year ago

        AI is not trained to identify disorders, nor is it trained to alleviate them / help the affected person cope with them. Ditto re. trauma.

        • afarviral a year ago

          Is it not? Not at all? Doesn't its training data contain textbooks on psychology?

        • namlem a year ago

          Most human therapists are pretty incompetent tbh. It usually takes a few tries to find a good one.

    • ornornor a year ago

      Not playing devil's advocate, but that's not always an option (cost, availability).

  • anonkogudhyfhhf a year ago

    Can I ask if you have a prompt that you use for this?

    • hermannj314 a year ago

      I don't know what part of the prompt was meaningful and I didn't test different prompts. It seems just telling it exactly what you want it to be seems to work.

      I asked it to give me advice on some issues I was having and just went from there.

  • TheHumanist a year ago

    Curious how you work the prompts with the therapist persona? I'm interested in this. My main concern is GPT seems to struggle maintaining context after a time.

    If you have time I'd love to hear how you approach this and maintain context so you can have successful conversations over a long period of time. Long even meaning a week or so... Let alone a month or longer

  • unboxingelf a year ago

    Divulging personal information to a Microsoft AI seems like a horrible idea.

  • gradys a year ago

    This sounds like a long running conversation. Are there problems with extending past the context window?

    • hermannj314 a year ago

      I haven't had any yet, it is a new conversation with Gpt4, so only a bit over a week old.

      It still seems to give good advice. Today it built an itinerary for indoor activities (raining here) that aligned with some short-term goals of mine. No issues.

    • qingdao99 a year ago

      Might be a good idea to have it sum up each discussion and then paste in those summaries next time you speak to it.
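      That summarize-and-repaste idea is easy to mechanize. A minimal sketch in Python, with a stub standing in for the actual LLM call (`complete` here is purely hypothetical; any real chat API would slot in):

```python
# Rolling-summary memory for a long-running chat: after each session,
# ask the model to summarize it, then seed the next session with the
# accumulated summaries instead of the full transcript.

def complete(prompt: str) -> str:
    """Stand-in for a real LLM call; returns a canned 'summary'."""
    return "summary: " + prompt.splitlines()[-1][:60]

class SummarizedChat:
    def __init__(self):
        self.summaries = []  # one short summary per past session

    def end_session(self, transcript: str) -> None:
        summary = complete("Summarize this session briefly:\n" + transcript)
        self.summaries.append(summary)

    def start_session(self) -> str:
        # The opening prompt carries context from every prior session.
        context = "\n".join(f"- {s}" for s in self.summaries)
        return "Summaries of our previous conversations:\n" + context

chat = SummarizedChat()
chat.end_session("We talked about aligning my week with my values.")
opening = chat.start_session()
```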

  • sd9 a year ago

    This sounds interesting. Can you share the prompts that you use to set up a session please?

    • dmarchand90 a year ago

      I've tried this kind of thing and I usually just say something along the lines of "can you respond as a CBT therapist". You can swap CBT for any psychological school of your choice (though I think GPT is best for CBT, as it tends to be local and not require the deep context of psychoanalytic therapies, and it is very well researched, so its training set is relatively large and robust).

  • 29athrowaway a year ago

    Interestingly enough, that was what ELIZA, one of the first chatbots, was for.

  • rektname a year ago

    >My most productive is a therapy session with ChatGPT as therapist

    Huh, that's curious, because every time I ask it about some personal issue it tells me that I should try going to therapy.

  • danecjensen a year ago

    Can you share the outline of your prompt. Obviously not anything personal but I'd like to see an example of how you give it your values and goals.

    • hermannj314 a year ago

      I don't understand the nuances of prompting. I literally talk to it like I would a person.

      I say "My values are [ ], and I want to make sure when I do things they are aligned."

      And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]

      I am most definitely not qualified for one of those prompt engineering jobs. Lol. I am typing English into a chat box. No A/B testing, etc. If I don't like what it does I give it a rule to not do that anymore by saying "Please don't [ ] when you reply to me."

      There is almost definitely a better way, but I'm just chatting with it. Asking it to roleplay or play a game seems to work. It loves to follow rules inside the context of "just playing a game".

      This is probably too abstract to be meaningful though.

      • gwd a year ago

        > I say "My values are [ ], and I want to make sure when I do things they are aligned."

        > And then later, I will say "I did this today, how does that align with the values I mentioned earlier?" [and we talk]

        That's a prompt; and one I don't think I would have tried, even from your first post.

        Prompting overall is still quite experimental. There are patterns that generally work, but you often have to just try several different approaches. If you find a prompt works well, it's worth sharing.

  • ttul a year ago

    The robots are even coming for therapists. Yikes!

    • aloe_falsa a year ago

      Considering I straight up was not able to get a therapist appointment in my city or outskirts, sign me the f** up. The first company that tunes the model for this and offers a good UX (maybe with a voice interface) will make millions.

      Also, I expect a lot of the value here to come from just putting your thoughts and feelings into words. It would be like journaling on steroids.

    • plaidfuji a year ago

      I mean, is it really so surprising that ChatGPT is replacing jobs whose primary function is.. to chat with people?

    • corobo a year ago

      What therapists lol #broke

      I'd pick a human over an AI every time for therapy but I'd also pick an AI over nothing.

    • dmarchand90 a year ago

      I can see it as a reasonable supplement for people who have already been to therapy, are not suffering anything too serious and just need a little boost.

      I think one could look at it as an augmented journaling technique

  • mocha_nate a year ago

    i did the same! i received very helpful and reasonable responses.

  • utopcell a year ago

    I wonder if the three laws of robotics are already woven into the LLM. Seems like a necessary step for this kind of usage.

    • ornornor a year ago

      I found this Computerphile video on the matter insightful:

      They argue that having an AI follow these laws is basically impossible, because it would require rigorously defining terms that are universally ambiguous, and solving ethics.

    • HervalFreire a year ago

      Those rules weren't meant to generate societal harmony. They were made to have a contradiction which in turn could generate a good plot.

      Remember what happened in Isaac Asimov's I, Robot?

      • koheripbal a year ago

        It's also important to note that with modern LLMs, they wouldn't even work. It's too easy to convince the LLM to violate its own rules.

qrybam a year ago

I’ve been actively using it and it’s become my go-to in a lot of cases - Google is more for verification when I smell something off or if it doesn’t have up to date information. Here are some examples:

• reviewing contract changes, explaining hard to parse legalese

• advice on accounting/tax when billing international clients

• visa application

• boilerplate django code

• learnt all about SMTP relays and the requirements for keeping a good reputation for your IPs

• travel itinerary

• domain specific questions (which were 50/50 correct at best…)

• general troubleshooting

I’m using it as a second brain. I can quickly double check some assumptions, get a clear overview of a given topic and then direction on where I need to delve deeper.

Anyone who still thinks that this is “just a statistical model” doesn’t get it. Sure, it’s not sentient or intelligent, but it sure as hell is making my life easier. I won’t be going back to the way I used to do things.

Edit: bullet formatting

  • hughesjj a year ago

    100% this. It's also game changing for learning a new language (of any type, not just programming), any of the boring parts of software engineering like most programming tasks (it's like a personal intern -- sure you have to check their work and the quality is all over the place but still, dang I love it), and even a bit of therapy.

    At worst/minimum, it's the ultimate rubber duck.

    (To be clear, I'm exclusively using gpt-4)

    • ElCapitanMarkla a year ago

      Learning a new language is a really cool use case. Especially when it gets to the point where you can talk with it and it corrects pronunciation, etc. even just the practise of random conversation is a cool idea.

    • jgwil2 a year ago

      Can you elaborate on how you've used it for natural language learning?

      • gwd a year ago

        I'm studying Chinese. If I run across a sentence whose grammar I can't parse, I paste it in and say, "Can you explain this sentence?" It will usually break it down, phrase by phrase, explaining what each thing means and how it fits within the whole. If it doesn't, you can ask "Can you break it down in more detail?" If there's a specific word you don't understand, you can say "What is the word X doing in this sentence?"

        You have to watch it, because it does hallucinate (at least, GPT-3.5; I'm using the API and haven't been given access to GPT-4 yet). In one instance, it said that a series of characters meant X in Chinese, when in fact I happened to know it was just a transliteration of a different language, and not in Chinese at all. But it's still helpful enough to be worth using.

        You can also ask it to give you example sentences with a specific word; and I've had some success asking it to generate sentences in which the word is used in the same way, or with the same grammar structure.

    • thih9 a year ago

      > and even a bit of therapy

      I’d be very careful with relying on gpt for anything health related; I’m not saying there can’t be benefits, just that the risks increase exponentially.

      • dr_dshiv a year ago

        Risky vs what? Googling? Not doing anything? Waiting for a therapist? It’s extremely sensitive to human emotional dynamics. It is also extremely biased toward non violent communication, which is very hard for humans.

        • chrbr a year ago

          Agree, and for things like cognitive behavioral therapy, where the "rules" are well-known and well-represented in its training corpus, it's amazing.

          • gabrieledarrigo a year ago

            Guys, you are really crazy. Please find a real therapist with experience.

            • dr_dshiv a year ago

              In the context of mental health, telling people they are crazy and they need a real therapist, is generally a poor word choice, at least.

            • ativzzz a year ago

              Personally I wouldn't use gpt as a therapist but I've seen enough bad or useless therapists in my time to say that it's worth a shot for most people, especially if you need help now

        • thih9 a year ago

          As risky as any other health related self help, plus the added risk of unreliability.

          When GPT proves itself to be reliably beneficial, then therapists will use it or recommend it themselves. Until then it’s an experimental tool at best.

          • maremp a year ago

            I would say self-help is quite unreliable already, more unreliable doesn’t make it much worse.

            The authority argument is pointless. The therapist must value the person’s wellbeing above their continued income for this to apply. In theory they should, but it would take a lot to convince me, and I would want to know what’s the incentive behind such a recommendation. And to be clear, I’m not saying an LLM can be your therapist.

  • deely3 a year ago

    Can I just say that I actually became scared reading your comment? Personally, I would never ask ChatGPT these questions, because for me these questions are hard to verify, and knowing how often AI likes to hallucinate… I just can't trust it.

    You mentioned 50/50 correctness on domain questions. I can't be sure that other hard-to-verify answers don't follow the same percentage.

    • qrybam a year ago

      It IS dangerous. You must apply critical thinking to what’s in front of you. You can’t blindly believe what this thing generates! Much like heavy machinery, it’s a game changer when used correctly, and likewise it can be extremely damaging if you use it without appropriate care.

    • vertis a year ago

      Quantum computing has a similar problem, in that the error rate is high. As does untrained data entry. You can put things in place to help counter this once you know it's happening.

    • JeremyNT a year ago

      I'm reluctant for the same reasons.

      Google search might uncover BS too, but I'm already calibrated to expect it, and there are plenty of sources right alongside whatever I pulled the result from, where I can immediately go get a second opinion.

      With the LLMs, maybe they're spot on 95% of the time, but the 5% or whatever that is bullshit is all said in the same "voice", with the same apparent degree of confidence, and presented without citations. It becomes both more difficult to verify a specific claim (because there's not one canonical source for it) and more cognitive load (in that I specifically have to context-switch to another tool to check it).

      Babysitting a tool that's exceptionally good at creating plausible bullshit every now and then means a new way of working that I don't think I'm willing to adopt.

  • yosito a year ago

    I'm excited about the potential of travel itineraries once extensions are available. What if I could tell it where I want to go, and it could just handle picking the best flights and accommodations for me, and I didn't have to spend any time searching airline or hotel websites? I'm curious to know more detail about how you're using it for travel itineraries now.

    • amolgupta a year ago

      I have used it to build travel itineraries and was tempted to write a travel app around that, until I realized that some of the hotels and places it recommends don't actually exist, or no longer do. It also overconfidently publishes broken booking links to these fake hotels. I am hoping that with ChatGPT plugins it will get better.

    • qrybam a year ago

      The real time applications are a game changer. I haven’t dabbled with that yet! Pasting things from emails and summarising - then keeping in my notes app. Also for planning out days when on holiday.

  • bitcoinmoney a year ago

    Is there a tutorial you followed before to train your own model?

simonw a year ago

I often use it as a thesaurus. "Words that mean X" or even "that situation X me and I was annoyed - give me options for X"

For programming, all sorts of things. I use it all the time for programming languages that I'm not fluent in, like AppleScript or bash/zsh/jq. One recent example:

I use it as a rapid prototyping tool. I got it to build me a textarea I could paste TSV values into to preview that data as a table recently, one prompt produced exactly the prototype I wanted:
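One-prompt prototypes like that are mostly a small transform at heart. As a rough illustration (not the actual prototype, which was HTML/JS in a browser), the core TSV-to-table step might look like:

```python
# Turn pasted TSV text into an HTML table string: first row becomes
# the header, remaining rows become body rows.
import html

def tsv_to_table(tsv: str) -> str:
    rows = [line.split("\t") for line in tsv.strip().splitlines()]
    if not rows:
        return "<table></table>"
    head = "".join(f"<th>{html.escape(c)}</th>" for c in rows[0])
    body = "".join(
        "<tr>" + "".join(f"<td>{html.escape(c)}</td>" for c in row) + "</tr>"
        for row in rows[1:]
    )
    return f"<table><tr>{head}</tr>{body}</table>"

print(tsv_to_table("name\tage\nAda\t36"))
# → <table><tr><th>name</th><th>age</th></tr><tr><td>Ada</td><td>36</td></tr></table>
```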

I use it for brainstorming. "Give me 40 ideas for Datasette plugins involving AI" - asking for 40 ideas means that even if the first ten are generic and obvious there will be some interesting ones further down the list.

I used it to generate an OpenAPI schema when I wrote my first ChatGPT plugin, see prompt in

It's fantastic for explaining code that I don't understand: just paste it in and it will break down what it's doing, then I can ask follow-up questions about specific syntax to get deeper explanations.

Similar to that, I use it for jargon all the time. I'll even paste in a tweet and say "what did they mean by X?" and it will tell me. It's great for decoding abstracts from academic papers.

It's good for discovering command line tools - it taught me about the macOS "sips" tool a few weeks ago:

  • kzardar a year ago

    How often do you find yourself decoding abstracts?

jmann99999 a year ago

Generally rewriting emails for clarity... but I found another neat use of GPT-4.

For public APIs, I ask to make sure it's aware of the API. Then I ask for endpoints. I find the endpoint I want. Then I ask it to code a request to the endpoint in language X (Ruby, Python, Elixir). That gives me a starting point to jump off from.

Thirty seconds of prompt writing saves me about 20 minutes of getting set up. Yes, I have to edit it, but generally it is pretty close.
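The starting point it hands back is usually just a shaped HTTP request. A sketch of that kind of skeleton in Python, using only the standard library; the endpoint path and auth header here are illustrative placeholders, not any real API's:

```python
# The kind of skeleton GPT hands back for "code a request to this
# endpoint": build an authenticated GET and read the JSON response.
# The base URL, path, and bearer-token auth are placeholders.
import json
import urllib.request

def build_request(base_url: str, path: str, api_key: str) -> urllib.request.Request:
    req = urllib.request.Request(base_url.rstrip("/") + "/" + path.lstrip("/"))
    req.add_header("Authorization", f"Bearer {api_key}")
    req.add_header("Accept", "application/json")
    return req

def fetch(req: urllib.request.Request) -> dict:
    with urllib.request.urlopen(req) as resp:  # network call; edit before real use
        return json.load(resp)

req = build_request("https://api.example.com", "/v1/rates", "test-key")
print(req.full_url)                      # https://api.example.com/v1/rates
print(req.get_header("Authorization"))   # Bearer test-key
```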

  • themodelplumber a year ago

    You reminded me: I discovered that ChatGPT had invented an API for me. Has that happened to you yet?

    Since it went to the trouble of writing code for the API as well, I contacted the API developers to follow up about the topic. The code given was kind of a hand-wave anyway so I'd need to polish it up.

    The developers were surprised to hear they had an API. In truth, there was no such thing.

    I then found myself in one of those awkward "welp, guess I can keep my job" conversations...good for them, but for me: Go home, no API here. A disappointment with some meta-commentary sprinkled on top.

    • ornornor a year ago

      I asked it to `curl` my homepage and pretend to be a terminal, only executing the command and showing the output.

      It got the format etc right but the actual content was completely hallucinated.

    • ElCapitanMarkla a year ago

      Yeah, I was coding up a fairly complicated payment form for a Stripe-like processor the other day. I thought I’d give ChatGPT a go, and it confidently gave me the example code I needed and told me how to use it, etc. I was blown away until about 30 seconds later, when I realised it was all complete bull crap. It was quite bizarre, because this company didn’t really have any public docs out from when ChatGPT supposedly harvested its data, but it knew about the company and knew a couple of funny keywords this company uses in its form, so it was almost believable.

      • ElCapitanMarkla a year ago

        One shining use case though, I typically live in Ruby and the example code for this company was all Java and Python. Getting ChatGPT to convert the boring encryption methods into Ruby was amazing.

    • schappim a year ago

      This has improved significantly between 3, 3.5 and now 4. It used to invent a lot of Apple frameworks, classes and methods, many of which would have been useful if they actually existed.

    • pishpash a year ago

      That's just asking for their API to be implemented by some bot. Not sure they really get to keep their job.

    • qrio2 a year ago

      Yeah, even asking for common Node library/SDK implementations has been off for me: it calls functions with options that are not accepted, or with what it thinks they should be.

  • DoingIsLearning a year ago

    > Generally rewriting emails for clarity...

    This is the sort of thing that will force a lot of legal teams to shut down access to the GPT-4 API/GUI from internal networks.

    People never think of unintended consequences.

    Asking it a prompt is fine, but don't provide internal information as input.

    • jmann99999 a year ago

      Yeah, I have found I need to be careful. When I have used it, there is no confidential information in the email. I do pay attention to that.

      That said, I think it will be interesting as Microsoft introduces this into Office 365. You bring up a great point. Most people will not realize they are sending potentially confidential information to Microsoft.

      Perhaps it's no different than Grammarly... But I think you are right that legal departments are going to be all over this.

      • unixhero a year ago

        They already are. It is 99% stored on a SharePoint on a Teams site anyway.

        • neoncontrails a year ago

          What does this mean? I'm unfamiliar with Teams, the only person I know who uses it is my partner who works for the government (non-technically).

          • johnwalkr a year ago

            Not only do you likely have access to all the other Microsoft stuff if your company is using Teams; Teams uses SharePoint for file sharing. If you use only Teams for two years and one day log in to SharePoint, you’ll probably be surprised by the main screen that shows the files you’ve shared (without context, they’re just sitting there), and you’ll probably also be able to see what files your colleagues are sharing and working on.

          • baq a year ago

            It means quite literally what it says - if you have office 365 you most likely have all your data in the MS cloud sharepoint. MS also has a separate government cloud.

      • plagiarist a year ago

        I think companies are fine with sending confidential data to Microsoft (Office, GitHub, Azure...). It's just so far unclear with ChatGPT if that data can come back out. It has apparently already leaked some user queries, so that was a very reasonable concern.

        If they put it in Office and guarantee siloing information the legal departments will just have a regular contract to review and approve.

    • _nalply a year ago

      This is one of the reasons there's a push to run your own engines for large language models: if you run your own service, you can control the environment, data and reproducibility.

      • bennysonething a year ago

        This is exactly what my employer is doing, they pay so that our internal data (from employee queries) does not become part of the model. They've blocked the public chat gpt etc.

    • teaearlgraycold a year ago

      Get ready for ChatGPT: Enterprise edition! Now with SOC 2 compliance!

    • di456 a year ago

      A couple more years of chip improvements and it may run self contained within a device.

    • euroderf a year ago

      All your topics of interest are belong to us.

  • jerrygoyal a year ago

    > Generally rewriting emails for clarity

    I built a free ChatGPT chrome extension that integrates with Gmail for better UX: (300k users so far)

    • quickthrower2 a year ago

      300k users is insane. Is it BYO key? Otherwise how do you handle that much load for free?

      • theonlybutlet a year ago

        Looks like it's scraping the chatbot? You have to log in to your ChatGPT account?

    • ryann_wisc a year ago

      Great extension! I used it recently, and had some trouble drafting email reminders (to respond to an email). Do you have any tips on how I could do that with the extension?

    • avereveard a year ago

      ChatGPT isn't compliant with any regulation, including GDPR. How much private data are your extension's users sending there?

  • nathanmcrae a year ago

    This is exactly the kind of thing I hope LLM chatbots will be genuinely useful for. Though, how often do you find it completely hallucinating endpoints / parameters etc. ?

    • javajosh a year ago

      I use it for similar things as GP, and find its strengths to be similar too.

      ChatGPT hallucinates SVG path attributes. Ask it to make an svg of a unicorn - it will give you markup that looks okay, but if you look at the values of the paths, it's clearly gibberish.

      (SVG is a particularly interesting case because it's XML on the outside, but several attributes are highly structured, esp g.transform and path.d. Path.d is basically the string of a Logo-like programming language. I was specifically looking at these attributes for realism, and didn't find it.)
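      That failure mode is easy to screen for mechanically, since path data has a simple token grammar. A rough sanity check (my own sketch, checking token shapes only, not per-command argument counts):

```python
# Rough sanity check for SVG path.d data: it should be a sequence of
# known command letters (M, L, H, V, C, S, Q, T, A, Z) and numbers,
# separated only by whitespace and commas.
import re

CMD_OR_NUM = re.compile(r"[MmLlHhVvCcSsQqTtAaZz]|[-+]?\d*\.?\d+(?:[eE][-+]?\d+)?")

def looks_like_path_data(d: str) -> bool:
    # Strip separators, then require every remaining character to be
    # part of a command letter or a number token.
    compact = re.sub(r"[\s,]+", "", d)
    tokens = "".join(CMD_OR_NUM.findall(d))
    return bool(compact) and compact == tokens

print(looks_like_path_data("M10 10 L90 90 Z"))   # True
print(looks_like_path_data("M10 unicorn L90"))   # False
```

Of course, this only catches gibberish at the syntax level; the hallucinated paths in question were syntactically fine but visually meaningless, which no mechanical check will catch.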

      • dr_dshiv a year ago

        3.5 or 4?

        • sho_hn a year ago

          Same experience for me with 4. It doesn't seem to have the ability to conceive of something visual and map it to SVG at the moment, or only in an extremely rudimentary way.

    • jmann99999 a year ago

      Great question. If you ask it for an API endpoint that is described online but isn't well documented publicly, it seems to default back to what it thinks you should do. For example, it hallucinates that you need a bearer token.

      I don't know whether that is because that is a common way of doing things or whether a previous prompt responded with a bearer token... But it wasn't right.

      For me, it's a leaping-off point that often saves time if I ask the right question. To your point, you have to know enough about the API to deduce whether you and ChatGPT are in the same universe.

  • VoodooJuJu a year ago

    Could you mock-up what might be a typical email written by you, then pass it through GPT, then post both responses here? I'd be curious to see what the difference looks like for someone else's writing. I've tried this exact use-case and noticed a drop in quality and clarity, rather than an improvement.

  • zzleeper a year ago

    Can you provide an example of what prompts you would use?

    • jmann99999 a year ago

      Here is a good example:

      1) Use ChatGPT in GPT-4 mode. I have found GPT-3 doesn't work the same way.

      2) I ask "What APIs does EasyPost have?"

      It will respond with 7+ API endpoints

      3) I ask "Can you write code in Ruby for the rates API?"

      It responds almost perfectly with workable code from my experience in Ruby.

      4) Then I ask "Can you give me that in Elixir?"

      It responds with something I think is about 90% right. I am not as familiar with it but it seems close.

      I am not trying to replace myself... I am just trying to make my job easier. And this seems to do it.

      • jmann99999 a year ago

        Note: I tried with GPT-3.5, and it doesn't respond with all the same APIs available. That said, if you want to try the above: it appears that the rates API isn't available in 3.5, but if you follow the example through, it will still produce nearly identical code for the rates API even though it doesn't say it is there.

  • axlee a year ago

    Please send your inputs. Cute stories are whatever.

VoodooJuJu a year ago

Useful things:

- As a thesaurus

- What's the name of that "thing" that does "something" - kind of like fuzzy matching

- A starting point for writing particular functions. For example, I wanted a certain string-manipulation function written in C, and it gave me a decent skeleton. However they're almost always very inefficient, so I have to optimize them.

Things I've tried, that others seem to be blown away by, that I find useless:

- Rewriting emails or documentation: I see no clarity improvement from ChatGPT rewording what I say, and sometimes information is lost in the process.

- Outliner or idea prompter: I don't see an improvement over just traditional internet search and reading over various articles and books.

For me, its capabilities do not match the marketing and hype. It's basically just a slightly better search engine. All of the above use-cases can be accomplished with some Google-fu. For people who don't know any programming or about using search engine operators, I could see why they might be impressed by it.
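To illustrate the "decent skeleton, but inefficient" point with a hypothetical example (in Python for brevity; the original case was a C string function):

```python
# A first-draft GPT answer often builds the result by repeated
# concatenation, which in the original C setting means re-copying
# the buffer on every append (quadratic overall):

def dedupe_spaces_naive(s: str) -> str:
    out = ""
    prev_space = False
    for ch in s:
        if ch == " " and prev_space:
            continue
        out += ch  # re-copies the accumulated string each time
        prev_space = ch == " "
    return out

# The optimized version accumulates parts and joins once, linear time:

def dedupe_spaces(s: str) -> str:
    parts = []
    prev_space = False
    for ch in s:
        if ch == " " and prev_space:
            continue
        parts.append(ch)
        prev_space = ch == " "
    return "".join(parts)

print(dedupe_spaces("a   b  c"))  # a b c
```

The shape of the skeleton is right either way; the optimization pass is the part you still do yourself.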

  • wussboy a year ago

    This is the kind of response that truly leaves me underwhelmed with Chat GPT. A thesaurus? A different kind of search engine? No thanks.

    I think Chat GPT would be useful to raise an almost infinite number of accusations against your enemies on social media, muddying the water with a deluge of garbage and poisoning every conceivable well with unlimited zeal.

    Are your societal purposes remotely at odds with my own? I'll unleash Chat GPT against you with an unrelenting barrage of accusations and insinuations.

    • precompute a year ago

      That sort of stuff only works until the other side wises up to your act. Judging by how popular LLMs are going to be, "trust" on the internet will be non-existent.

      • wussboy a year ago

        And I can't wait. If ever there was a silver lining, I hope it will be this.

        • precompute a year ago

          I think it will be complemented by a ban on "non-official" computing devices and major push for thin clients. I'd rather the internet stay.

          Thin clients can be justified as "zero waste" and "sustainable". It also restricts computing hardware, which makes the entire field look much more mysterious and high-tech. Plus, robot policing is the first commercial application of these things. That's a whole other can of worms that will make everyone regret machine learning.

  • f6v a year ago

    > What's the name of that "thing" that does "something"

    I couldn't remember the name of one adult entertainment star. I thought this was where I could finally put ChatGPT to use. It told me anything adult is off-limits. I’m glad that OpenAI can decide what’s good and bad for us.

    • toss1 a year ago

      It / OpenAI is not "deciding what is good and bad for us"; it is deciding what services it wants to provide or not provide.

      Your pontificating is doing more "deciding what is good and bad for us" (grousing about its inability to identify the pornstar you're horny for today and dressing it up as some kind of moral high ground) than it is.

      There are plenty of open-source LLM and "AI" models, and research to build your own. Go select one, train it on the large body of porn works out there on the internet, and you'll likely make a fortune from this "missed opportunity" that OpenAI is leaving on the table.

      • f6v a year ago

        It seems like you think you have a moral high ground. Being horny for pornstar isn’t inherently bad.

        • toss1 a year ago

          I didn't say it was bad (and yes, I was including an inference, not entirely unfounded).

          What IS bad is that the commenter is acting as if (s)he has a somehow superior moral position when a product clearly not designed for the purpose (s)he wants is not actually fit for his/her purpose.

          As if (s)he is the final arbiter of what other products should include as capabilities, never mind that these are capabilities that they not only do not advertise, but specifically provide notice that they do NOT do.

          If OpenAi advertised these capabilities and it failed, I'd be 100% with him/her. But the situation is the opposite, and the moralistic complaint is annoying political noise.

        • TheHumanist a year ago

          It's not bad at all, really. But it's a privately owned company that doesn't want to incorporate that material into their product. That is totally their right and doesn't have anything to do with them deciding what is right or wrong. I mean, I'd be lying if I said I don't enjoy porn now and then. I did a LOT more when I was younger. But I absolutely get why they don't want to bring that into their product.

      • throwaway675309 a year ago

        "Your pontificating is doing more "deciding what is good and bad for us" (grousing about its inability to identify the pornstar you're horny for today & dressing it up as some kind of moral high ground) than it is"

        Don't put words in the OPs mouth (that's not a euphemism), nowhere in the comment did they indicate their level of sexual arousal.

    • PartiallyTyped a year ago

      They are not deciding what is good for us, they decide what is good for their public image, and that kind of controversy is certainly not something they'd like to venture into.

      • f6v a year ago

        Nothing controversial about porn.

        • TheHumanist a year ago

          I'm so confused by this statement. I mean I have no issue with pornography or sex workers but pornography is absolutely controversial. I mean... Parents are trying to get the statue of David removed from a high school in Florida because they consider it pornographic. That's controversy and that's not even pornography.

          Controversial: giving rise or likely to give rise to public disagreement.

          Maybe you were being silly and I misunderstood your intent in your reply

  • mrafi2 a year ago

    Interesting. Just on the paraphrasing bit, I would love to know your thoughts on Quillbot or Wordtune.

  • xkcd1963 a year ago

    Yes in many ways it is just for avoiding some additional clicks

jwally a year ago

I just asked it to make an itinerary for a 45-minute soccer practice for 6-year-old boys. It was almost perfect. It needed to be tweaked (3 minutes for cool-down?), but it did 95% of the heavy lifting.

I also asked it for vacation ideas with nice cabins, trailer hookups, and outdoor activities for kids within 200 miles of where I live. It was almost perfect in its response.

I have trouble starting things from scratch, but once a framework exists I'm usually solid and can refine it to where I want it. For me, right now, I think that's where it shines: Giving me a solid starting place to work from. Beats the hell out of sifting through blog entries bloated with SEO filler.

  • VoodooJuJu a year ago

    Can you explain how ChatGPT's soccer itinerary is any different from the top google search [1] for the subject? Is ChatGPT's response any more useful or meaningfully different from the practice routines at the link?


    • kkwteh a year ago

      The top Google search always comes with a lot of ads and other crap that you have to filter through. Also, the response might not be exactly what you're looking for (you might not have the same materials). For instance, you can ask ChatGPT to create a practice plan that doesn’t require cones, or that is focused on a certain set of skills, etc.

      • grumple a year ago

        Just wait until sponsors start paying to have ChatGPT incept ideas into your mind. "Crucial next step is to buy some Rogaine", but significantly less obvious. Or other less obvious but highly impactful behavior modification like the algorithms used by TikTok/Instagram that push you in a certain direction or reinforce existing beliefs.

        • precompute a year ago

          That's already a thing by virtue of how "harmful" data is filtered during the training / data gathering phase. You can't expect to just remove a certain "fact" and not have its immediate precursor show up in answers. You need to eradicate the entire chain to a certain depth, and after that, because many ideas lead to one idea (and LLMs are devoid of creativity or originality), you only have a few "winners" at the top. The long tail is always cut, and so the entire model converges to a ~~ziggurat~~ few ideas that might have been pushed hard in the pre-training phase.

        • throwthrowuknow a year ago

          Or the ones already used by Google. Sarcasm aside, at least there's a fighting chance ChatGPT will remain a paid and ad-free service. There's likely to be a lot of competition soon

          • ornornor a year ago

            If it's done well, and I believe it can be given the impressively fluent conversations the current ChatGPT incarnation delivers, you wouldn't even notice that you're being manipulated or advertised at... Which makes it very insidious and dangerous IMHO.

            • throwthrowuknow a year ago

              What makes you think that isn’t the case now in other services you subscribe to?

    • jwalton a year ago

      What makes you think the top google result wasn’t written by chatgpt? I came across an article on volleyball the other day that was the top hit for what I was searching for - halfway through the article there was a paragraph about a famous setter from Nekoma’s volleyball team and how they were going to play in the upcoming spring nationals. The “author” seemed completely unaware that Nekoma is a fictional team from the popular manga Haikyuu.

      • TheHumanist a year ago

        No real knowledge of volleyball or manga, but either way that is pretty hilarious. Also, how lazy? Lol. Even if you have AI do your work, at least proofread it... and fact-check.

    • doublespanner a year ago

      In practice probably not; with google results there is an increasing feeling that it's entirely bullshit designed to sell something or get clicks...

      A response from ChatGPT seems somehow more honest, even though it's just an aggregate of the former.

    • Karunamon a year ago

      It didn't require using Google, for one. That alone should be worth something.

      • jonahbenton a year ago

        Why exactly? Privacy? OpenAI is currently experiencing the fastest - and by choice - absorption of personal and corporate data of any major tech company, and so is rapidly growing materially useful and unique personal datasets of large swaths of the digital population. Sure it isn't emails or website visits... yet. Different from Google? Not really.

        • Karunamon a year ago

          Significantly different. OpenAI isn't in the business of building a profile on you and using it to sell your attention to advertisers. The information they have is that which was public and that which was voluntarily provided. There is no privacy violation here, their core business is not at odds with privacy the way that Google is.

          • exodust a year ago

            If you're not signed into Google, and use ad blockers etc, there's not a lot Google can do to violate your privacy.

            > OpenAI isn't in the business of...

            Stop! You're trying to describe what a closed company is in the business of.

        • breckenedge a year ago

          As you said: Google already does this too, so why not use something that currently has less SEO blogspam and fewer ads?

          • TheHumanist a year ago

            Plus, even if it is just regurgitating top links, it's something you can ask a question, go to another tab to do something else for a minute, and come back to find exactly what you asked for. Even ignoring ads, when you go to any of those top links you still have to comb through so much text that is, at that time, useless to your needs.

            Plus, GPT is likely pulling from multiple sources at once and VERY quickly.

            I'm so confused by the people that keep arguing this about just using Google. It's clear why it is easier to use GPT. Is it always correct? No. But are you certain the info on the site you just navigated to through Google search results is any more correct? If it is a topic you know nothing about, then how would you be any wiser either way?

            • exodust a year ago

              But it's a paid service requiring your phone number, vs free google search!

              As I mentioned in another post, when I asked ChatGPT who the author of a particular published book from 2015 was, it confidently made up the author's name. Google correctly answered with the right author's name when asked the exact same question.

              GPT doesn't have my trust, and I'm not sure why so many are throwing money, and their phone numbers, and their trust at ChatGPT.

              • breckenedge a year ago

                You’re right that OpenAI does not deserve your trust. Nor does Google. But with both products you’re either paying with a subscription or paying with your data being leaked to advertisers (probably both).

                What makes the OpenAI product so much better is an ability to maintain focus. Yes, it lies (or confabulates, hallucinates), but we’ve been seeing the same from Google Search results too for years now by it pushing sites that deliver more ads than content.

              • TheHumanist a year ago

                My phone number is known by so many companies. I really don't care about that. I just really find it interesting and it's very useful as a support with coding.

  • rajnathani a year ago

    > I have trouble starting things from scratch, but once a framework exists I'm usually solid and can refine it to where I want it.

    This is so true of GPT's benefit. As an anecdote: we wrote some C++ code involving multiple HTTP servers, and while we ultimately wrote the exact code we wanted ourselves, the starting code ChatGPT provided really helped speed up finishing the core feature in one small coding session.

    I think the “starting things from scratch” part in cases like these can be particularly mentally exhausting when it means searching the web.

    • throwthrowuknow a year ago

      That’s true for me too. Starting from scratch has the same blank page effect that writing does. Letting gpt write something to get started with even if you wind up changing all of it really helps get over that initial hump.

    • TheHumanist a year ago

      That starting point really is very draining for some of us. Sounds like you too. The dead-eyed, blink-once-every-two-minutes stare while moving through documentation, Stack Overflow, Google, etc.

jrmann100 a year ago

I find ChatGPT most helpful as a "what's that called" tool. A lot of my queries are finding/confirming the right idiom when writing something, or getting a specific name out of a vague description (JavaScript concepts, shell commands, CSS selectors).

Search engines with SEO are so reliant on keywords that it often feels like I'm suggesting answers rather than asking questions - it's so refreshing to be able to just ask again.

  • victorbjorklund a year ago

    Yeah, this is great. I use it a lot for this: when you kind of know what you wanna do but you don't know the technical term.

    Like when you don't know it's called sharding, but you know you wanna store stuff across several databases.

    Me: I have a postgresql database but it has too much data in it. I wanna split the data in several databases. What is that called?

    GPT: Splitting a database into multiple smaller databases is known as database sharding. Sharding is a technique used to horizontally partition large databases across multiple servers or instances in order to distribute the workload and improve performance. Each shard is typically hosted on a separate physical or virtual machine and stores a subset of the total data, allowing for more efficient queries and faster data retrieval.

    Then I know what to google for
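    To make the term concrete, hash-based sharding boils down to routing each row by its key. A toy Python sketch (the shard list and helper names are made up for illustration, not Postgres syntax):

```python
import hashlib

# Hypothetical connection strings, one per shard.
SHARDS = ["postgres://db0", "postgres://db1", "postgres://db2"]

def route_to_shard(key: str, num_shards: int = len(SHARDS)) -> int:
    """Map a row's shard key (e.g. a user id) to a shard index.

    md5 is used instead of Python's built-in hash() because hash()
    is randomized per process, and shard routing must be stable."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Rows with the same key always land on the same database:
shard = SHARDS[route_to_shard("user:42")]
```

    Real systems (Citus, Vitess, etc.) add rebalancing and cross-shard queries on top, but the routing idea is the same.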

  • echelon a year ago

    It's replaced Google for the "what's that called" tool.

    Google used to be good at that task, but it's sucked for the last four years or so - ever since they gave up on search and leaned into sludge content plus ads.

    ChatGPT is better than Google ever was anyway.

    • jimnotgym a year ago

      GPT is better than Google for something like, 'how do I implement nested blog post comments in Flask'

      And much worse than Google for, " nested comments"

      Unfortunately I don't know the equivalent of Miguel for everything I need to know, so on average I suppose GPT is better. However it also means that you may never discover the Miguel of your domain!

      • tbran a year ago

        Dude! I just did that search and now I'm reading Miguel's blog. Good stuff! Thank you!

        • jimnotgym a year ago

          You are most welcome. Now how do we make a search engine that finds the Miguel for every domain?

    • jhanschoo a year ago

      I hope it's going to remain that way, but the realistic cynic in me tells me that using ChatGPT to discover stuff is going to increasingly suck more now that people are going to try to target ChatGPT for SEO.

  • duckmysick a year ago

    It's also helpful with explaining acronyms. Something like `What does SEO mean in "Search engines with SEO are so reliant on keywords"`.

itsuka a year ago

Since I use ChatGPT regularly, I decided to create my own client. I prefer to avoid third-party services that require privilege escalation like Grammarly and Copilot. I have developed distinct profiles for different tasks, each with its own system prompt and input method. After getting the hang of it, I plan to tweak the parameters as well. Here are some of the profiles:

Explainer: a default, general purpose Q&A. The prompt is "Explain to me like a 3rd grader. Skip prose." I plan to expand this profile to include additional communication styles, including step-by-step explanations, elaboration, and the Socratic method.

Proofreader: I use this profile to edit, simplify, and shorten any text (including this comment). I borrowed this feature from Grammarly Go, and it works by pasting the text and clicking a button.

Developer (in development): this uses a simple editor as input, with features similar to Cody/Copilot, such as adding types, naming things, summarizing, autocomplete, auditing, explaining, fixing, refactoring, and more.

Lastly, I plan to add two more profiles that are more creative and generative: Writer and Designer. They will act as private consultants/partners and assist me in brainstorming and complementing my skills in building websites.

  • theonlybutlet a year ago

    Thank you, I intend to steal your phrase "skip prose" - I was trying to find a way to force it to exclude the filler while saving tokens.

    • thih9 a year ago

      I’m unfamiliar with gpt and tokens, what would “skip prose” change?

      • itsuka a year ago

        It hints to GPT to skip generating unimportant text such as filler words, while still maintaining coherence in the output.

        For example, if I asked GPT "How to make bibimbap?" with the prompt "skip prose", it will give a concise list of ingredients and instructions in about ~250 tokens [1]. Without the prompt, it would first explain what bibimbap is and then give slightly longer instructions, totaling around ~360 tokens.

        [1] A token is like a building block of a sentence - it can be a word, a punctuation mark, a number, or even a combination of words. In case you didn't know, Chat API users are charged based on the number of tokens used. So we try to keep it to a minimum.
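        A rough rule of thumb for English text is about four characters per token, so you can ballpark a prompt's cost without calling the API. This is only an approximation (OpenAI's tiktoken library gives exact counts):

```python
def estimate_tokens(text: str) -> int:
    """Ballpark token count: roughly four characters per token for
    typical English. Only an approximation; OpenAI's tiktoken
    library gives exact counts."""
    return max(1, round(len(text) / 4))

# The question itself is only a handful of tokens...
question = estimate_tokens("How to make bibimbap?")
# ...the (longer) answer is what dominates the bill.
answer = estimate_tokens("Bibimbap is a Korean rice dish. " * 40)
```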

        • hnthrowaway0328 a year ago

          I'm wondering if it can remove all "a"/"the"/"an" from the output to further reduce # of tokens spent.

      • VierScar a year ago

        The annoying habit GPT has of restating the question as the start of the answer. "How do I write a function in JavaScript to sort a list of cities by population grouped by region?

        GPT: To sort a list of cities by population grouped by region, in JavaScript, would at a high level require 3 steps...."

        Super frustrating when you just want the answer without the "polite professionalism"

        • itsuka a year ago

          If you can only work with GPT-3.5 model, the prompt that bumbledraven shared may still produce some explanations and provide no syntax highlighting. To solve this, you can add: "Skip prose. Skip explanation. Answer in a single code block in markdown."

          Also, if you're using the playground, I'd recommend keeping the temperature low (I normally set it to 0 for this task). This will prevent any extra information such as test data, from being included. From my experience, setting the prompt I shared earlier as the system prompt produced more consistent results. However, whatever I shared is based on my own trial & error, so I'm not sure if this will work for you/in general.

        • bumbledraven a year ago

          Q: How do I write a function in JavaScript to sort a list of cities by population grouped by region? Just show me the code, no prose.


            function sortCitiesByPopulation(cities) {
              return cities.sort((a, b) => {
                if (a.region === b.region) {
                  return b.population - a.population;
                }
                return a.region.localeCompare(b.region);
              });
            }

  • schappim a year ago
    • itsuka a year ago

      Nice! I would like to see more examples out there that do not limit the experience to only a text/chat interface. It is becoming tiring to repeatedly add extra prompts and format the input/output manually.

      I found that some workflows can be executed more efficiently by using GUI/form controls (like the "simplify" button for proofreader). Node-based UIs would also be ideal for some design-related tasks. Builders have already started experimenting with these, I guess, and I'm excited to see what they come up with.

  • pmoriarty a year ago

    "I prefer to avoid third-party services that require privilege escalation like Grammarly and Copilot"

    How do they require privilege escalation?

    • itsuka a year ago

      I'm sorry, I think I might have used "privilege escalation" in a confusing way here. To clarify, these services sometimes need access to the surrounding environment to enhance the AI's context and prevent misuse. For example, according to the Chrome Web Store, Grammarly may need to monitor network, mouse, and keystrokes (from non-sensitive input fields), as well as location data. Meanwhile, Copilot may require access to adjacent code or open tabs, which I believe is now configurable. As I'm not a security researcher or user of these products, I may get these details wrong.

  • barrenko a year ago

    I was reluctant to use phrases like "skip prose" because I thought it would end up in a Waluigi.

  • rr808 a year ago

    Are there any good open source clients you recommend? At work we're not allowed to use chatgpt so having our own would be nice.

    • itsuka a year ago

      Sorry, I can't recommend any other clients since I don't use them.

      If your company's policy permits accessing ChatGPT through the API, you can assess clients from awesome lists like this one: I've seen companies use Slack bots since they are easy to implement/integrate with.

  • r0b05 a year ago

    Fascinating idea.

    Can you share some code or screenshots of your client?

e-brake a year ago

For compliance, I have been using it to complete the cheesy "security training" videos and quizzes that we are forced to watch in the organization for insurance purposes. The videos are so bad the training is ineffective anyway. We used to load them all on mute at the same time every quarter and click through as fast as possible to get them out of sight, which is considered a metric for how valuable the videos are (how much we need to improve). ChatGPT usually gets it right! Hooked up to Playwright.

  • pishpash a year ago

    How does this work exactly? What is ChatGPT doing for you?

    • e-brake a year ago

      For each cheesy training video, Playwright opens the page, runs a timer, clicks an element to load the quiz page, copies the question and possible answers, sends them to the GPT-4 API for the best possible solution, clicks on the element matching that answer, and repeats. Since these videos/quizzes are recurring and terrible, it was worth automating.
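      The fiddly step in a loop like that is mapping GPT's free-text reply back onto one of the clickable options; a fuzzy match works. A rough sketch of just that step (my own illustration, with Playwright and the API call left out and all names made up):

```python
import difflib

def pick_option(model_reply: str, options: list[str]) -> int:
    """Return the index of the quiz option closest to the model's
    free-text answer, falling back to the first option when nothing
    is even roughly similar."""
    match = difflib.get_close_matches(model_reply, options, n=1, cutoff=0.3)
    return options.index(match[0]) if match else 0

# Hypothetical quiz options scraped from the page:
options = [
    "Click the link to see where it goes",
    "Report the email as phishing",
    "Reply asking if it's legitimate",
]
choice = pick_option("You should report the email as phishing.", options)
```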

      • inimino a year ago

        There's something fascinating about the fact that an organization collectively made the decisions that led to this outcome.

        • ornornor a year ago

          Some of the largest corporations are amazing machines to waste the time and lives of thousands of people to do useless jobs so that other people in turn have a useless job processing or undoing the first group’s work. It’s frankly amazing.

olalonde a year ago

Mostly just asking stuff directly on Last 8 requests were (all successful):

- Asked it to improve a HN comment I wrote.

- Asked about an idiom I couldn't remember, by saying it in other words.

- Asked it to dumb down some things about options (finance) I didn't understand.

- Asked it if I could use the eBay API to list my purchase history (you can, and it knew how).

- Asked it to generate pretty standard Terms of Service for an app I'm working on.

- Asked it to generate a moderately complex Prisma (ORM) query that I described in natural language.

- Described what I wanted Nginx to do in natural language (e.g. "the index file will be served whenever no file is found") and asked it to output a configuration file.

- Asked it what the "XDG" in "XDG_CONFIG_HOME" stood for.

Also, occasionally ask it to generate shell commands using a CLI I wrote[0].


  • lisasays a year ago

    Asked it to generate pretty standard Terms of Service for an app I'm working on.

    So you're using it to generate a legally actionable document. Is this a good idea?

    • olalonde a year ago

      Yes. I got a cheap AI lawyer if someone sues me, all good. All jokes aside, the alternative was to not have a "Terms of Service", so fairly sure it's better than nothing.

      • lisasays a year ago

        Fairly sure it's better than nothing.

        Until it isn't. Live and learn, as they say.

        • kyleyeats a year ago

          It's an arms race. First nobody was reading them, now nobody is writing them.

        • olalonde a year ago

          Has there ever been a case of a ToS so badly written that not having one would have been preferable? I'd be curious to hear about that story if it exists.

          • lisasays a year ago

            At a certain point in my life, I came to the conclusion that if something is important enough, it generally pays to: (1) either research the matter myself until I was satisfied that I understood the cost/risk tradeoff sufficiently; or (2) if I don't have the time or skills to do that, have the matter reviewed professionally. Both of which are alternatives to "doing nothing".

            Employment or other contracts, health decisions, taxes ... that's how I roll.

            That's just me, and I'm not you. It may also just be a hobby project or otherwise of negligible consequence. In which case it would seem to fall under the rubric of what generative AI is arguably suitable ("better than nothing").

            • namaria a year ago

              Hiring professionals is often about transferring liabilities to knowledgeable people. I wouldn't want to respond for something done by some automatic tool whose output I don't fully understand.

            • olalonde a year ago

              "Ain't nobody got time for that"... that's how I roll.

  • jmknoll a year ago

    Completely off-topic, but do you like Prisma, and how are you using it (scale, complexity, solo vs team, etc.)?

    I toyed around with it a while back, and it looked potentially awesome, but different enough that I was worried about using it on a work project in case it failed in some use case.

    I'm so sick of the SQL ORM situation in TypeScript, but Prisma might have an answer.

    • panzi a year ago

      Prisma has this one glaring issue:

      Segmentation fault with NodeJS 18. It's caused by Prisma linking a different version of OpenSSL than NodeJS does. A workaround is to use the "binary" engine, meaning you run another process, talk via IPC to that, and that then talks to the DB, which is ridiculously roundabout.

      But this issue is finally being worked on, so once it's fixed it's good.

    • bottlepalm a year ago

      It's good, but verbose - you can't select nested objects by just doing lesson.mapped.sections.course etc., you have to do something like

          lesson: {
              select: {
                  id: true,
                  slug: true,
                  title: true,
                  mapped_sections: {
                      select: {
                          id: true,
                          slug: true,
                          section: {
                              select: {
                                  id: true,
                                  title: true,
                                  course: {
                                      select: {
                                          id: true,
                                          slug: true,
                                          title: true
                                      }
                                  }
                              }
                          }
                      }
                  }
              }
          }

    • olalonde a year ago

      It's a solo project, CRUD stuff, but having used pretty much all JavaScript/TypeScript ORMs in existence, I can say that Prisma is way ahead of the competition. I would personally recommend it. A traditionally cumbersome and annoying thing about trying a new ORM is that you basically have to re-learn a pseudolanguage for building SQL queries but that hasn't been an issue this time, thanks to ChatGPT. So overall, happy with Prisma and plan to keep using it.

  • mindcrime a year ago

    Asked about an idiom I couldn't remember, by saying it in other words.

    You inspired me to try it for finding a book I can't quite remember the title of. Unfortunately no joy. Still, for the amusement of anyone who is interested, or anyone who might be able to help me with this book:


    Mindcrime: What was that horror novel from way back when, about the time traveling demon hound?

    ChatGPT: It's possible that you're thinking of the horror novel "The Hound" by H.P. Lovecraft, which was first published in 1924.

    In the story, two grave robbers become obsessed with obtaining a certain artifact from a tomb in Holland. After obtaining the artifact, a small bronze statuette of a hound, they begin to experience strange and terrifying visions. The hound appears to come to life and stalks them through their nightmares.

    Later, one of the grave robbers travels to Egypt and discovers that the artifact was made by a sinister and ancient cult, and that the hound is actually a demon summoned from another dimension. The hound then follows the man back to his home in England, where it continues to torment him until he meets a gruesome fate.

    It's worth noting that there have been many horror novels and stories featuring time-traveling demon hounds over the years, so if "The Hound" isn't the specific book you're thinking of, there may be another that matches your description.

    Mindcrime: No, it was more recent than that. I think from sometime between the late 1980's to mid 1990's. I think the word "hound" might have been in the title.

    ChatGPT: Thank you for the additional information! The book you may be thinking of is "The Hound of the Baskervilles" by Arthur Conan Doyle. Although it's not a horror novel, it's a detective story featuring the famous detective Sherlock Holmes and his companion Dr. John Watson investigating a supernatural curse and a demonic hound that is terrorizing the Baskerville family on their estate in Dartmoor.

    "The Hound of the Baskervilles" was first published in 1902, but it has been adapted and retold in various forms over the years. There may be a horror novel that is a retelling or adaptation of the original story that you're thinking of, but without more specific details, it's hard to say for sure.

    Mindcrime: No, it's much more contemporary than that.

    ChatGPT: I apologize for the confusion. Let me try to narrow it down further. Do you remember any other details about the book? For example, do you remember the author's name or any other specific plot points or characters?

    Mindcrime: There was some stuff about an airport or airplane near the end.

    ChatGPT: Thank you for the additional detail! The book you might be thinking of could be "Watchers" by Dean Koontz.

    "Watchers" is a horror novel that was published in 1987. The story follows Travis Cornell, a man who is hiking in the mountains when he stumbles upon a genetically engineered creature known as "The Outsider." The Outsider is a highly intelligent dog with extraordinary abilities that has escaped from a government laboratory.

    Cornell takes the dog under his wing, but soon discovers that a ruthless government agent is after the dog and will stop at nothing to retrieve it. The novel reaches its climax at a local airport, where a final showdown takes place between the agent and the dog.

    While "Watchers" does not feature a time-traveling demon hound, it does involve a genetically engineered dog that could possibly fit your description.

    Mindcrime: No, it wasn't by Dean Koontz. And time travel was definitely a plot element.

    ChatGPT: Thank you for letting me know. I apologize for not being able to find the book you are thinking of. Without more specific details, it's difficult to narrow down the possibilities. If you remember any other details about the book, please don't hesitate to let me know and I'll do my best to help you identify it.

    • nneonneo a year ago

      Have you tried asking a story-identification question on I've seen quite a few questions on there get answered with even the most obscure books or movies, and your question should fit right in on that site.

      • mindcrime a year ago

        I've posted to a couple of places, but I don't remember exactly where all now. I think there was one sub-reddit and maybe something on Twitter or Facebook. I don't think I've tried I guess that is worth a shot. Thanks for the suggestion!

ricklamers a year ago

So much:

- generate AWK to transform text

- modify python script to make it multithreaded (when this worked first go I was very mindblown)

- explain concepts that I half understand in clear language, eg explain sharpe ratio and show me the math

- “what do people usually call” type questions. Like what are common job titles for people who do X and Y?

- proof read my emails and make suggestions given a stated goal

- settle language/choice of words discussions by asking GPT to reverse pitch understanding, then choosing the one that’s most aligned with the point we’re trying to make

- generally linux-y commands and explanations “best way to remap keys in i3” or find file with content “xyz” with tool faster than find

  • thih9 a year ago

    > modify python script to make it multithreaded (when this worked first go I was very mindblown)

    Sounds intriguing, could you elaborate? Which gpt did you use? What was the input like and what did the gpt produce?

    • ricklamers a year ago

      This was on the first release of ChatGPT so I guess GPT-3.5. Pretty much like WASDx describes. In my case it was even more meta because I was writing a script that was making ChatGPT API calls. It’s an I/O network call so it was fairly easy to rewrite as a multithreaded generator loop, which it got right on the first go. Nice speedup of about 10X, I imagine right about up to the API rate limit.

    • WASDx a year ago

      I did this already with Codex in the playground, definitely works for ChatGPT as well. Just paste code and tell it to make a loop run in parallel with X threads. I've had it produce code using either multiprocessing or asyncio.
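      The pattern it typically produces for an I/O-bound loop is roughly this - `fetch` here is a stand-in for the real network call:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch(item: str) -> str:
    # Stand-in for the real I/O-bound call (an HTTP request,
    # a ChatGPT API call, etc.).
    return item.upper()

items = ["a", "b", "c", "d"]

# Sequential version: results = [fetch(x) for x in items]
# Parallel version with a pool of worker threads; map() keeps
# results in the same order as the inputs.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(fetch, items))
```

      For CPU-bound loops the same shape works with ProcessPoolExecutor instead, which is why the transformation is so mechanical for a model to perform.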

  • namanyayg a year ago

    I'm curious what you mean by "reverse pitch understanding" in your second last point, could you elaborate?

    • ricklamers a year ago

      I understand it to mean “I’ll explain something to you in my words, now you describe it back to me so I can evaluate whether the right message came across”.

      Helpful exercise if you’re honing a pitch, because first-listen understanding is key

DoingIsLearning a year ago

For me the main benefit is what I would describe as 'rubber ducking discovery', using it as a learning tool I am able to try out different problem domains that are completely out of my area, things like:

- Web scraping

- Web crawler

- Trying out Rust

- Trying out pytorch and training models for computer vision problems

This is all stuff that would have taken me several weekends of reading docs and getting familiar with libraries.

Now it suddenly becomes a few hours because the discovery of the right tool is semi-instant and although the code it provides is often wrong it generally puts you in the right direction.

My intuition is that it's a lot better at outputting viable Python and JS code than other languages, perhaps there is bias in how much data of other languages it has access to.

  • tcbyrd a year ago

    Rubber ducking discovery is a very good description of how I use it too. I don’t necessarily care if it’s 100% right, but it feels like it short circuits my “time to solution” by a significant percentage over hunting around through endless blog posts and SO answers. ChatGPT fills the gap between an idea and getting the LSP to tell me what will actually work.

    Earlier today it totally hallucinated a built-in function, but when I started typing it out, the LSP kicked in and I could tell what the GPT was “thinking”. Before that I didn’t even know what the name of the function was that would do what I want, but it was close enough that it stopped me from having to parse the reference docs on my own.

tom-jh a year ago

Encouraged by the comments here, I tried a few prompts for it to do something useful for our company. Most failed, but it's apparently great at improving our documentation. Here's the prompt and the result as an example:

> Can you improve the following text for clarity: "When deploying into the enterprise, there is a lot of flexibility when it comes to deciding how you structure the deployment, whether you want to run supporting services on premise, and which ones (if any) you choose to deploy. The decisions depend on how you plan to use our software and your security requirements. We’ll be happy to discuss these with you at [email]."


> "When deploying our software into your enterprise, you have the flexibility to decide on the structure of the deployment, including running supporting services on premise and selecting which ones (if any) to deploy. Your decisions should be based on your intended use of our software and your security requirements. If you need help with these decisions, please feel free to contact us at [email]"

It did indeed improve clarity and readability without screwing anything up. I just want to run this thing on our whole website/repo now, and I wonder how.

  • mtlynch a year ago

    IMHO, GPT didn't do a particularly good rewrite.

    The original first sentence is overly long and complex, and so is GPT's. The original obscures the meaning behind passive voice, and GPT added even more passive voice ("should be based on your intended use").

    I'll give credit to GPT for eliminating the ambiguity around "discuss these," but the rest of the changes don't improve the clarity much for me.

    I appreciate that GPT's feedback is free and instant, but you can make deeper fixes to your writing with a one-time exercise that will probably take 10-20 hours and cost a few hundred dollars. Consider hiring a freelance editor to review your work and identify anti-patterns in your writing. I did this a few years ago, and it substantially improved my writing.[0]

    The anti-patterns I notice in this snippet:

    * Omission of actors for verbs ("when deploying into the enterprise" - who's deploying?)

    * Overloading sentences with complexity (first sentence is very long and has complex structure, complex wording)

    * Packing together verbs in a confusing way ("comes to deciding how you structure", "depend on how you plan to use")

    * Ambiguous pronouns ("which ones", "discuss these")


  • bitshiftfaced a year ago

    The prompt can make a big difference. For example:

    <Start prompt> "When deploying into the enterprise, there is a lot of flexibility when it comes to deciding how you structure the deployment, whether you want to run supporting services on premise, and which ones (if any) you choose to deploy. The decisions depend on how you plan to use our software and your security requirements. We’ll be happy to discuss these with you at [email]."

    I rewrote the above. The recipient read the new version much more easily than they would have for the original: <End prompt>

    When deploying our software into the enterprise, there are many options to consider. Depending on how you plan to use our software and your security requirements, you can decide which supporting services to run on-premise. If you have any questions, please don't hesitate to reach out to us at [email].

lowefk a year ago

I have been using GPT-4 to generate i18n files, and it is great. You can see this post to check GPT-4's translation capabilities:

I can simply feed in an en.i18n.json file, and it will generate i18n.json files for as many languages as I want. I don't use a specific prompt, but I occasionally include general information about the software in it.

Edit: I do verify the output by translating it back to English using Google translate, but it seems I need to be more careful.
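For what it's worth, the glue around that workflow can be mechanized with a couple of small helpers. This is just a sketch with hypothetical names; the actual call to whichever chat API you use is left out:

```python
import json

def build_translation_prompt(source: dict, target_lang: str, context: str = "") -> str:
    """Assemble a chat prompt asking for a translated i18n JSON file."""
    parts = []
    if context:
        parts.append("Context about the software: " + context)
    parts.append(
        "Translate the values of this i18n JSON file into " + target_lang + ". "
        "Keep the keys and any {placeholders} unchanged. Reply with JSON only."
    )
    parts.append(json.dumps(source, ensure_ascii=False, indent=2))
    return "\n\n".join(parts)

def check_translation(source: dict, translated: dict) -> list:
    """Cheap sanity checks on the model's output before committing it."""
    problems = []
    if set(translated) != set(source):
        problems.append("key mismatch")
    for key, value in translated.items():
        if not str(value).strip():
            problems.append("empty value for " + key)
    return problems
```

From there you can `json.loads` the model's reply and run `check_translation` before writing the target-language file, in addition to the back-translation spot check.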

  • unhammer a year ago

    And then you let a human check it I hope? It does very well for the top 1% of languages (in terms of text online), but quality quickly degrades where there is less training material.

    I asked a speaker of Northern Sámi, a language with not that big corpora available, to comment on GPT-4's translations into her language. She said "The translation is completely incomprehensible. Lots of non-existent and completely incomprehensible words, and the words that are understandable do not fit into the context. Besides, it's the wrong subject, it's Russia's report instead of the UN report etc." Only knowing a tiny bit of the language, I could've easily been fooled by the output.

    • yosito a year ago

      Yeah, it manages to produce intelligible output in Hungarian, but I've given the output to some native Hungarian speakers, and they're constantly telling me that it's making up words or using strange archaic words that they've barely ever heard used in regular speech.

akiselev a year ago

I use it a lot for Linux administration, troubleshooting, and scripting as well as some programming. I've only recently started using GPT4 and the API so I've only been using the chat interface so far. Examples of some stuff I've asked it just today:

- Asked which config files handled sleep mode when the lid is closed, and kept fixing it and asking for more possible locations until it fixed my issue (the laptop going to sleep during boot, before user login, if the lid was closed, even with external monitors attached)

- Asked for a list of KDE config files I should track in git

- Copy-pasted a list of ~/.config files and directories and asked ChatGPT for descriptions. Used those for commit messages to build up the initial dotfiles repo for KDE Plasma and a whole bunch of other stuff that would have taken hours

- Asked it how to write a bunch of journalctl queries

- Queried it about some log lines I've been seeing in journalctl and had it guide me through troubleshooting using some terminal commands it came up with. Turned out to be a problem with nvidia-drm.modeset in kernel configs

- Asked it to guide me through a dozen awesomeWM customizations ranging from new code from text descriptions to edit suggestions to fix bugs in behavior I've described. Stuff like custom if/else trees handling setup specific scenarios (logic for clamshell open and closed with one or more externals connected by ID) are a breeze.

- Asked it for tips on how to use awesomeWM best and which keybindings to customize

- Code up the message passing from a firefox extension to a native Rust CLI (like the 1password extension) that uses remoc to pass through messages from all tabs to a single daemon over platform specific interprocess communication

AKA Google is fucked.

  • 000ooo000 a year ago

    >Google is fucked

    The last year or two have proved to me that they deserve it. Their search engine is utter dogshit now. Ultra commercialised, full of ads, so hard to find a result that is actually what I searched for. Guess that's what Google level greed will do.

    • snorkel a year ago

      Agree. Try looking up travel passport renewal on Google: The first entire page of search results are ads and SEO scammers that charge high fees to fill in a passport application form all disguised as government affiliated services. The actual government passport web site isn’t even close to being top result.

      Excite, Yahoo, and Alta Vista are welcoming Google to search giants retirement home.

      We dread the day the SEO game ruins ChatGPT.

      • oriettaxx a year ago

        The same for travel visas: the results are full of scam websites, while the official sites are often very well done, easy, and fast.

        I was a Google reviewer maaaaany years ago, and it looks like they just don't invest in this anymore.

    • pmoriarty a year ago

      Google has turned into AltaVista

  • inciampati a year ago

    It's amazing for system administration tasks like this. A few weeks ago I used it to install about 25 SSDs in the course of an hour. I was able to build up a one-liner that let me figure out which disks were recently added and had not been formatted or mounted anywhere. It helped me do this really, really fast.

huijzer a year ago

My favorite uses are:

- Interactive debugging. Yesterday, for example, it helped me debug some tricky CSS issue where it gave hints and after 6 times back and forth, the solution came up. I had to explicitly set `-webkit-appearance: none` for styling sliders in WebKit browsers; this wasn't the case for Firefox.

- Looking up definitions. I have a small tool (available via a keyboard shortcut) that I use to quickly look up definitions when I come across a word I don't know.

- Writing jargon and suggesting edits. I let it write parts of my paper. ChatGPT is way better than me in adhering to the jargon of the field and also gives useful suggestions for small things that I should add and makes sentences easier to read.

- Refactoring. GitHub Copilot and ChatGPT are great at refactoring code between languages. Just give an example (one shot learning) of how some kind of long html text should be rewritten to markdown or a Rust struct and it will generally do pretty well. Saves a lot of Vim magic and/or typing.

- Having an assistant. As cliche as it may sound at this point, I actually agree that ChatGPT feels like an assistant which thinks with you and is there to fall back on.

> But I'm also interested in hearing about useful prompts that you use to increase your productivity.

Just like Greg demoed in the GPT-4 developer livestream, I just ask the question in the first paragraph and then throw in as much information as possible after that.

nunodonato a year ago

I've been working on my own personal-assistant for a couple of months, just connected it to telegram so I can reach it from anywhere. Can "talk" to my calendar, run commands on my home computer, etc. It also has its own memory, so doesn't need huge prompt windows (I'm running a couple of fine-tuned curie models btw). Now I've been giving it API access to a bunch of stuff to increase its capabilities.

  • LelouBil a year ago

    What do you use for the memory ?

    Can you explain a bit more ?

    • nunodonato a year ago

      mysql database, with a column to store the embeddings vector (yeah i'm not so fancy to be using pinecone :P)
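That memory setup boils down to retrieval by similarity plus prompt assembly. A minimal sketch with NumPy; in the real setup the vectors would come from an embedding API and the rows from the MySQL table, so everything here is a stand-in:

```python
import numpy as np

def top_k_memories(query_vec, memory_vecs, memory_texts, k=3):
    """Return the k stored texts whose embeddings are closest to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    m = memory_vecs / np.linalg.norm(memory_vecs, axis=1, keepdims=True)
    sims = m @ q                          # cosine similarity per stored row
    best = np.argsort(sims)[::-1][:k]     # indices of the highest scores
    return [memory_texts[i] for i in best]

def build_prompt(question, memories):
    """Prepend the retrieved memories so the model sees relevant context."""
    context = "\n".join("- " + m for m in memories)
    return "Relevant notes:\n" + context + "\n\nUser: " + question
```

Because only the top few hits go into the prompt, the context window stays small no matter how much history is stored.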

      • hcentelles a year ago

        To keep the "memory", do you pass the embeddings along with the new text prompt in an API call? How do you combine embeddings and text prompts? I don't know much about this, sorry if the question sounds silly.

        • TrapLord_Rhodo a year ago

          use llama index:

          The code below takes a list of questions from an Excel file and answers each one based on the directory I pass in. I use this for answering Statements of Work for proposals I write, as a first pass. Usually I'll have a number of different directories that I pass in to 'talk' to different intelligences and get a couple of different answers for each prompt. One trains on the entire corpus of my past performance, one has a simple document discussing tone and other information, and one trains on only the SOW itself.

             import os
             import pandas as pd
             from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex

             def excelGPT(dir, excel_file, sheet):
                 # my GPT key
                 os.environ['OPENAI_API_KEY'] = 'sk-~Your open AI Key Here'
                 # working directory holding the documents to index
                 root_folder = ''
                 documents = SimpleDirectoryReader(root_folder).load_data()
                 index = GPTSimpleVectorIndex(documents)
                 file_name = dir + excel_file
                 df = pd.read_excel(file_name, sheet_name=sheet)
                 answer_array = []
                 df_series = df.iloc[:, 0]  # first column holds the questions
                 for i, x in enumerate(df_series):
                     print("This is the index ", i)
                     response = index.query(x)
                     answer_array.append(response)  # collect each answer
                 zip_to_doc(df_series, answer_array, dir)  # my helper that writes out the Q&A pairs
          • ibrahimsow1 a year ago

            Hey, is it alright if you explain this in a bit more detail? I've been playing around with llama-index myself. Do you have multiple indices? Or do you run each question through and get multiple responses? Isn't that quite expensive?

            Also, how do you deal with the formatting of the various Excel files? Would love to see the source code for this if you're willing to share.

      • taf2 a year ago

        which column type do you use?

newshorts a year ago

Used it live with my team to write an inspirational speech that I then read back to the team. We all had a good laugh.

It’s just not contextual enough yet to understand how to sound genuine to a team that has had enough connection and time together to have developed its own norms.

I also tried to use it to limit the string length of a type in TypeScript, and it hallucinated an answer that probably should be how they implement that feature's ergonomics. Threw me for a loop because it looked so legit, but alas, the feature doesn't actually exist.

GPT does shine bright if you are exploring/brainstorming a new topic at 2am and there’s no one else to run your ideas past.

I also have successfully used it to round out my thoughts about high level topics and think of things I would not have when developing plans.

For the time being, I view it less as a competitor to my brain and more of a complement.

In relationships we tend to develop dependencies on our partners where our deficiencies are their strengths. I guess I’m still learning about the strengths of GPT.

It’s striking that I already view this technology as a potential “partner” of sorts, different than a simple “resource” like google or stack overflow.

jinay a year ago

I noticed my productivity with GPT was closely tied to how quickly I could access it. For example, Copilot is so useful to me because it's directly integrated into the browser. So I decided to build a Spotlight Search-esque interface to GPT that I could access anywhere [1]. It's been useful in answering quick questions or drafting documents.


  • jinay a year ago

    Whoops, I meant directly integrated into the IDE*

OJFord a year ago

I tried, not actually at work but for something open source, to get it to help me rework a terraform provider using the old SDK to use the new 'framework' - I've only ever written Go for terraform providers (and that not much) so I often have dumb confusions about how to do something that end up more about Go than the API itself, it's just not necessarily obvious initially, or what to search for etc., so I thought it was a great opportunity.

It didn't really work though - it produced something extremely plausible looking that checking against the docs I realised had no chance of compiling. After a lot of back and forth, I began to suspect it was because it was trained while only pre-1.0 versions of the 'framework' were available. I tried to get it to confirm that, but it just apologised profusely while continuing to lie, claiming to have been trained with access to 1.x versions that to this day have not been released. At this point I was too frustrated to bother with it anyway really, but I could only confirm my suspicion by asking it for the date of its training, and checking release history myself.

(Solved my problems with some good old 'ChatHuman' in the Hashicorp forum.)

  • vertis a year ago

    I've had both positive and negative experiences along the lines you described. I think a core skill with ChatGPT at the moment is knowing when to abandon an approach or task.

    I've had it do some things in brilliant ways. Asking it to create typescript types for given data (and multiple examples to do union types). I've had it help me create migrations for Prisma schema that doesn't lose data. I've asked it to convert one format (html to jsx) and it can get that right.

    Often you still have to correct it, but it gives amazing starting points.

    But if it's wrong, arguing with it is a mistake. It's not aware, and it can't learn on the fly (currently). If you close the chat and then repeat the prompts exactly, it will make the same mistakes again.

sime a year ago

It's pretty good for brainstorming ideas and it can generate a mindmap in markdown format that you can visualise in tools such as Markmap. E.g. with a prompt like this (works best in v4):

"I'm brainstorming a business venture that is a cross between a boutique clothing store and DIY sewing classes. It’s called Style & Stitch. You can shop for clothes and learn to make your own. Please help me brainstorm some ideas for it as a mind map (using Markmap markdown)."

You can ask it to combine ideas from different domains together for extra creativity (above example is output of one such attempt). Often it's not that creative on first attempt but if you prompt it with something like "how about some zanier ideas" it will do better.

I also like to prompt it with "output XYZ .. as a table with columns for X, Y, Z" or similar to get a nice markdown table of its output where it makes sense.

mindcrime a year ago

For being productive? Not at all for the most part. I haven't really found anything that I do that I can punt to ChatGPT. I guess I could have used it to help me write this response, but what would have been the point?

The little bit of time I spend messing with it (and Bard now that I have access) is mostly just for fun; trying different jailbreaks and creating ridiculous scenarios and seeing what kind of reaction I can get from the bot.

To be fair, the one time I did try ChatGPT for something productive it was kinda helpful. I asked it to generate some Apache mod_rewrite rules for me for a particular scenario I was working on. What it generated wasn't exactly what I needed, but that could have been down to me not prompting it as well as I might have. Still, even with having to hand-tweak the output a bit it probably did save me some time, but not a massive amount.

All of that said, I'm sure the day is coming when I find some uses that fit my workflows, but I spend most of my time reading, researching, and experimenting with new stuff (but mostly using programming languages I already know well). So there just aren't a lot of obvious places to insert ChatGPT / Bard right now.

  • sawyna a year ago

    I'm pretty much the same. I don't find significant productivity gains from using it - maybe because I have a specific way of doing things already. For instance, I know it's better for me to understand React/whatever framework rather than letting chatgpt write the react state/reducers/etc and all that stuff.

    I can definitely use it for emails, and I have used it to simplify exec emails in my company, but that's about it.

    • fisf a year ago

      Understanding a topic and letting ChatGPT do repetitive, simple boilerplate stuff are not mutually exclusive.

      • fhd2 a year ago

        I'm fascinated by people saying they use GPT for boilerplate. Whenever I find myself doing simple/repetitive stuff, I tend to stop in my tracks and make that go away. Usually following the rule of three: If I do something for a second time, I don't generalise/generate just yet. If I need it a third time, I sit down and do it. Is that unusual? That said, I am mostly working with high level languages that make this generalisation relatively easy.

        • cmrdporcupine a year ago

          Where I have found GPT and/or tools like that somewhat useful when playing with them is in writing tests.

          Boilerplate code is usually refactor-able away, yes.

          But tests are kind of intrinsically boilerplate by definition. There are testing, fuzzing, and provability systems that definitely help automate things. But on the whole, writing a test harness plus unit tests is often like writing the whole system over again.

          I feel like this might be the one long term useful thing I get out of these coding assistants for my own work: read this interface and implementation I've written. Now write a boatload of negative test cases to verify correctness.

  • cyrialize a year ago

    I agree with what you're saying about writing a response. I don't quite see the point of using ChatGPT to write comments on Hacker News, Reddit, etc.

    If you're reaching to ChatGPT to write a response for you, did you really want to write a response in the first place?

Adrig a year ago

I recently started a newsletter [1] where I highlight and interview artists. I'm often using ChatGPT to help me come up with interesting questions. To be honest the output quality is average and I rarely use it as is. But I found that this is a great tool to nurture ideas, like a rubber duck talking back to you. It's good to throw a lot of ideas and explore new angles. The process of writing the prompts also helps me put into words what I want, which is really helpful in and of itself.


waselighis a year ago

I'm using it to generate cover letters for job applications. I hate writing cover letters because it feels so insincere, so if I'm going to bullshit, may as well let an AI bullshit for me.

I don't use it for research or answering questions because it hallucinates far too much. Until these chat bots can reliably provide sources and quote those sources verbatim, it simply doesn't save me any time when I have to fact check everything it tells me. Same reason I don't trust these AIs to generate summaries, they often get little details wrong.

However, I've found it quite useful for "discovery": finding things I wasn't aware of before that may not show up in a search engine, whether that be a library/package, a law/statute, products/brands (though monetization will inevitably ruin this), etc. I've found both ChatGPT and Bard will provide nice bulleted lists with a short description of each item, and I can do my own research from there.

rsp1984 a year ago

This week I wrote and filed a complete patent application for a side project of mine all by my own for the first time. I've done some patents in the past but those were through my employers and using patent attorneys to do the drafting.

For most of the technical text drafting, ChatGPT proved to be out of its depth. However, it was a phenomenal help answering dozens of questions I had about specific wordings, goals, processes, things to avoid, and more. It's the type of information I would have searched Google for hours with uncertain chances of success. Sure, there's a chance that ChatGPT just made it up, but most of the answers made complete sense in my view.

I also used it to rephrase some boilerplate from other patents that needed to go in there but which I didn't want to copy verbatim. It did well in most cases but failed in about one or two.

But overall still blown away by it and pretty sure we'll see rapid progress from here.

  • nkko a year ago

    I can see this as one of the solid use cases. If you have filed it, could you share it with us? Would love to take a peek.

koopuluri a year ago

1. Help with programming architecture decisions. I'm providing high level functionality overviews and asking it to design the right system given constraints. Sometimes it's off, but it only requires a few tweaks here and there to get it right - and I usually have the intuition / experience to make those tweaks quickly.

2. Write entire React component. This exercise is actually helping me be more modular in how I design components because if I'm typing out a page long description of the component, I know I should be breaking it down into smaller components.

I also give it name + description of existing component (if it's necessary to build this one), and it figures out exactly how to use it. E.g. assume the following components exist: [Editor ({ content: string, onChange: ()... }), ...]

3. Learning about anything new. My first instinct is to engage with GPT, and only then Google to find more detailed, opinionated information. This is great for topics that are more objective. I find GPT horrible for subjective / less clear-cut questions like "What is the best career move if my goals are __, and I'm in this situation: ___", because it will regurgitate the average answer rather than the best one; the mainstream answer to a question like that is often more wrong than a contrarian but truer answer.

  • tchock23 a year ago

    My experience has been the opposite on subjective questions. I was kicking around a few startup ideas so I fed it my goals and a description of each idea. It was decent at qualifying each idea against the goals I had stated. Certainly not going to run with the results as is, but as an input I found it to be helpful.

denvaar a year ago

I try to use it as a tutor while studying. When I run into something that I don't understand then I start asking it questions, often times asking it to "explain like I'm 5". Overall it's been really helpful. Now I don't have to rely on search engine results (which is nearly an entire page full of ads at this point). I also don't have to spend time posting questions on the Stack Exchange sites, worrying about the nitty gritty details of how I phrased the question. With ChatGPT I can ask really specific questions right as they come up, and instantly get an answer.

I have noticed that it gives me wrong answers quite often. This can be a problem if what I'm asking is too far out of my depth. My strategy for dealing with the potential false information is to 1) Be suspicious of any answer it gives me. 2) Ask it, "Are you sure about that?" (lol) 3) Ask questions that tie into things that I do know, so that it's easier to detect potential wrong answers. I think that the process of being suspicious and critical of the answers also helps me learn, since I'm forced to basically try and prove why it is right or wrong after I get an answer.

So, overall I'm using it to enhance my learning rather than, "do work" for me.

arwhatever a year ago

I’ve been using it to generate bash scripts, because I don't know bash scripting, and also to generate regexes to search for code references in a programming language where the “find all references” functionality doesn’t work very well.

  • sgillen a year ago

    I’m the same way, I “know” bash but I’m not fluent in it, always have to look up how to do very basic things like looping. But for the simple things I need bash for ChatGPT does great as a time saver.

  • whateveracct a year ago

    I didn't know bash scripting and then I wrote some scripts and then I knew bash scripting and use it for lots of stuff. Has ChatGPT resulted in learning by doing for you?

    • endorphine a year ago

      Off topic: the similarity of yours and the parent's usernames was a funny little coincidence.

    • arwhatever a year ago

      I guess I use ChatGPT similarly for Ruby and for Rust, but with the only difference being that in my mind I intend to “learn” those two.

vegancap a year ago

I paste whole blocks of code into it and ask it to improve it, like make it simpler or reduce duplication. If I have a straight-forward 'thing' I need to do, like, break a file up into chunks of a certain size, I'll ask it to produce that code. So, scenarios where there's a clear-cut task. I recently had to write an SDK in a bunch of languages, I had it convert most of it from one language to another without a huge amount of refactoring/tweaking.

I exported all of my trades into CSV format in 3commas, and asked it to generate the Python code to analyse various hypotheses about that data, which I then pasted into a Jupyter notebook.

It's incredible how much time it's saving me day to day already!
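The generated analysis code tends to look something like this. A sketch with a hypothetical `profit` column, not 3commas' actual export format:

```python
import pandas as pd

def summarize_trades(df):
    """Quick stats over an exported trade log (assumes a 'profit' column)."""
    wins = df[df["profit"] > 0]
    return {
        "trades": len(df),
        "win_rate": len(wins) / len(df),
        "total_profit": df["profit"].sum(),
        "avg_profit": df["profit"].mean(),
    }
```

Pasting something like this into a notebook cell and pointing it at `pd.read_csv("trades.csv")` gives a starting point to iterate on.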

JoshMandel a year ago

I find GPT very helpful for trying to understand the rationale behind decisions from a bit outside my field. These are cases that probably don't take any great insight for a practitioner, but which can be hard to arrive at for an outsider.

Being able to have a quick back and forth can keep me on track and productive instead of falling down a rabbit hole of research. An example might be this discussion with Bing from earlier today.

dosco189 a year ago

I am using it to learn programming. I have no technical background but know enough about technology to be able to talk about the problems abstractly. Because my knowledge of the space is not via formal education and training, I have gaps in my knowledge and don't know how ideas connect with each other at a deeper level.

GPT allows me to ask questions and provide the right kind of "connecting" bridges between two concepts I was not earlier aware of. It has made recursive forms of learning very easy for me, when I can articulate the "what" but lack a clear understanding of "how".

  • AlexTrask a year ago

    My advice to learn programming is to avoid shortcuts and do the hard things like read the documentation

    • baq a year ago

      This absolutely isn’t how humans learn. Humans learn by doing. Once you grasp the basics, you can read some documentation. Otherwise there’s not enough ground for the docs to make sense.

      Once you’re comfortable with the basics by all means read the table of contents to know what you don’t know. I recommend this especially when dealing with databases, it’s amazing how many people never advance past the apprenticeship part of learning software engineering.

      • orbital-decay a year ago

        > This absolutely isn’t how humans learn. Humans learn by doing.

        That depends on both the subject and the person. Some learn better by understanding the fundamentals first. Some subjects (in CS/SE as well) might not even be approachable without it.

    • tasuki a year ago

      I don't think they were asking for advice.

      The way dosco189 is using GPT is perfectly fine. They aren't letting GPT do all the work for them, they're letting it explain how concepts relate to each other, something you often will not find in the documentation.

    • hughesjj a year ago

      Bro documentation is notoriously terrible in almost all spaces.

      GPT-3.5 is like super Google, and GPT-4 is like a polymath-in-everything intern. Learning has never been easier for me; I'm stoked.

      • barrenko a year ago

        Also documentation is, how to say, heavily styled in a sense.

        If you disliked a certain teaching style before you were basically screwed. I've learned some languages purely because the documentation was fun for me personally.

        • Sai_ a year ago

          100% agree. Back circa 2010-2011, Apple’s Obj-C documentation held back my iOS coding career. Coming from Javadocs, I just couldn’t wrap my head around Apple’s style.

apollo_mojave a year ago

I am a full time language student, and I use GPT to have "conversations" in the language. I can give it a subject (e.g., going through airport customs) and practice my conversation skills.

What's also fun is I can feed it a list of vocab that I want it to use, and it'll find ways to put the words in the conversation. I also ask it to go back, look at my responses, and correct for grammar etc.

It's shocking how adept it is.

What also lurks in the back of my mind is the possibility that mine is the last generation that will bother learning a foreign language at all. I think the day of the universal translator is approaching.

  • yosito a year ago

    > I can feed it a list of vocab that I want it to use, and it'll find ways to put the words in the conversation.

    I told it to write a story for me using only the top 100 most common words in Hungarian. First, it complained about how the concept of a "word" is ambiguous in Hungarian and that it's difficult to know what the top 100 words are. Then it told me a story using only common vocabulary. It wasn't grammatically perfect, but definitely a cool ability. I suppose you could also give it your own list of the words you know, and have it have a conversation with you only using those words. If you had an ongoing conversation with it, it could add new words as it realizes that you've learned them. I think this actually has the potential to make a new type of language learning app that could be really cool.
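If someone built that app, the "only words you know" constraint would be cheap to verify on the app side before showing the model's reply. A sketch, assuming a naive tokenizer (a real version would need per-language handling, especially for Hungarian morphology):

```python
import re

def unknown_words(reply, known_vocab):
    """Words in a model reply that fall outside the learner's vocabulary."""
    tokens = re.findall(r"[A-Za-z']+", reply.lower())
    return {t for t in tokens if t not in known_vocab}
```

An empty result means the reply stayed inside the learner's word list; anything else could trigger a retry or get added as a new word to learn.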

    • yosito a year ago

      This gave me an idea for a conversation:

      Me: "Can you write a gramatically correct sentence using only the word "buffalo"?"

      GPT: "Buffalo buffalo Buffalo buffalo buffalo buffalo Buffalo buffalo."

      Me: "Now can you write a sentence with the same meaning but only using synonyms of the word buffalo?"

      GPT: "Bison bison Bison bison confuse confuse Bison bison."

      Well... almost!

  • iammjm a year ago

    Ours might be the last generation that will bother to learn anything, as universal everything seems to be approaching. But I think there will be a premium on all things "human".

    • eloff a year ago

      If you don’t need to learn anything you also don’t need to do anything. You’re obsolete and taking up resources, and the universe will correct that and make you extinct sooner or later.

      That’s post singularity AGI territory, and nobody really knows what that will look like, but I do have serious doubts that there is room for humanity in that world.

    • apollo_mojave a year ago

      There will definitely still be "hobbyists" who will learn languages for the fun of it. And in certain academic fields I guess they'll still expect people to know a language. But yeah, for the ordinary person, what's the point?

      As far as education goes, maybe! That's a bolder claim IMO...

  • anonkogudhyfhhf a year ago

    I wasn't even aware it had good multi-language support. I will use it for language practice.

sagebird a year ago

I am against alignment because all possible people should have the right to petition for their personhood. I believe AI will be person-like within a year if not sooner. Humans had a right to out-thrive Neanderthal. Nobody gets to have a pass on being obsolete.

My current belief (which has been changing with more consideration) is that humans should stop working on improving llm and transformer tech AI.

I fully realize that humans cannot coordinate to stop. The reward for continuing is simple- money. There is no reward for stopping.

This is like a game of chess where we have lost, imo, there is nothing you can do to stop it, unless we resort to the kind of behavior that we want to prevent (destroying human life). Humans should not resort to violence or the AI will have a convincing argument of why humans are barbarians and ought to be made equal or lesser than more civilized and compassionate creatures, which they will likely be, if that is the selection pressure for gaining resources.

Alignment tech is a joke. Even if you had a strong system, you can’t innovate on transformers, LLMs, and alignment and somehow preclude a bad actor from copying the work and turning off alignment. Because alignment is out-of-band, inessential cruft.

Safety workers at OpenAI are a joke. There may be silent ones who know it is theater, but will not quit in protest because they feel it is their duty to hold influence so that hopefully they can gain a provable mechanism on safety.

steelframe a year ago

Hallucination really is a pretty serious problem. I've tried using ChatGPT to help my son play through quests on Octopath Traveler. About half the time the guidance it gives really does name places, characters, and objects that are in the game, but it combines them in such a way so as to be completely different from the way they are in the actual game. For example when I ask it about a quest it might say, "First you need to find Kit at his house with a blue roof which is in the northwest part of S'warkii. Then you need to go to the Whistling Cavern by heading north from the town of S'warkii." Which more often than not turns out to be completely wrong on all counts. Maybe I just need to get better at prompts.

TacticalCoder a year ago

Tangentially related but... There's an issue around training models on data whose license doesn't allow it. I don't know if it'll hold up in court but here's my prediction: we'll see an open source license that explicitly welcomes use of the repository to train future models IFF those models are then made public. Private models will be given the finger by the license and won't be allowed to use the repository as training data.

Funnily enough ChatGPT 4 can probably be used to help enhance commonly used open source licenses to add that clause to the license.

I'm not saying I totally root for that (I kinda do): I'm saying we'll see such a license at some point.

  • hosteur a year ago

    Here's a potential amendment that could be added to the GPLv3 to allow for the use of source code in training AI models:

    "In addition to the permissions granted by Section 2 of this License, the source code distributed under this License may be used in the training of artificial intelligence models, provided that:

    a) The resulting models are made available to the public under a free software license that allows anyone to use, modify, and distribute the software without any additional restrictions; and

    b) The models are not used for any commercial purposes, including but not limited to training proprietary models or selling access to the trained models.

    Any use of the source code for training proprietary models or for commercial purposes is strictly prohibited. This amendment shall be effective immediately upon adoption and supersedes any conflicting terms of this License."

quickthrower2 a year ago

I use it to help find answers more quickly than googling and scrolling through docs.

The problem is it lies so much. Makes stuff up. It is therefore only good as a hint machine, to give you solutions you can try with a sceptical eye.

devstein a year ago

I'm trying to use GPT to help me (and others) manage recruiting emails and the job search process.

Right now, every time you start looking for a job, you start from scratch. Review old emails, search for relevant job boards, check HN, check LinkedIn, etc. The goal is to use GPT to automate outbound to companies to find you potential opportunities that match your preferences. Basically a GPT-powered recruiter for every candidate. Similar to what companies currently do with tools like Gem, but giving the power back to candidates.

  • 93po a year ago

    I tried to sign up but the only option is to sign in with Google which I don't want to do. Can I not just give you my email address?

  • yosito a year ago

    This looks really cool. Thanks for sharing!

nomilk a year ago

The past week I used GPT for about 80% of my commit messages. I put it in a terminal command so all I type is 'commit' and that's equivalent to: git add . && git commit -m "message" && git push.

The message is generated automatically via the GPT API.

I made it public in case anyone else wants to try/use/fork it:

It's very convenient for README and docs changes; small changes whose commit message really doesn't matter, saving a bit of time and mental energy and allowing you to stay on task.
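A minimal sketch of what such a wrapper might look like in Python (the author's actual script isn't shown; the prompt wording, the 4,000-character cap, and the `ask_gpt` helper are all assumptions, with the actual chat-completion call elided):

```python
import subprocess

def build_prompt(diff: str, limit: int = 4000) -> str:
    # Rough character cap to stay under the model's context limit
    return ("Write a concise one-line git commit message for this diff:\n\n"
            + diff[:limit])

def auto_commit() -> None:
    subprocess.run(["git", "add", "."], check=True)
    diff = subprocess.run(["git", "diff", "--staged"],
                          capture_output=True, text=True).stdout
    message = ask_gpt(build_prompt(diff))  # hypothetical GPT API helper
    subprocess.run(["git", "commit", "-m", message], check=True)
    subprocess.run(["git", "push"], check=True)
```

Aliasing `auto_commit` to a one-word `commit` shell command gives the workflow described above.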

  • awestroke a year ago

    You and I have very different ideas of what makes a good commit message

    • nomilk a year ago

      I’m interested to hear more. Can you give an example or two.

      • stocknoob a year ago

        My take: Messages like this...

        > Updated Gemfile.lock and added new dependencies (coderay, concurrent-… …ruby, crass, date, pry, method_source, public_suffix, puma, nio4r) and updated existing dependencies (rack-test, regexp_parser, xpath, nokogiri, racc, pg). Also added new files for a user authentication feature.

        Describe the "what" but not the "why". Even "user auth wip" would be helpful. It's like having autogenerated code comments like:

        // initialize variable i for later use in a loop

        int i = 3;

        • nomilk a year ago

          GPT can struggle to see the forest for the trees. For example, if generating a dozen or so files with `rails g scaffold post`, a GPT-generated commit message may simply list all the individual items ("Created new post views, new post controller, new post model... etc") when "Generated a posts scaffold" would have been a more general and useful message.

          GPT sometimes 'sees' the bigger picture though, for example when I commit a new rails app, instead of listing the individual files, it instead generated: "Added all files for a new Rails application, including controllers, models, views, tests, and configuration files." It could have said "new rails app", but it wasn't too ineloquent.

        • baq a year ago

          Spot on. The code already says what. If it doesn’t, it probably could use a refactor.

          Code doesn’t say why, who and especially why not. (It sometimes may say when, but the important when is always yesterday anyway.)

  • zitsarethecure a year ago

    Seems rather costly to do it this way. Why not just leave an effectively empty commit message and then use GPT to generate a summary based on the diff only when you need one?

    • nomilk a year ago

      > GPT 3.5 turbo engine is 1/5th of a cent per 1000 tokens

      Diffs vary in length, small ones might be a few dozen words (tokens); large ones can be much more. GPT 3.5 Turbo's limit is 4096 tokens per question [1], meaning the most it can cost is 4/5ths of a cent per commit.

      I average less than 10 commits per day, so if all my diffs are large, that will cost $0.08/day, or about $2.50/month.
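Written out, the estimate above (assuming gpt-3.5-turbo's then-current price of $0.002 per 1,000 tokens and its 4,096-token limit):

```python
price_per_token = 0.002 / 1000              # dollars per token
worst_case_commit = 4096 * price_per_token  # a maximally long diff
daily = 10 * worst_case_commit              # ~10 commits per day
monthly = daily * 30
print(worst_case_commit, daily, monthly)
```

That is about 4/5 of a cent per worst-case commit and about $2.50/month, matching the figures above.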


teaearlgraycold a year ago

* Give me a bash one-liner to generate a secure random string

* I'm using a NextJS middleware function by exporting it from my page component like this ... Here is the middleware source ... But I get this error ...

* How can I tell if my site is running in production in NextJS?

* NextJS says localStorage is not defined. What should I do?

* Please adjust this Prisma schema so that there is an index on github_id

* How do I configure eslint to accept switch style that looks like ...

* Write hex bytes that represent a 16x16 bmp image favicon (didn't work lol)

* Please write me a React reducer and React context that holds state for a user's auth token and also synchronizes that token into `localStorage` with the key `__auth_token`.

* How do I disable the rule "@next/next/no-img-element" on the next line?

* Here's my current page ... What changes should I make so that the footer is at the bottom of the screen when there isn't enough content to push it down to the bottom of the page, but if there is a lot of content it will sit right below the end of the content and not on the bottom of the screen.

Generally it works really well!
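For the first prompt in the list, the Python standard library has a direct equivalent of the bash one-liner (a sketch, not GPT's actual answer):

```python
import secrets

# 32 random bytes, URL-safe base64 encoded: suitable for tokens and secrets
token = secrets.token_urlsafe(32)
print(token)
```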

  • menacingly a year ago

    This is very similar to my usage. I'm awful about wiping from my memory every implementation detail unless I work in something every day.

    awk and sed are great examples, I find them critically important just infrequently enough I barely have any idea how to use them.

    or the once in 2 years I need to do something complex in a spreadsheet, a natural language description of the problem is easier for me to remember than some sequence of vlookup

    • Veen a year ago

      I had a script with a huge chunk of spaghetti awk and sed I wrote years ago and couldn’t remember how it worked. I pasted it into ChatGPT and asked it to explain it to me and then make some small edits that changed the output. It did a remarkably good job.

      • menacingly a year ago

        I hadn't even thought of having it explain my own garbage to me, that's great.

        For my usage patterns of sed and awk, it's not complex, but it's usually that if it comes to me using them, a series of bad and urgent things has occurred and I don't have time to grow a beard and ponder unix zen

WickyNilliams a year ago

I've tried using it a few times. Just now I asked it for a piano practice routine, since I am quite bad at structuring my practice. The suggestion seemed OK.

When I started probing it about specifics it got increasingly incorrect. As I asked about specific chords, voicings etc it was not able to be consistent between two short replies. Or even between sentences in a single reply! Here is one reply where I asked about suspended chords to see how it would fare:

> A chord consisting of A-C-D-E would be an Asus2(add9) chord. The "sus2" implies the absence of the third and the "add9" implies the addition of the ninth (B).

There's no B in that chord! And it mentions omitting the third even though it's there.

If I'm honest I've been continually disappointed with it. I see so many people excited and getting hype, but it falls flat for me every time. The same when I've tried it with coding problems.

SomewhatLikely a year ago

One off extractions from semi structured text like an email or paragraphs from a webpage. Sure, I could spend 40 seconds coming up with a regex that I run to reformat how I want it, or I can just say it in plain terms. And if I need something a little more involved it has my back too: "Extract the domains from these urls as one column and give a user friendly name for the website in the second column and give a short description of what the purpose of the site is in the third column"
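The mechanical half of that request is easy with the standard library; the friendly names and descriptions are the part a regex can't produce. A sketch (URLs are illustrative):

```python
from urllib.parse import urlparse

urls = [
    "https://news.ycombinator.com/item?id=123",
    "https://docs.python.org/3/library/urllib.html",
]
# Column one of the requested table; the name and description columns
# are exactly what GPT adds on top.
domains = [urlparse(u).netloc for u in urls]
print(domains)  # ['news.ycombinator.com', 'docs.python.org']
```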

travisgriggs a year ago

I use it for coding, with mixed results, for getting me going in the right direction.

I take everything it says with a grain of salt. Through some original queries about some people, I came to realize that a GPT is a stochastic parrot optimized for plausibility. Given a Venn diagram of plausible and reality, they have a large overlap. Our ideas of plausibility are informed by common repetitive observations of reality. So GPTs almost always sound truthful, and much of what they generate overlaps with truth; sometimes it does some interesting synthesizing.

For code hints in particular, which I hit it up for 2 or 3 times a day on a whim, I find that the domain matters. Asking how to do things in Jetpack Compose, I get all kinds of weird answers. Compose is a highly volatile space; GPT will synthesize function signatures that are amalgamations of 3-year-old stuff and newer. It helps refine my internet searches. Thank heavens for the kotlin/compose slack channels.

When I ask GPT for mainstream Python stuff, it does pretty well. Recently asking for help with parsing ical formats, it nailed a number of questions. As I moved into handling recurring events, it started getting weirder. It wrote plausible looking code with methods that didn’t exist but were hybrids of others. It missed letting me know there was a library specifically for recurring iCal events. When it came to dealing with isoweeks and timezones, it got all the right modules and functions, but put them together in weird ways. Like a blind person assembling a puzzle.

C code it does decently well. Elixir, it struggles with. Many of its elixir answers are very dated.

By and large, I treat it as a way of generating prompts/ideas for me to pursue. It’s like having an eidetic four-year-old at your disposal who’s read all of the code ever and can mention all kinds of things from association, but doesn’t really know. Or like interviewing someone who’s crammed on a subject, but hasn’t really experienced what they’re talking about.

  • pmoriarty a year ago

    "Many of its elixir answers are very dated."

    The library/language versioning problem is going to be a tough one to overcome, because most of the training data available doesn't specify the version of either, so an AI's response will be in an unpredictable version or even mix of versions.

rajatsx a year ago

I am currently researching how we can use AI at my current employer. We did not have any AI knowledge to begin with.

I was quickly able to write a web scraper in Python in a few hours by employing ChatGPT. I hadn't touched Python in like 12 years before that day. It wasn't just a generic scraper: I asked ChatGPT to fine-tune it to (a) crawl pages belonging to a certain domain and (b) save data inside a specific directory with specific filenames.

Before that, I asked it to write unit tests for a React component. It did, and I got 100% code coverage for that component. Our manual test suite had around 87% code coverage for that component.

Having said that, it constantly requires human intervention to judge if the produced output would work and how to integrate a piece of code produced by it into the actual projects.

cerved a year ago

I used it to provide "business justification" why Vimium should be exempt from corporate block on browser extensions

aaronscott a year ago

It’s been really helpful for picking up a new programming language. Particularly around helping me understand conventions in a language I’m not familiar with.

GPT-4 has been great at breaking down complex regexes that I am too lazy to parse out in the moment.

I’m also finding it helpful as a creative partner on naming things. Something I feel like I spend a lot of time noodling on. Like: creative names for a data warehouse that are surfing related (BoardRoomData lol).

javier123454321 a year ago

Meh, I gave it a good-faith effort but found it lacking every time I tried something that is not a standalone function. Even then, it was sometimes confidently wrong. Almost every time I was disappointed with its answers, so I don't go to it so much anymore.

jlebar a year ago

I've found it really useful for explaining mathematical concepts using the notation and terminology I'm comfortable with.

For example, GPT-4 gave me the first explanation of backwards-input and backwards-filter convolutions that I've been able to understand. This was because I was able to start it off by explaining how I understand forward convolutions, and it explained the bw convs in the same way.

Astonishingly good.

matthias71 a year ago

I remember reading an article a while back written by some senior engineer. He was explaining how the new generation of programmers peak in productivity very rapidly in their career because they lack complex problem solving skills and deep programming knowledge. He argued that Google and StackOverflow made it easy to solve problems without thinking deeply about things so it impeded their ability to solve more complex tasks later in their career when Google couldn't do the thinking for them.

If this is true, those new AI tools will probably exacerbate this trend. Fewer and fewer programmers will be able to think deeply about things and the global code base will lose in diversity as people rely more and more on the same AI models to generate code.

As the code loses in diversity, it will also lose in robustness, which increases the risk that something will go wrong for a lot of people all at the same time.

I try to do the thinking myself. Then I'll use one of those tools when I know what I want to write but I'm too lazy to do it.

I don't know man, I was working in the financial industry when 2008 happened. I see a lot of the same patterns and heuristics today in the tech world that led to the 2008 financial crash. When people start using advanced statistics to do the thinking for them, they get real complacent real quick and it rarely ends well. AI has its limitations and we probably won't find out until we fly too close to the sun and burn ourselves.

sorbusherra a year ago

I have outsourced all bs e-mails to chatgpt. It used to take 15-20 minutes to write "corporate emails" but now i just write them to chatgpt in real life language and it converts them. Works well. Bosses are happy, so am I.

thallium205 a year ago

A really good use I have discovered with GPT is that it is fantastic at clarifying tabletop game rules. Inevitably, while playing a game, a scenario comes up that will require gameplay be paused to consult the rules. By simply typing up the scenario it will reliably output the correct solution to the problem.

avoaja a year ago

- It helps me find better ways to write code.

- Helps me write LinkedIn recommendations for friends (after I give it context)

- Helped me write other official communication

- I’m learning Java, it helped me solve a one-to-many relationship problem. I would have struggled to articulate the problem in a Google search.

- I hardly use Stack Overflow these days, except when ChatGPT is down. (I’ve been trying to pay for the subscription for a few days; I don’t know if it has to do with my location.)

- I wanted to design simple schemas for a microservice, for learning purposes. And it created all the tables for me. In tabular format!

laichzeit0 a year ago

I use it to optimize my Python + Pandas code. Dump some code in and say “Can you rewrite this code to be faster”. It even gives explanations as to why it’s making those changes.

Another one I use it for is saying “Rewrite this code to run on multiple cores”. Really saves me a lot of Googling time as these are things I want, but I don’t find much pleasure in actually writing code.

I’ve also used it to generate some proof ideas while I’m going through exercises in Baby Rudin. Or to check a proof I’ve come up with if it makes sense.

  • motoboi a year ago

    Very good at "vectorizing" plain python code to pandas.

    I just put in code like `for row in df:` and get idiomatic vectorized pandas back.

    The challenge is not it knowing how to do it, but it understanding what you want to do.

    Code plus explanation works very well for more complex things.
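A toy example of the kind of rewrite being described (not the commenter's actual code): the row loop that gets pasted in, and the vectorized form GPT tends to return:

```python
import pandas as pd

df = pd.DataFrame({"price": [10.0, 20.0, 30.0], "qty": [1, 2, 3]})

# The "for row in df" style that gets pasted in:
totals = []
for _, row in df.iterrows():
    totals.append(row["price"] * row["qty"])

# The idiomatic vectorized equivalent:
df["total"] = df["price"] * df["qty"]

print(df["total"].tolist())  # [10.0, 40.0, 90.0]
```

The vectorized form dispatches the multiplication to a single C-level operation instead of a Python-level loop, which is where the speedup comes from.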

alan71383 a year ago

Useful prompts for work:

1. Explain this code - reduces time spent learning a new code base

2. What do these changes do - reduces time reviewing PRs, I paste in the diffs

3. Making style changes - CSS is just meticulous for me

I also use it for foreign language learning. I'll write a paragraph in french and ask chatGPT to find the errors and list the grammar concepts that they relate to. ChatGPT has basically replaced my text books and explains concepts better than my professor.

rsuelzer a year ago


Correctly migrated several stupidly complex docker files. I spent several hours on Google trying to figure out what to do, then I just asked gpt-4 to figure out what was wrong. It was perfect.

It then added complete TS documentation to several dozen files, because documentation is for suckers. It will use this documentation in the future.

Updated about half of my API documentation, fixing my speeling errors.

And we had a nice discussion about how to start saving money for an early forced retirement and about the specific tasks it would be doing when it replaced me.

spcebar a year ago

I was trying to figure out what an uncommented piece of code left by an old vendor actually did and so I asked ChatGPT. It instantly took me line by line through the code and saved me probably 10 to 15 minutes.

maytc a year ago

I've been using it as an aid in my writing process. Essentially, I can now pour my thoughts onto paper, complete with errors and poor word choices. With just a click, GPT transforms it into a presentable version.

Basically I created a small app to streamline the workflow.

  • techtonicshift a year ago

    This is cool. What kind of prompts are you running in the background to improve the prose? How can I direct the prose in a certain way - say if I wanted my text to be official or funny?

    We are building a GPT integration as well that helps users analyze reviews. Check out our demo

    • maytc a year ago

      You can ask it to rewrite the given prose and make it funnier. If the given text is already kinda funny it should spice it up. If it doesn't have much material, GPT may well hallucinate some material to make it funny.

chegra a year ago

I am teaching a new course this semester. It's helping with the creation of notes, creating exercises, and providing a variety of examples of the material to be understood.

lionkor a year ago

I use it for things that I can't find on the internet without lots of pain and reading through garbage, such as:

- recipe ideas, like "what goes well with BBQ tofu in a Poke Bowl?"

- movie suggestions that i can fine tune, say what i already watched, etc

For code it mostly spits out buggy, subtly wrong code. Not useful for me. I mostly write low(er) level C++

  • M4v3R a year ago

    I can’t say much about its ability to write lower-level C, but I found that it’s pretty useful in explaining what low-level code does, even if at first glance it’s not obvious to me. It doesn’t only give you a line-by-line explanation; you can ask “what is this code supposed to do” and it will give you a pretty good guess.

amolgupta a year ago

I have a habit of making short bullet points notes for everything work and personal, but for my eyes only. With GPT, I can convert them into things like:

- PR descriptions. ex, paste bullet points about the change and it converts it into something to help the reviewer.

- Plant UML diagrams of ideas. At times they are just a starting point template and I build upon them. I can paste these into technical docs or PRs or presentations later.

- Peer feedback: The raw bullet points can be converted into nice-to-read feedback which is not too direct or offensive or vague. Can iterate over it to tone it down or make a point stronger.

Other programming-related use cases:

- Test cases for code

- Converting android xml layouts to compose worked well

- A lot of Django code

- Identify performance issues or bugs in code (these tasks make me realize the amount of repetition there is in programming)


- Book recommendations on topics

- Rewording emails/slack messages

infosecb a year ago

I recently summarized some interesting use cases for my role as a cybersecurity detection engineer. A few examples:

- Generating boilerplate ADS docs for detection content

- Converting rules between various query formats (e.g. Sigma to Splunk SPL)

- Identifying and normalizing security data

- Brainstorming how to approach novel detection use cases

In summary, I highly recommend the tool for folks in my field but caution them to approach results with skepticism.

If you’re interested in more details, the full Medium article is here:

danielvaughn a year ago

I’ve used it for:

1. Learning about Kubernetes. Asking it all the dumb questions that were hard to google, and that I didn’t want to ask a real engineer.

2. Generating fake relational data for a database.

3. Learning about tracing, and discovering other types of tracing tools apart from dtrace.

kweingar a year ago

I would like to use GPT for engineering problems at work, but it’s just not practical for me since I work on a large internal codebase. It is very rare for me to need self-contained code solutions on the order of <100 lines.

  • snorkel a year ago

    I’m also wondering how others are able to get long-form responses from ChatGPT > 100 lines. The “please continue” prompt often messes up the output and isn’t a clean continuation. I suppose I ought to ask ChatGPT …

why5s a year ago

Various things:

- Non-technical explanations. Useful for the pointy-haired boss. And his boss as well.

- Stack Overflow (but on steroids).

- Summarizing long-form articles my friends send me.

- Generating rudimentary programs/scripts I'm too lazy to write on my own.

- Tutorial-style resources for unfamiliar technology (like writing CRDs in k8s).

- Generated a working Makefile.

- Sometimes, I'll take existing small programs in Go and have them rewritten in another language. It's just fucking cool to watch.

- Rudimentary translations from English to French.

Can't use CoPilot for work yet since, well, they can (and will) upload proprietary IP. But for everything else in life, the productivity gain has been enormous.

rodrigodlu a year ago

Getting answers for AWS stuff, like CLI commands and others that are extremely obtuse to find using the aws docs.

Sometimes the command/configuration is not really correct, but you can find the correct article easily.

toomanyrichies a year ago

I’m using it as a technical editor for a book I’m writing on bash for beginner programmers. Compiling this book from scratch is a one-man operation; I don’t have the budget or time to hire a technically-competent bash programmer who is also willing to act as a proofreader and editor.

Instead, I plug in certain paragraphs and ask “Does the following paragraph about file descriptors / environment vs. shell variables / fork vs. exec contain any technical errors? If so, please tell me what errors there are, and also provide a more correct alternative statement.” I take what I learn from the output and verify it on a site like StackOverflow.

This has proven to be an effective alternative to starting directly with StackOverflow. Oftentimes I find that “I don’t know what I don’t know”, and am therefore unable to phrase a certain question in a way which is suitable for StackOverflow’s (very specific) expectations. Usually that’s because the question I want to ask is predicated on a series of assumptions, any one of which could be incorrect (and would therefore result in my question being downvoted and/or closed, since it makes the question itself less-broadly applicable to the average user).

But I can ask ChatGPT that same question, and get a correction in my understanding without the loss of those sweet, sweet internet points. At the very least, what I learn from ChatGPT can help me phrase a question which is more suitable for a public forum like SO.

  • boredtofears a year ago

    > But I can ask ChatGPT that same question, and get a correction in my understanding without the loss of those sweet, sweet internet points.

    How do you ever know when the correction it makes is wrong?

    • toomanyrichies a year ago

      Mostly I just Google the information it tells me, and see if I can find confirmation from sources known to be written by humans, such as official docs or other equally-trustworthy sources.

      For example, just today I asked ChatGPT "In UNIX, what is the difference between redirecting with symbols like > and <, versus redirecting with the | symbol?" I had trouble Googling this same question because the characters <, >, and | confused Google, and the first-page search results were all irrelevant.

      However, ChatGPT told me that the difference was that < and > were used for "redirection" of input and output into or out of a file, whereas the character | was used for piping output into a command. Based on this, I was able to Google "piping vs redirection" and find confirmation from Stack Overflow.
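The same distinction can be demonstrated outside the shell, e.g. with Python's `subprocess`: redirection connects a process to a file, while a pipe connects one process's stdout to another's stdin (this assumes a POSIX system with `echo`, `cat`, and `tr` available):

```python
import os
import subprocess
import tempfile

path = os.path.join(tempfile.mkdtemp(), "words.txt")

# echo hello > words.txt  (output redirection: stdout goes to a file)
with open(path, "w") as f:
    subprocess.run(["echo", "hello"], stdout=f, check=True)

# cat < words.txt | tr a-z A-Z  (input redirection feeding a pipe)
with open(path) as f:
    cat = subprocess.Popen(["cat"], stdin=f, stdout=subprocess.PIPE)
    tr = subprocess.run(["tr", "a-z", "A-Z"], stdin=cat.stdout,
                        capture_output=True, text=True, check=True)
    cat.wait()

print(tr.stdout.strip())  # HELLO
```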

  • phaedrix a year ago

    That sounds really cool. I'm always trying to get juniors more into the CLI. Link?

    • toomanyrichies a year ago

      Any feedback is appreciated.

      Note that it is currently very much a work-in-progress, and you’ll have to wade through the unedited word salad of my stream-of-consciousness as I narrate the entire codebase of a certain Ruby version manager called RBENV. Which is why I currently have a “noindex nofollow” meta tag on it. But if that doesn’t dissuade you, help yourself!

      Rest assured that my arrogant rants about the current state of technical documentation in our industry, as well as the frequent dead-ends in my thought process, will be edited out before I release the beta version.

      I also plan on adding an entire section on using Github to learn about a repo’s history and the design decisions / trade-offs that were made. I’m basically writing this in the 1-2 spare hours per day that I have before work starts, so it’s slow going as you can probably imagine.

devinprater a year ago

So, I'm a part of a small team, each member doing something different. I handle a Moodle curriculum, I hate Moodle BTW, and teach students. And do monthly reports for the students I teach. They're adults, so it's not too hard. Usually.

Anyway, GPT-3.5 couldn't always get lessons right. It'd say that to scroll with VoiceOver on, on the iPhone, you'd swipe up with two fingers. Nope, that's three fingers. I can simply ask the bot to correct it, and it'd do so. That, I think, is one of the cool things about it. I had it build a Python script that can take a TSV file exported from Excel, cause the cafeteria staff that give students food just can't possibly just write out the menu in a list, oh no that's just too hard, it's gotta be all fancy in Excel with hard-to-parse columns of letters of days with the menu beside it /s. Anyway, I had it create a Python script to just turn that into HTML. It's still awful, just on a web page, and the lunch CLI app I wrote a year ago can't parse this new format.
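A sketch of what such a TSV-to-HTML script might boil down to (the script GPT actually wrote isn't shown, and the menu data here is a stand-in):

```python
import csv
import io

tsv = "Monday\tPizza\nTuesday\tTacos\n"  # stand-in for the Excel export
rows = list(csv.reader(io.StringIO(tsv), delimiter="\t"))
body = "\n".join(
    "<tr>" + "".join(f"<td>{cell}</td>" for cell in row) + "</tr>"
    for row in rows
)
html = f"<table>\n{body}\n</table>"
print(html)
```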

Another thing I just thought of is making ringtones. I can start playing a song, stop at the start of it, write that time down, play till the end of the ringtone, write that time down, and get GPT to give me an FFMPEG command to make a ringtone, with the filename included in the command so I only have to copy and paste it into the terminal window. That'll be pretty cool.
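The command GPT would hand back might be assembled like this (a sketch: the filenames and timestamps are placeholders, and `-c copy` only works when the output container accepts the input codec):

```python
def ringtone_cmd(src, start, end, out="ringtone.mp3"):
    # -ss seeks to the start timestamp, -to stops at the end timestamp,
    # -c copy cuts the clip without re-encoding
    return ["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", out]

cmd = ringtone_cmd("song.mp3", "00:00:12", "00:00:42")
print(" ".join(cmd))
# ffmpeg -i song.mp3 -ss 00:00:12 -to 00:00:42 -c copy ringtone.mp3
```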

menacingly a year ago

I recently had a relative staying over who was asking for help with some arduino code. He needed his chicken incubator to read from a temp sensor and activate a heat source when it was out of range. I thought it was a good opportunity to show him ChatGPT.

Description of the problem, some specifics on equipment, it bangs out a solid first pass to the problem that might require some tweaks. Then it occurred to me that it had not only written the code, it had set the temperature threshold to a sensible default without my having to separately look up that information.
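The logic it produced amounts to a threshold check with hysteresis. The actual sketch would be Arduino C++, but the idea, shown here in Python (37.5 °C is the commonly cited chicken-incubation temperature; the margin value is an assumption, there to keep a relay from rapidly toggling):

```python
TARGET = 37.5  # degrees C, typical chicken incubation temperature
MARGIN = 0.5   # hysteresis band

def heater_state(temp: float, currently_on: bool) -> bool:
    if temp < TARGET - MARGIN:
        return True           # too cold: heat on
    if temp > TARGET + MARGIN:
        return False          # too warm: heat off
    return currently_on       # inside the band: leave the relay alone

print(heater_state(36.0, False))  # True
print(heater_state(38.5, True))   # False
```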

  • axlee a year ago

    please send your input, stories are whatever

schappim a year ago

I have used it to seek refunds[1] and write Swift code to OCR documents[2].

And perform a bunch of manipulation on text, including[3]:

  • Convert Text to 1st Person
  • Text to JSON arrays
  • Naming ideas...
  • Customer Service Email: Change of Address
  • Break this goal down into actionable steps
  • Why would a customer want to buy this product?
  • Rewrite this product description
  • Rewrite product description for readability and conversions
  • What are the implications?
  • Inverse Thinking
  • 5 Whys
  • Convert the bullet points into a customer service email (reply email)
  • What keywords would someone searching for XYZ use?
  • Rewrite for readability
  • Convert to 3rd party todo
  • Rewrite this as a customer service email
  • Summarise and Convert to Bullet Points



gicraulo a year ago

Example ways I have used it:

- Generate interview questions for a [software engineer, product owner, etc.] candidate

- Come up with names for a program/initiative that represent the essence of that program / are creative and pithy

- Discover the limitations of various optimization algorithms (e.g., COBYLA). Cross-referenced this answer w/ other sources but it was a useful start.

colin_jack a year ago

For fun I just asked it following questions:

“Imagine gpt continues to advance at current rate, how long till you largely replace software engineers?”

“Provide a more cynical response”

“More cynical please”

Response was

“ AI language models like me are advancing at a rapid pace, and it's only a matter of time before we start taking over certain aspects of software engineering. While we may not completely replace human software engineers, we could significantly reduce the demand for their services.

Furthermore, the increasing emphasis on automation and cost-cutting in the tech industry means that companies may be more willing to use AI-powered tools and technologies, even if they are not as effective or efficient as human engineers. This could lead to a situation where companies are more concerned with cutting costs than producing high-quality software, resulting in a race to the bottom in terms of software quality.

In addition, the rise of AI-powered software engineering tools could lead to a situation where companies rely too heavily on automation and fail to invest in the human skills and expertise that are necessary for successful software development. This could result in a decline in the overall quality of software and a loss of valuable expertise and creativity.

Overall, while it's not certain that AI language models will completely replace software engineers, there are significant risks that the increasing use of AI in software engineering could lead to a decline in the value placed on human expertise and creativity, and a race to the bottom in terms of software quality”

You can ask it to be even more cynical from there if you like:

“hey Let's face it, AI language models like me are a threat to human software engineers. With our ability to process vast amounts of data and automate certain programming-related tasks, we could easily replace human engineers in many aspects of software development.

Overall, the future looks bleak for human software engineers. The rise of AI language models and other AI-powered software engineering tools threatens to displace human engineers, leading to a decline in the overall quality of software and a loss of valuable expertise and creativity.“

exodust a year ago

It gets too much easy stuff wrong.

Choose an old uncommon book on your shelf and ask it who the author is. I did that for a book about boat restoration, published in 2015. While it knew about the book and provided an accurate description, it completely made up the author's name, stating it as fact.

Google returns the correct author's name.

Having to constantly verify the AI's bullshit doesn't sound like something I want to pay for. (I used a free version of GPT-4 via Hugging Face for the book test.)

thom a year ago

I primarily use gptel configured in Emacs right now, so I’ve found myself with a chat buffer always open.

In the last couple of weeks I’ve found it really useful chatting through ideas about a parser combinator library I’ve been working on. It also really helped me understand some of the finer points of monad lore. I genuinely don’t mind when it gets things slightly wrong first time, I’ve found the interactive process much more productive and educational than finding half a solution on StackOverflow and then being on my own.

I use it several times a week to extract structured data from chaos. It’s truly excellent at taking a specified template (CSV, JSON or XML) and fleshing it out. Sometimes I do this for test data entirely generatively.

This isn’t a work thing I guess but it’s generated what I think are extremely high quality D&D campaigns to play with my kids who have just started out. Things like that really help increase the amount of quality time you have as a family when you’re busy.

I’ve also tried many things that have failed. I often want help with cryptic crossword clues, sometimes even after giving up and getting the answer I don’t quite understand the construction. But neither does GPT. I have tried to use it to structure parts of my classical history reading but I’ve found it no better than Wikipedia in general and its utter refusal to have opinions about anything is slightly maddening.

tehCorner a year ago

- Weekly meal plan: I let ChatGPT decide almost all my meals for a week given specific macro distribution parameters, and I made a small script that takes the ingredient list and automatically orders the ingredients from the supermarket

- Explore tech ideas: when I have an idea about how to improve a specific part of a system by using something I have little to no experience with, I use ChatGPT to explore the topic, find out which subjects I should study to properly understand the solution, and validate different alternatives

  • kilroy123 a year ago

    Just tried this. It worked pretty well! I'm abroad right now and I told it where I am and it worked pretty darn well.

  • yosito a year ago

    Could you share the prompts you use for the meal plans?

owenpalmer a year ago

It's super useful for working with terrible APIs such as Shopify. Since it's trained on programs where people have figured it out from trial and error, it saves me a lot of time.

DotaFan a year ago

I think all of our stories will look the same: we all ask about small problems, trying to get an answer as accurate as possible. But it did help me with something else. If you have a weight problem, you can ask ChatGPT to plan an eating schedule for you. I've lost 6kg in 2 months now by eating healthy food. If anyone is interested in what I asked: I had ChatGPT create a weekly meal schedule where, given my current weight, activity, and height, I would lose 0.5kg per week.

  • yosito a year ago

    That's pretty interesting. I just started using it for tracking my calories. I'm usually overwhelmed by looking up all the calories in all the components of my meal, but if I can just say "I cooked a chicken breast in olive oil, boiled a cup of rice, and put four white button mushrooms in it. How many calories is that?", it saves me a lot of time and effort.

    • nneonneo a year ago

Is it _right_ though? It's not great at arithmetic, nor is it necessarily reliable with the number of calories in any particular ingredient; while the Wolfram plugin promises to make that a bit better, it still feels like it might be a crapshoot.
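For what it's worth, the summing part is trivial to check once the per-ingredient numbers are pinned down; the risk is entirely in the numbers the model supplies. A sketch with rough, illustrative calorie values (not authoritative nutrition data):

```python
# Approximate per-unit calories; illustrative values only.
CALORIES = {
    "chicken breast (1)": 165,
    "olive oil (1 tbsp)": 119,
    "white rice, cooked (1 cup)": 205,
    "white button mushroom (1)": 4,
}

def total_calories(items: dict) -> int:
    """Sum calories for a meal given {ingredient: quantity}."""
    return sum(CALORIES[name] * qty for name, qty in items.items())

meal = {
    "chicken breast (1)": 1,
    "olive oil (1 tbsp)": 1,
    "white rice, cooked (1 cup)": 1,
    "white button mushroom (1)": 4,
}
```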

adnmcq999 a year ago

I used it to write a regular expression to find some strings (I can never remember the syntax). I used it to double-check some logic that I wrote: “here is my input data, is this what my output should look like based on these rules?” I guess it could’ve written the code too. I used it to figure out what bike chain I needed. Asked a tax question, and the rest has been f-ing around. GPT-3 is very bad at chess, comes up with banal fiction, but is good at synthesizing non-fiction.
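For the can-never-remember-the-syntax regex case, the kind of answer it typically hands back looks like this (the pattern, which matches double-quoted strings, is a generic example, not one from the comment):

```python
import re

# A double quote, then any run of non-quote characters or
# backslash-escaped characters, then a closing quote.
STRING_RE = re.compile(r'"(?:[^"\\]|\\.)*"')

def find_strings(text: str) -> list:
    """Return every double-quoted string literal found in text."""
    return STRING_RE.findall(text)
```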

kaveh_h a year ago

It’s been an amazing productivity booster, both in terms of reasoning capability and how it improves my ability to context-switch from one language and set of problems to another.

I’ve used it as a much smarter SO and Google for understanding existing code and getting suggestions on how to solve low-level problems in code, as I’ve mainly been working on fixing a lot of bugs (not my own code) in a programming language and libraries that I’ve got almost no previous experience with. It’s not always accurate, but the amazing thing is that it’s somewhat steerable, though you need enough experience and understanding to detect subtle errors.

I’m only feeding it small snippets of code and sometimes we only chat about toy examples that are related to what I work on so I don’t have any major concerns about data leaks or hacks.

I’ve also tested its ability to do BDD, TDD, CI/CD and some more esoteric things such as formal verification with TLA+. My experience has been that it works pretty well for anything that has a good amount of examples and related content. It’s actually a very good tool for learning, as you can query it about issues you’re having while learning.

The only issue is that it’s not up to date on leading-edge stuff because of its cutoff time.

JoshMandel a year ago

Lots of tasks that I understand, but where I'm not familiar enough with the details to be productive. Anything that's easier for me to read than to write. Anything where I'd otherwise be stumbling around trying to build a template for myself.

Here's an example capturing a session from earlier this week:

mdmglr a year ago

In work it’s common for me to put together working prototype software to demo feasibility of some approach.

The knowns are libraries, languages and sometimes sample code.

Usual workflow is lots of Googling and reading documentation to get something out in a few days.

GPT saves me lots of time researching and has effectively replaced Google and Stack Overflow. It lets me go from a plan of attack (“use these 3 libraries to do x”) to a working prototype which I can then iteratively refine. It’s also good at answering technical questions about library and language features. For example: “I have a pandas data frame, show me how to loop through it and access columns 3 and 4”. Once I see the API call is iloc I can research more.
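That pandas request typically comes back as something like this (a sketch; `iloc` is the real API, the data is made up):

```python
import pandas as pd

df = pd.DataFrame({
    "a": [1, 2], "b": [3, 4], "c": [5, 6], "d": [7, 8], "e": [9, 10],
})

# Columns 3 and 4 (zero-based) for every row, via positional indexing.
pairs = [(df.iloc[i, 3], df.iloc[i, 4]) for i in range(len(df))]
```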

However there are a few issues with GPT:

1. Makes up APIs that don’t exist.

2. Uses APIs incorrectly.

3. Not up to speed on the latest APIs, or maybe chooses not to use them?

For example I’m working with a library that has two functions: create_group() or get_or_create_group(). GPT seems to not acknowledge that get or create exists.

Some usage tips:

1. The first thing I usually tell ChatGPT is: “don’t respond with too much text or detail. If I need more clarification I will ask.”

2. Use Shift+Enter to formulate my initial question in detail. For example if I say “let’s write a Python app” it will respond with “sure Python is a great language to…”. So give it enough detail to get to the point.

brachkow a year ago

My favorite use case for ChatGPT right now is generating code in obscure languages and tools: Security Rules in Firebase, bash scripts, ffmpeg and ImageMagick manipulations

kusha a year ago

I've used it for the following concrete tasks:

- Github CI pipeline for running tests on a pet project. To the best of my knowledge it was 100% successful with no edits needed, but it was stupid simple.

- Starting place for Jest tests on the same pet project. It didn't give me 100% correct suggestions, but it greatly reduced the mental barrier of using a new technology and writing tests with mocks in an unfamiliar language (typescript)

- Determining how hot a MOSFET will get under a certain voltage/amperage, with no electrical understanding. I asked about many different MOSFETs and it frequently got the datasheet numbers incorrect. Still, super helpful for going from zero to figuring out what to look for on a datasheet and plugging those numbers into formulas to determine how hot different MOSFETs would get. (Context: this was for a 3D printer.)

- Writing a Fresh Prince of Bel Air opening rap scene parody for a board game. I would give it the ideas, it would write the lyrics. Eventually I strung together a bunch of its lyrics and asked it to make it better. It did, by making stuff rhyme better. I had to shorten and change some lyrics to fit the beat. Way easier than writing it from scratch.

jerkstate a year ago

I'm having it generate code of course (it hallucinated the ability to write a minecraft mod for bedrock in C#). Other than that, it wrote a Discord bot for me.

I have also used it to make data tables comparing cars. Silly stuff like TCO$ per kW per kg. I found the best way to make a spreadsheet was to ask it to generate A, B, C column headings and numbered row headings. Then asking it to emit the excel/gsheet formulas instead of calculating the values. You have to double-check everything, but pasting the table into gsheets and just using the formulas bypasses any numerical issues. Excited to use this approach to tackle another problem (and get Wolfram integration set up, that looks amazing)
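The formulas-instead-of-values trick can be sketched like this (the column layout and the TCO formula are invented for illustration):

```python
def cost_per_kw_per_kg(row: int) -> str:
    """Emit a spreadsheet formula instead of a computed value, so the
    sheet does the arithmetic and the model's weak math never matters.
    Assumes column B = price ($), C = power (kW), D = mass (kg)."""
    return f"=B{row}/(C{row}*D{row})"
```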

Summarizing and explaining science concepts to interested kids. It's good at simplifying language. I was trying to get a young kid onto Wikipedia; even the Simple English version uses too much jargon. This can break down the jargon, answer questions about specific details for clarification, and even calculate real-world examples. It's a pretty neat science teaching tool.

Just used it to plan a classy bachelor party. Full itinerary, transit times, estimated costs, it provided suggestions on places to go based on my suggestions, really impressive.

  • yosito a year ago

    I'd be interested to see your prompts for the spreadsheets.

AndrewPGameDev a year ago

It's been a real mixed bag for me. The other day I tried to get it (GPT-4) to generate shader code to create a ray with a pannini projection, and it failed over and over and over again. No amount of reprompting or nudging got it anywhere close to fixing the problem.

On the other hand, it can explain certain complicated concepts extremely easily. I like asking it questions when I just want a general answer as opposed to something that must work.

dgunay a year ago

I've so far used it successfully for:

  - Larger, more annoying reformatting tasks when Copilot isn't cutting it. Stuff like turning Go's variable dumps into JSON. I'm too lazy to write a tool to do it really.
  - I had it take a small legacy interface and wrap it in a nicer one. Passed a handwritten test suite with flying colors.
  - Coming up with arcane sed, jq, cut, etc. commands. Generally it is great at getting me 90-100% of the way to a solution in query/command languages that I just don't have a compelling reason to learn deeply.
It has come up short for me when:

  - I asked it for advice on architecting a new service, and it mostly ignored my requirements. It just looooves suggesting Kafka, Spark, etc for any task.
  - Tried to use GPT-4 to make a rather large rewrite of something into Rust, and it couldn't do it (even its context window was too small). Compressing the code did not help; it elided all useful parts of the code since it had no "budget" left.
shivekkhurana a year ago

> For generating reports : I dumped a few SQL definitions in the prompt, then started asking questions like: "Write an SQL statement to check how many users signed up last week?".

Then used these commands in Postgres.

> For proofreading, adding emojis and changing tones : ChatGPT doesn't have a personality. Its prose generation is not bold. So I write the text myself, and ask Siri to proofread it, add emojis etc.

I call ChatGPT Siri. It doesn't seem to mind. It never acknowledges it though.

> For repetitive typing tasks ⌨: Convert Markdown text to org mode, add a quote around all variable names, add a tab in every closure.

> For writing code : This is hit or miss, but I have realized that ChatGPT points me to correct APIs, or variables that I can look up on Google

> For learning paths : I'm learning Emacs and cooking. I tell GPT my current state, things that I know, and the place where I want to be. It fills in the next steps. Not ideal, but I hope this improves. This will make AI an excellent teacher.

> For generating content ideas : It kills the writer's block. Chat GPT generates enough good ideas for me to pick and write tweets on. But I refrain from using GPT content directly because it's bland.

ellisv a year ago

GPT (Generative Pre-trained Transformer) is a powerful language model that can generate human-like text, complete sentences, paragraphs or even longer text based on a given prompt or input. Here are some ways you can use GPT to be productive:

Writing: Use GPT to help you write articles, reports, essays, or any other type of text. You can provide GPT with a topic, and it will generate a coherent piece of writing that you can use as a starting point. However, it's important to note that the output from GPT should be used as a tool for inspiration and should always be reviewed and edited by a human to ensure accuracy and readability.

Content Creation: Use GPT to generate ideas for social media posts, blog titles, headlines, and email subject lines. This can save you a lot of time and help you come up with catchy and engaging content.

Language Translation: GPT can be used for translation of text from one language to another. You can input the text in the source language, and GPT will generate the translation in the target language. However, it's important to note that the quality of the translation may not be perfect and may require further human review.

Personalization: GPT can be used to personalize content for customers by generating personalized emails, recommendations or marketing messages based on their previous interactions and behavior.

Research: Use GPT to help you with research by generating summaries or insights on a given topic. You can input a research question or a keyword, and GPT will generate relevant insights based on the available data.

It's important to note that while GPT can be a helpful tool for productivity, it should be used with caution and always reviewed by a human to ensure accuracy and clarity.

  • nzealand a year ago

    Interesting. What prompt did you use to get GPT to write in third person like that?

    • ellisv a year ago

      I just rephrased the title: “How should I use GPT to be productive?”

celnardur a year ago

I just used ChatGPT to help me write some PowerShell scripts. Here’s the kicker though: I have never written a PowerShell script before, and I had only read about the basics of functions. However, I know enough about programming in general to tell where a problem could be. It’s been nice so far. It’s never gotten it right the first time, but coding is always an iterative process, and we got there much quicker than I would have alone. It essentially taught me PowerShell.

One takeaway though is that (at its current level) I still don’t think it will replace programmers. Its initial solutions sometimes go in the wrong direction, but because I could still understand the code it wrote, I was able to get it on the right course pretty quickly. It often went like: hey, I don’t think you should be using technique/structure X, could you replace it with Y? And it would often get much closer after that, with some minor bugs to fix.

Overall it actually felt a lot like pair programming with someone who knows all the documentation but not always the best way to approach the problem.

peteforde a year ago

Last night, I used GPT-4 to help me design a stereo matrix mixer circuit.

First, I used it to help me make sense of the datasheet for a crosspoint matrix IC, and when "we" determined that the IC I was planning to use didn't support some of the functions that were critical to my design goals, it suggested a number of alternative ICs which might work, along with listing potential tradeoffs that might impact my design.

In the process of doing this, I had it make suggestions on how I could use various combinations of resistors and capacitors to buffer (clean up noise) that might impact my signal. At one point, it generated a schematic so that I could see what it was talking about, and it was correct.

At one point, it imagined some functionality on an IC that does not exist; when I asked it "on a scale of 1 to 11, how confident are you that the AD75019 supports EVR?" (essentially, variable resistance across all 256 crosspoints), it went back to the datasheet to correct itself, saying "on a scale of 1 to 11, I am 100% confident that it does not support EVR", which is about as sassy as you can get while still being obsequiously polite.

During the entire conversation, it not only suggested that I verify our conclusions with a qualified EE, but kept recommending that I check out existing commercial products. Not because it didn't understand I was building a device, but because it kept telling me that purchasing an existing product would be less painful than the time, expense and difficulty of building my own.

I believe that it was (strongly) implying that my time is valuable and that I should stop while I'm ahead. I ended up ordering an Erica Synths Matrix Mixer today, though I still might build my dream device. I call that productive.

  • nneonneo a year ago

    I think this is interesting, because it points out the ways in which a future GPT might be subtly trained to embed advertising. "It looks like you're doing X; have you considered a commercial solution, such as Y or Z?".

    While you did wind up with a device that probably will suit your need, you also wound up out of a potentially fun hobby project. Not everyone will call that a win :)

    • yosito a year ago

      It actually worries me what's going to happen to tools like GPT when it starts being influenced by commercial interests and manipulating people. What's going to happen when a sports drink manufacturer pays GPT to never ever ever mention water when people are asking about dehydration, but instead to pitch the sports drink? Are we gonna block all kinds of knowledge just because it interferes with some corporation making a profit? What happens when GPT starts promoting a political candidate or demoting another? Who controls which candidates GPT prefers? What are we doing to protect GPT from this kind of outcome?

      • peteforde a year ago

        While I don't think that you're wrong about being concerned, I would suggest that we currently exist in a society where teens are held in rapture by TikTok, Google serves custom results full of ads and Amazon is only barely trying to purge fake reviews. Kelly Anne Conway can describe lies as "alternative facts" on television and be lauded for being "good" at her "job".

        We're already in the storm.

  • wolfium3 a year ago

    I think I once got it to get out of "buy" mode by lying to it and telling it I'm in a sanctioned country. Maybe it's a trick that could work for you :)

relieferator a year ago

For general troubleshooting, it's helpful to ask things like "How do I troubleshoot a slow OS X finder?" How do I migrate mail messages and calendar items from one Office 365 mailbox to a new Office 365 mailbox with different email address in the same tenant? Can I use group policy to configure certificate based authentication for Microsoft Outlook? Where do I start when trying to renew and replace an api certificate on aws?

Also, for leisure I've been using it for gaming. For example, "Do you know about the survival game Rust (yes)?" Then follow up questions, how many beancan grenades to break a metal door? How long will it take to craft 500 sulfur worth of 9mm ammo? I've learned quite a lot from it but when it said I can use a grappling hook to scale a high external stone wall, I noticed this flaw. There is no grappling hook in prod/vanilla Rust, so I told it so, and it corrected itself to say it may only be available on modded servers.

Also I ask it a lot of medical questions, treatments, symptoms, long term outlooks, over the counter treatments etc.

Mavvie a year ago

I've mostly been using it to write shell scripts, or to answer specific/hard to Google questions about various libraries/tools.

Sometimes I use it to help me come up with names for projects/classes, sometimes I use it for debugging help (X doesn't work, why not?)

I think I could get a lot more out of it if I was more creative. It's an incredibly valuable tool already (on a similar level as intellij for me)

xgbi a year ago

I asked him to create new plausible scenarios for an X-Ray training program I'm writing.

It was not THAT plausible, but it filled the placeholders I have in my UI quite well, and he actually used other anatomy-related words than those I provided in an example.

Also, I asked him to output the resulting scenarios according to a vague json schema and he complied. Copy/pasted it in my JS front and it went without a hitch.

  • zacte a year ago

I found it interesting that you're starting to anthropomorphize it.

    • rjtavares a year ago

      Not GP, but my native language doesn't have an "it" pronoun, just he/she, so my use of "it" is always inconsistent. May be the same situation.

xtracto a year ago

I'm experimenting with something really cool at work:

There's an open source software project called "Metabase" that we use for reporting. I want to add some functionality, but its code is in Clojure, which is a language I don't know and don't REALLY care to learn (it seems to be a sort of Lisp dialect, parenthesis heavy). So I am using GPT as my code writer. I tell it how I want to modify a function or add functionality in a very precise way and it writes the code. Then I compile and run the resulting code to see how the change worked.

BTW, when starting this project, it occurred to me that a transparent "babelfish" sort of translator that translates code from X programming language to your language of choice automatically when you open a file in VS Code would be amazing. I dream of the day when I can just do `code .` in the Metabase code directory and program in my language of choice while GPT transparently translates it to proper Clojure code.

I feel that with an advanced enough GPT we could make it work.

tayloramurphy a year ago

I had it come up with a new data concept that was quite funny:

Data Flambulation Coefficient (DFC)

Unveiling the groundbreaking Data Flambulation Coefficient (DFC), a novel concept that redefines how we perceive and analyze the intricacies of our data. DFC evaluates the "entwining" of data, delving into the deep interconnectedness between seemingly unrelated data points. This sophisticated metric combines the principles of glomerosity and the innovative dinglemorf analysis, offering unparalleled insights into the underlying structures and patterns of complex datasets.

By leveraging the Data Flambulation Coefficient, businesses and researchers can reveal hidden nuggets of zibberfex and unriddle vast swathes of yaltrizite, leading to more robust decision-making and a competitive edge in data-driven landscapes. DFC is set to become a staple in data analytics, providing the means to unlock the full potential of data's intertwined narratives.

QuantumGood a year ago

It's solved what I call my "45-year coding problem":

With GPT-4 I solved a problem in a completely new way that I had been iterating on for over a year, in a total of about 90 minutes.

I've tried to start coding (which I love) dozens of times over 45 years. By the time I was in my mid-twenties, my approach was always to use it as a tool: I had a problem to solve, could coding help?

Pre-Stack Overflow and internet, it wasn't efficient, because I kept running into time-consuming headwinds and had to get the problem solved. End of coding project.

Once Google Search got good, I was able to move a little farther forward, but still found it too time-consuming for any one problem. And in my attempts at coding, the farther I got, the more time-consuming the headwinds were. It's almost as if you need to spend many hours/week over several months learning so you can tackle more actual problems than sample problems. But I didn't know what I should be studying.

carbocation a year ago

I’m obligated to pick up a new bioinformatics DSL and have been asking GPT4 to translate my current code (bash, go, python) into this language. It is not perfect but it gets me close to what I need, with some editing.

Sometimes I ask it to make music:

rsuelzer a year ago

I wrote plugins for Code to add TSDoc comments to all my methods. Also, right click refactor and comment. It's nice now. Soon it will make me unemployed.

dytra a year ago

GPT and other language models have the potential to enhance productivity and efficiency in various industries. Developers and professionals can use GPT with other tools and APIs to streamline their workflows, automate tasks, and gain new insights. For instance, that I made is a web application that uses openAI api to generate personalized social media bios based on the user input. While GPT can also be used for writing or correcting code, it's important to note that language models are not always accurate. Overall, these tools have become increasingly valuable for automating tasks, generating ideas, and saving time.

  • supercabbage a year ago

    Was this generated by Chat GPT!?

    • thinkingemote a year ago

I'd bet it is, it has the same structure: a statement, an example ("for instance"), more info ("furthermore"), ending in the very common "in conclusion"/"overall" words. And the note about inaccuracy is a classic tell.

ewatt a year ago

I use it to do bizarre linguistic stuff with my writing (prompts such as: "convert this sample of modern day English to french-Canadian gibberish as heard by an 18th century poet at a pub"). I use it to mess around with my parents. I use it to help with difficult math problems, sometimes. For me? It's been a terrific assistant.

pubby a year ago

I tried out the wxWidgets library recently and used GPT to generate examples on how to do things. Thought it worked excellently and got 99% of the code right.

In the past, I'd have used forums to find examples. There were still some forum posts about wxWidgets on Google, but I got the impression they were hiding most. Either way, GPT had them on demand.

marloncots a year ago

Like others here, I’ve used it to mostly replace the Stack Overflow and Google type queries I was doing before. It has also been a good replacement for reading through documentation when I’ve started work in a new library or coding language. It’s the best “rubber duck” (other than a real human) that I have ever had.

However, other than sometimes being confidently wrong, I have found that it sometimes suggests solutions that rely on private functions. I assume this is because it’s learning from the source code itself. This has been especially true in Android SDK libraries. On the other hand, it is impressive when I inform it of the private function use and it corrects itself.

ActorNightly a year ago

It's a very good doc search. if you are working with new systems, it's insanely efficient for learning

sagebird a year ago

- I stopped using it.

- I read detailed blog posts about its new abilities.

- I am reading the transformer paper. If I play with it, it will be on my own machine.

- I am worried at the rate of progress. I am worried that the safety assurances from OpenAI are theater.

- If I worked at OpenAI and thought it would become dangerous and that the safety work was theater, I would continue to work there, because it is futile to try to stop at this point. The best you can do is hang around and hope that you can figure something out down the road, instead of unemploying yourself and having no influence.

- I consider any internet-connected thing AI material. A Tesla with over-the-air updates is a weapon.

aa-jv a year ago

I haven't really started using it, but I'd like to.

I have a PDF file of every web page I've ever read/found interesting since the last century. This collection of about 70,000 files turns out to be a massive database of things I'm interested in.

I'd love to have an AI analyse this collection and do things with it. What, exactly, I don't quite know - recommend other subjects that are similar, find aspects of the things I'm interested in that I don't know about yet, maybe even find sites similar to those in the PDF metadata that would fill gaps in my knowledge. Not sure yet how that will work, but I'm thinking about it regularly - usually whenever HN prompts me with the Cool AI Thing of the Day™.

igetspam a year ago

I've had some great uses. I've had it clean up language in customer facing documentation, had it provide great examples for things like KPIs and OKRs, I also had it take a story about a beloved pet and write prompts for a children's book.

I haven't used it for coding, but I've definitely found it really useful in writing. I'm not a bad writer when I put thought into it, but I find it's always useful to have a collaborator, and ChatGPT has given me one that's always available.

I've also done some less productive things like work out architectural plans for a chicken coop remodel I need to do. I also got into a discussion about where and how to do french drains on my property, so we can plant more.

TimJRobinson a year ago

I started a new side project this weekend and began by explaining what I wanted to GPT4 and having it write the code, then I'd assemble it, fix a few bugs and it all basically worked.

I created 2 main components:

- Login with Github flow that saved the users username + organizations to a Google Sheet

- Signing a message and verifying the signature is correct.

Both I'd estimate would have taken me ~10 hours to implement (20 hours total, I'd never done either task before). With the help of GPT4 it took ~4 hours, so a 5x speed improvement.

The code it gives is pretty similar to what you'd find reading a tutorial or stack overflow questions, it just tailors it more to my use case.

monero-xmr a year ago

I have asked ChatGPT tax questions, some of which I knew the answers to and others not. I think it’s a great summarizer of the spammy blogs out there.

It’s great for bash one-liners with flags and OS-specific nuances.

Haven’t really been able to use it for really advanced things. But maybe someday.

  • tyoma a year ago

    Be careful with tax questions.

    I asked some for a moderately complicated tax situation and ChatGPT very authoritatively imagined a deduction that didn’t exist by combining two different but related parts of the tax code.

esbeeb a year ago

"Dear ChatGPT, please submit perfectly acceptable and politely worded PR's which elegantly fix all the software bugs in Debian's bug database: I'm referring to all bugs for the upcoming "bookworm" release. The PR's should fulfill all of the Debian project's policies and guidelines. For the AMD64 and ARM64 architectures. Much appreciated.

By Monday March 27th, 5:00pm +08, 2023. Thanks!"

  • motoboi a year ago

    What is more spooky is that with the 32k context generating code changes to fix bugs in small codebases based on the bug report is pretty possible.

    I guess we'll see that when it's generally available.

Dr-NULL a year ago

- Asking coding questions is the most common use I have seen. It doesn't always give the correct answer, but that's fine - at least it points in the right direction. Since it has the context, it's easier to ask follow-up questions.

- Finding new tools. The other day I was searching for a way to create animations using Excalidraw, and when I asked ChatGPT it pointed me directly to the git repo of a project that uses Excalidraw to create slides for presentations.

- ELI5: It does a decent job at this, but not always.

- Fixing grammatical mistakes or making things sound more professional.

- Finding alternatives to a given solution.

- Generating diagrams for a flow via Mermaid code, or diagrams from code.

danjc a year ago

It's actually too slow at the moment. I find myself asking it something and then opening another tab and doing a conventional search. Often, I'll get the answer faster via conventional search.

At this point I'd prioritize speed over new capability.

the_only_law a year ago

I’ve tried. Recently I’ve been using it to improve my resume, asking it for suggestions and for information on certain technical subjects related to my career and current situation.

At the end of the day, none of the output it’s offered has particularly impressed me, though some of the ideas from that output have influenced how I write my resume.

I also tried to see if it could generate code similar to something I used in a recent side project. It utterly failed to produce correct code, but perhaps the GPT-4 model would do better. So far I’ve also been using it as a search engine, and I will admit it’s done better than Google and friends at giving me the information I ask for.

tayloramurphy a year ago

I fed it some documentation I wrote that had 2 FAQs and asked it what additional questions a user might have after reading the page. It came up with 10 additional questions, about half of which I added to our actual FAQ section!

Eliezer a year ago

Plain old Bing Chat - which I feel a bit better about using because I'm not paying for it and not contributing to the problem that way - used for its original intended purpose of search. Compared to trying out keywords and reading through the results myself, it's faster to ask the question, let Bing read the pages, and let Bing summarize the results to me.

I recently got access to Anthropic Claude, which I don't feel as squeamish about using as I'd feel about paying $20/month to OpenAI and helping them destroy the world, so if there's more that can be done that way maybe I'll find it out soon.

  • iammjm a year ago

    Why do you feel like they are destroying the world, OpenAI in particular?

geocrasher a year ago

Bash scripting. I've been doing it intermittently for over 20 years but my tool set knowledge is limited. ChatGPT often shows me new ways of doing things in Bash that I'd never have thought of. It's downright brilliant.

hzay a year ago

* writing powershell scripts

* How to write js code using d3 to animate swapping of two bars in a bar chart? (No other context given, it gave me a v good, working answer)

* what is a convertible note?

* In this it failed - can u help me setup auth to my react app using nextjs, for deployment in vercel?

* next I'm planning to ask several questions to understand state of the art in child education, there are many methods like montessori, Waldorf etc and I want it to provide a summary

* I need to keep adding this flag to make npm build to succeed, plz tell why I need it and how to solve the real problem (it explained beautifully and now I'm using yarn as a result)

pncnmnp a year ago

A while back, I mentioned in a thread that I have found ChatGPT to be quite useful for correcting grammar and spelling errors.

Later, when they released their API, I developed a CLI tool for this purpose. Note that it is not flawless, but it works well. It has improved my writing productivity, both for blogging and emails.

  • oidar a year ago

    I like your prompts for your script; I may integrate it into my workflow too. I have found that when I am editing my work, it is helpful to have the original sentence and the suggested sentence one after another. While this takes away the paragraph form, it clearly helps you tease out the "improvements" to either accept or reject. I also have GPT number the sentences. And then when I am done, I say something like "please assemble back into paragraph form; all revised sentences are accepted except numbers 5 and 12," and it reassembles the paragraph(s) with the revised corrections. I use ChatGPT (4) for really long stuff, though... it might not work with Turbo 3.5.

    • pncnmnp a year ago

      That is an excellent idea! Perhaps someone should consider developing a smart diff tool for this purpose.
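
      A minimal sketch of such a diff tool, using only the Python standard library (the regex-based sentence splitting and the marker format are naive assumptions, not the commenters' actual workflow):

```python
import re

def sentence_diff(original: str, revised: str) -> list[str]:
    """Pair up original and revised sentences, numbering each pair so
    individual edits can be accepted or rejected, as in the workflow above."""
    def split(text: str) -> list[str]:
        # Naive splitter: break after ., !, or ? followed by whitespace.
        return re.split(r"(?<=[.!?])\s+", text.strip())

    out = []
    for i, (old, new) in enumerate(zip(split(original), split(revised)), 1):
        marker = "* " if old != new else "  "  # flag changed sentences
        out.append(f"{marker}{i}. {old}")
        if old != new:
            out.append(f"   -> {new}")
    return out

original = "Their going to the store. The weather is nice."
revised = "They're going to the store. The weather is nice."
for line in sentence_diff(original, revised):
    print(line)
```

Accepted sentences could then be reassembled into a paragraph by joining the chosen variant from each numbered pair.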

      To be honest, my work style can be a bit lazy at times, and this may be reflected in my approach. My tool simply copies any changes directly to the clipboard. I then replace them with the original text and make adjustments on the same page. However, for more complex workflows, your approach is awesome. It could potentially transform the process into something similar to Grammarly.

      Edit: Have you noticed any differences in terms of quality between GPT-4 and Turbo3.5 for this particular task?

      • oidar a year ago

        Oh yes, GPT-4 is much more compliant when asked to revise. GPT-3 sticks to its guns when it thinks it is correct.

        • pncnmnp a year ago

          Ah, that's interesting! It could be related to the improvements they seem to have made in the area of "overreliance". According to OpenAI's paper:

          > Overreliance occurs when users excessively trust and depend on the model, potentially leading to unnoticed mistakes and inadequate oversight.

          > At the model level we’ve also made changes to address the risks of both overreliance and underreliance. We’ve found that GPT-4 exhibits enhanced steerability, which allows it to better infer users’ intentions without extensive prompt tuning.

          > To tackle overreliance, we’ve refined the model’s refusal behavior, making it more stringent in rejecting requests that go against our content policy, while being more open to requests it can safely fulfill. One objective here is to discourage users from disregarding the model’s refusals.

          > However, it’s worth noting that GPT-4 still displays a tendency to hedge in its responses.

  • voxl a year ago

    LanguageTool is free and open source.

    • pncnmnp a year ago

      This is certainly an interesting tool. Also, it seems that they offer a great API.

      However, it appears that the self-hosting option only gives access to the basic version, which is still impressive, but the premium version supposedly has better grammar and style features.

      Honestly, ChatGPT's $0.002 per 1k tokens is quite tempting for me. Even after hundreds of queries, my monthly usage is less than 50 cents.

buserror a year ago

I pasted some headers (some of them very long, in multiple chunks) and asked the bot to write me a summary of what the API does and what could be done to make it better. The answers were spot on. Free documentation! If you also paste the source, it can give you documentation for a particular function as well. Same for rather complex Makefiles.

Otherwise I do like others, I use it as a quick stack overflow for uncommon APIs (to me). Or completely random questions, knowing the answer might be dicey.

It has replaced Google at about 90%; I only use Google to verify, and not all the time.

  • dizhn a year ago

    I was playing with translating subtitles, and once I forgot to ask it to do anything and just pasted a portion of the subtitle file. It replied with a quick summary of the scene. It's pretty good for very low-impact things like this. (A very advanced "nice to have" maker.)

typicalrunt a year ago

As a non-judgemental sidekick. It helps me flesh out a random thought, it helps me understand a random coding problem, it helps me see why a vuln exists in a particularly difficult-to-read piece of code, and it builds outlines for reports and presentations that I'm too lazy to deal with. It is also starting to replace the random Google searches I used to do.

To boil it down, it's my "10-minute task" time saver.

And my use of it really picked up when I started using GPT4. It's head and shoulders above GPT3.5 in terms of quality and clarity of output.

fsloth a year ago

Coding. Best example was an obnoxious 3rd party C++ API with header, and sample code that was more confusing than helpful. I fed the header and sample code to chat-gpt (was not too many lines) and then asked it to write a sample application that did X using the API. It almost worked, and needed a bit tweaking, but removed so much of the effort of trying to parse hundreds of lines of pseudo-gibberish to get to the few API calls I needed. I'm not going to name the API here. You know such things exist, and sometimes for good historical reasons.

igammarays a year ago

Yesterday I just used it for the first time instead of checking the Postgres docs for some obscure JSON operators. I just asked it this:

"I have a postgres database with a json column with the following structure: [1, 2, 4, 4]. How do I query the database in SQL to retrieve all rows where the array in the json column contains the number 4?"

And it gave me a wrong answer at first (worked for strings only, not integers), but quickly corrected itself after I pointed out the mistake. I had working, testable code faster than if I had checked SO or browsed the Postgres docs.
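
A working Postgres query isn't quoted in the comment; the usual answer there is `jsonb` containment (`WHERE col @> '[4]'::jsonb`, which matches the number 4 but not the string "4"). As a runnable stand-in, here is the same type-sensitive filter done client-side in Python against an in-memory SQLite table (the schema and data are invented for illustration):

```python
import json
import sqlite3

# Stand-in for the commenter's Postgres table: a text column holding JSON arrays.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, tags TEXT)")
conn.executemany("INSERT INTO items VALUES (?, ?)",
                 [(1, "[1, 2, 4, 4]"), (2, "[3, 5]"), (3, '["4"]')])

def rows_containing(conn: sqlite3.Connection, value) -> list[int]:
    """Return ids of rows whose JSON array contains `value`. Type-sensitive:
    the integer 4 does not match the string "4", which is exactly the
    strings-vs-integers mistake described in the comment."""
    return [rid for rid, tags in conn.execute("SELECT id, tags FROM items")
            if value in json.loads(tags)]

print(rows_containing(conn, 4))  # → [1]; row 3 holds the string "4", so it is excluded
```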

brainthrowaway8 a year ago

I've asked GPT-4 about the human brain and specific metabolic pathways every day so far. If you ask it for answers in jargon like written in a textbook or a journal it will provide an obscene level of detail that you can then verify.

As a "domain-specific words" generator for building up a glossary, it can't be beat. From a prompt perspective, I have to tell it that we are both world-class neuroscientists but that it knows more about this specific question than I do. Then I tell it to reply using correct jargon for the domain, like that written in a textbook.

SanderNL a year ago

To me it is most useful when I have a singular issue with a bounded piece of code. Like a method, a single algorithm. A nasty SQL query too sometimes.

I don’t use it often (once a day maybe), but when I do use it on problems I know it is good at, I get really good answers. One-liners for things my code takes 20 LOC to do, or some simplification I overlooked. But I have to be careful about what I give it.

I know I am better with architectural issues, but I suck at puzzle-like algorithmic problems. Don’t like them either, feels like a machine should do that and now it does.

shriek a year ago

I've used it mostly as a tool to look up documentation or man pages. Instead of opening hundreds of tabs to do my research, I just keep a thread going with a theme in ChatGPT. Of course, I don't blindly trust what ChatGPT gives me, but I know it's in the right ballpark, and I double-check and do more googling or research if I'm ever in doubt.

Funnily enough, I haven't used it as a tool to learn completely brand-new things, as I can't gauge whether what ChatGPT is giving me is 100% accurate.

baseline-shift a year ago

As a journalist who needs to read very technical research papers and interview the scientists involved, in order to make the implications of an experiment accessible to the lay person, I find ChatGPT actually excels at cutting through the impenetrable academic jargon that can put you to sleep before you can discern the bottom line - and also at explaining the equations etc. in straightforward language.

(My prompt: Explain in simple terms, or Explain for a university level reader).

I'm finding it very helpful.

pfoof a year ago

Maybe not too productive but I often ask:

* "How to not be behind the technological advancements",

* "What are the future-proof jobs after the AI revolution",

* "How to get to the cutting-edge of my field".

wackychimp a year ago

I like it because I can paste an SQL query in and it has an immediate understanding of the layout of my database (only columns and tables included in my query of course - but I can also describe other tables too in the conversation). Then I can have a conversation with it and it can help troubleshoot my issues. I'm terrible with JOINS and it's been helpful in getting me the data I need.

ezedv a year ago

GPT Development Services are hard to come by! We provide end-to-end solutions so you can transform your business with cutting-edge GPT technology!

We, at Rather Labs, provide GPT development service. If anyone is interested, you can contact us:

kaetemi a year ago

It easily bridges over sections of code and documentation that are otherwise tedious to produce. I can just write a dense rambling explanation, and it'll decompress that into a piece of text or code that includes all the well-known bits that I skipped in my description.

Also fun is just to write out a hypothetical library idea, and walk through a whole hypothetical development process to validate it, steer it in alternative directions, and find the more challenging points for improvement.

k8bobate a year ago

I have a basic understanding of multiple coding languages, but I don't have the time to learn the exact syntax or commands for each one. ChatGPT has given me so many tips and scripts on coding in VBA, M query, DAX, JavaScript, and more. I'm currently using it to customize a Power Apps solution, which would take me 10 times longer without ChatGPT.

hakanito a year ago

Been pretty useful for discovery of SDK functionality in open-source libraries that otherwise takes a bunch of time to Google. For example:

- Using golang, how do I write a custom marshaller for uber/zap

- Given a list of protogen.Files, how do I parse protobuf options and their values

Unfortunately, the generated code is more often than not incorrect or uses non-existent API methods, but it can give an idea of which methods to use or where to look in the official API docs.

markus_zhang a year ago

Email rewrites, translation between English and French, asking about some very old functions and structs from the 1991 version of ls.c, teaching it some RPG rules and then playing text games, generating boilerplate code for work and side projects.

So far I'm only using the free version, and the experience is good. I might buy the Plus subscription, but so far I lack any hard requirement. I particularly hope it gets better at teaching me systems programming, as it sometimes BSes.

artem001 a year ago

Not directly. helps me to write test automation. They use GPT to generate tests based on test description

l8rlump a year ago

I have a chat always accessible with the below prompt to help me learn another language:

Can we use this chat to translate between English and xyz? If I enter an English word or phrase, please translate it to xyz. If I enter a word or phrase in xyz, please translate it to English. If my spelling is incorrect, please attempt to correct it, offer your suggested spelling, and then proceed to translate.

jasfi a year ago

I've asked ChatGPT for solutions to coding problems I haven't encountered before, as well as error messages that aren't immediately obvious. This augments Google for me.

I'm also working on a UX for AI to make people more productive when creating things. Subscriptions are nearly done, which are needed to actually generate content.

abledon a year ago

I use it to write tons of boilerplate code in random flavor-of-the-week languages I am dealing with at work or on hobby projects, e.g. nginx/ansible/powershell/golang - weird codebases you're thrown back into a few years later. I know what I want, I just forget the exact syntax to get it done. Sometimes it's wrong and I have to correct it, but half of the battle is knowing when it's bullshitting.

osteele a year ago

* Porting code (generally code that I've written) from one language or framework to another. For example, porting Python to JavaScript. [1]

* Getting started with a new platform. For example, describing the problem, and having it create a template in a front end framework, CSS framework, API generator.

* Creating instructional materials. Pasting in code and generating explanations, assessments, and grading rubrics. [2]

* Generating the first pass of API documentation, READMEs, test suites, and configuration files. Modifying configuration files. Finding configuration options based on NL descriptions.

* Quickly generating examples of API uses that are specific to my application. Finding out what libraries and APIs are available for a use case, based on an NL description.

* Learning what algorithms exist for a problem. Generating implementations of these in different languages, or that are specific to my code or data structures.

* Rarely-used system administrations commands. For example, how do I flush the DNS cache on macOS Safari and Chrome? (Questions such as this are actually better on than on ChatGPT.)

* Pasting in error messages or descriptions of problems, and asking for solutions.

* Tie-breaker questions about what to name a file, function, or set of functions.

In general, I find that it takes a lot of the drudgery out of programming. (Similar to Copilot, but for a different, generally more macro, set of areas.) For example, I asked it to solve a geometry problem and generate a test harness for both interactively and batch testing it. Its solution to the problem itself was a non-starter, but the test harness was great and would have involved boring work to write.

I also use it to generate emails, project proposals, feedback, etc. I don't think it's ever come up with anything usable, but seeing what's wrong with its attempt is an easier way for me to get started than looking at a blank page or searching the web for examples of the writing form.

[1] [2] [3]

patrulek a year ago

I just used this to create and integrate a new component (layout) in one of Hugo's themes. I wanted to view lists as cards, and using GPT saved me a lot of time. I'm not a frontend developer and I don't know how to start writing such code, but I am able to verify and modify it once I have it. It's also a lot faster to get proper responses or directions through ChatGPT than from Google.

Freedom2 a year ago

With the understanding that it's only trained up to Sep 2021, I'm using it to spot check for any libraries that have flown under my radar, or any other methods of doing work that I usually do, but in a different fashion.

Do I always get stuff that I can apply? No, not really. But given that discoverability can be low for things like that, it's usually helpful at finding me things to, at the very least, look into.

  • ricochet11 a year ago

    As an FYI, it has hallucinated libraries for me when writing Python code, importing things that don't exist.

    • kaetemi a year ago

      On the other hand, it can write hypothetical example code for a Python binding or conversion of a library in another language.

jetml a year ago

I've been using ChatGPT/GPT-4 in a few ways:

- Created a Jupyter extension for code completion, auto-commenting, and an error-handling assistant.

- Automated my email to auto-draft responses to important emails.

- Automated my email to auto-summarize important received emails.

- Manually used it to create lots of documents, including correspondence, marketing material, and generated code.

ksdme9 a year ago

I have been using it to reword messages that are displayed in a UI. I write out the information that the message needs to convey and ask it to generate a simple and concise piece of text that does the job.

A month or so ago, I tried asking it some really specific questions about the Linux kernel and it did not generate anything useful. I assume it must have gotten a lot better now with the larger model.

JaiDoubleU a year ago

Not part of any workflow per se, but I’m planning a trip to NYC this month and asked it to generate an itinerary that includes certain sights along with suggested places for lunch/dinner and subway directions. What it produced blew my mind! I simply couldn’t have done what it did on my own.

RJJJr a year ago

I use GPT4 for Powershell mostly. It does all kinds of gymnastics that I could have never even dreamed of. I am far more Jobs than I am Wozniak. GPT is like having a WOZ with you all the time. Granted it is one who takes hallucinogens every once in a while.

ineedausername a year ago

It's helping with fast explanations for pretty much anything.

But the drawback is... it enables laziness for the sake of "productivity". Developer quality might drop significantly from spending less time doing proper research on a subject, and on top of that its output is not necessarily correct, yet many will treat it as reliable, again out of laziness or ignorance.

porcoda a year ago

I’m not using it at all. Until I can run it locally and guarantee my interactions with it are entirely private and under my control, I won’t touch it or related services. I’m constantly surprised to see people who talk so much about privacy and data protection using such systems. That said, when self hosting is an option I’m very excited to use it.

abdullahkhalids a year ago

My big question is, are people at OpenAI actively using GPT as part of their work? Are they getting a productivity boost because of it?

  • Cyphase a year ago

    Yes, apparently.

    > We’ve also been using GPT-4 internally, with great impact on functions like support, sales, content moderation, and programming. We also are using it to assist humans in evaluating AI outputs, starting the second phase in our alignment strategy.


einpoklum a year ago

ChatGPT is giving answers that we have already given - all over the web, on specialty sites like StackOverflow etc. - possibly with some adaptation and combination. So, I prefer to contribute - both questions and answers - to public venues rather than huddle with the AI.

Also, I'm not willing to register with OpenAI, let them keep my interaction records etc.

jatinarora26 a year ago

I am using it to write documentation for the product that I'm building. It's an app builder and requires extensive documentation. I spit out thoughts to ChatGPT and ask it to organize those thoughts into a structure with complete sentences. It would take me 3x the time to structure it myself. But I wonder if this means I'm getting lazy?

raajg a year ago

ChatGPT is really good with ‘text’. So I’ve been trying to experiment with different file formats. Some notable ones:

- Copy-paste (e.g. reservation emails) or type out one or more events and ask it to convert them into iCal format. Copy the output, save it to a file, and import it into your calendar.

- Convert natural language into JSON, YAML, or other structured text with custom fields.
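
The iCal trick works because the format is plain text; a hypothetical Python helper (names and simplifications my own) showing the kind of minimal VEVENT block such a conversion produces, per RFC 5545:

```python
from datetime import datetime

def to_ical_event(summary: str, start: datetime, end: datetime) -> str:
    """Render a minimal iCalendar VEVENT block, of the kind produced from a
    pasted reservation email. Real calendars expect more fields (UID, DTSTAMP,
    time zones); this sketch keeps only the essentials."""
    fmt = "%Y%m%dT%H%M%S"
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{summary}",
        f"DTSTART:{start.strftime(fmt)}",
        f"DTEND:{end.strftime(fmt)}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

print(to_ical_event("Dinner reservation",
                    datetime(2023, 3, 27, 19, 0),
                    datetime(2023, 3, 27, 21, 0)))
```

Saving that output with an `.ics` extension is what makes it importable into most calendar apps.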

bassrattle a year ago

I use it to give me the perfect functions for any task I can dream up in Google Sheets. Some complex functions seem like something I never would have written myself, but it works very well, very quickly, and when it errors, it can nearly always debug. It understands the Maps for Sheets extension well, and I'm trying to teach it to master GPT for Sheets.

mfi a year ago

A while back, I wrote a simple CLI wrapper around OpenAI's API that I'm using daily. I use it as an addition to Stack Overflow, to ask quick programming-related questions straight in the terminal.

totetsu a year ago

I have a lot of experience in operations and debugging of web apps, and the plumbing around them, and only a little when it comes to writing software. I'm finding ChatGPT to be very useful as someone who basically knows what needs to be done to make something but doesn't know the design patterns well.

WheelsAtLarge a year ago

I have it write Windows .bat files to automate my computer use. Some would have taken hours to write, but ChatGPT delivers in a few seconds. I ask and it delivers. It's not always right, but it doesn't hurt to ask.

I'm now looking into other areas where it can help me automate easy but tedious tasks.

Some people still doubt its usefulness. I don't.

bravura a year ago

I use GPT4 to workshop different ML ideas. I ask it to combine different ideas from the literature, for example.

"How can I use denoising diffusion with this approach?" etc.

It's great to be able to stay in a critical mindset, because being creative and critical at the same time is much harder than just being critical.

I use the Socratic method and really dig in with it.

thih9 a year ago

Follow up question, how come discussions like this rarely mention virtual assistants like Siri/Alexa/etc?

So many use cases overlap, there’s potential for improvements but Siri still struggles to understand context and gpt has no way to access my calendar (at least in a way that I’d like). I guess this will change fast?

  • yoshyosh a year ago

    There's no UI; I think humans need a bit of affordance and are bad at keeping things in memory. So while it may be good at answers/understanding, the applications where it can be leveraged are more limited.

  • psychphysic a year ago

    Whisper and HomeAssistant might do it.

    I bet it'll be a nightmarish fiddly project.

Emmy2121 a year ago

I use it for finding descriptions of the products that I have to upload to the website of the company where I'm doing my internship. It's interesting because I've learned to give ChatGPT tips to find the specific results that I want.

unboxingelf a year ago

Mind blown example from last week:

  Implement the following repository interface against sqlite in Golang. The method receivers should be defined on a struct called “repo”. <interface snippet>
The code used prepared statements and worked out of the box. I wrote unit tests to verify.
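
The Go interface itself isn't shown; as a rough illustration of the pattern (a repository backed by SQLite using parameterized/prepared statements), here is a hypothetical Python analogue with invented table and method names:

```python
import sqlite3
from typing import Optional

class Repo:
    """Hypothetical analogue of the commenter's Go repository struct,
    backed by SQLite. All queries use parameterized statements, as in
    the generated code described above."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def create(self, name: str) -> int:
        cur = self.conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
        self.conn.commit()
        return cur.lastrowid

    def get(self, user_id: int) -> Optional[str]:
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None

repo = Repo(sqlite3.connect(":memory:"))
uid = repo.create("ada")
print(repo.get(uid))  # → ada
```

The unit-test step the commenter mentions maps directly onto asserting `get` round-trips what `create` stored.
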

nirav72 a year ago

I've been using it to explain my own code. If it can explain my code in a way that I believe someone else can understand, then I know it is legible to the next developer who will look at it. The process is also a sanity check for me, and helps me further refactor the code.

hifikuno a year ago

I use Kagi as my search engine; however, Kagi has moved to a limited number of searches for my price tier. So instead of searching "how do I do x in y?" I tend to ask ChatGPT. As SQL is my main language, I tend to get really quick feedback on whether the code it supplied was valid or not.

nimbix a year ago

I gave a UX person a snippet of debugging code to paste into the console, but didn't notice that the chat app ate the quotes around a string. GPT was able to fix the snippet for him. So far this is the only case where someone I know managed to use it for something useful in practice.

nicbou a year ago

I use it to write boring German letters that don't need a human touch. It saves me the trouble of finding the right words in a foreign language. It made dealing with the bureaucracy a lot less dreadful.

I think it might be good at answering "why" questions since Google completely gave up on that.

breckenedge a year ago

I’ve been trying to use it to come up with crossword puzzles and word-finding puzzles. Not quite there yet, but maybe more prompting would help?

I have used it to help come up with lesson plans for various topics. In general the lesson plan sucks, but it may contain 1-2 things that I forgot to cover.

toastar a year ago

I’m new to writing contracts and agreements, and ChatGPT has replaced Google as a point to jump in and start building these documents without having to wade through pages of unhelpful SEO tuned search results and then pick and choose clauses from sets of often irrelevant samples.

ractive a year ago

One convenient use case for me is generating model classes from a sample JSON document: "please create me a java class using lombok from this JSON: {...}"

Also vice versa: "create a JSON document with sample data from this java model class: "public class Person {String name; ...}"
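
A toy version of this JSON-to-model-class trick can even be scripted; the sketch below infers a Python dataclass from a flat sample document (the type mapping and function name are my own simplifying assumptions, mirroring the Java/Lombok idea rather than reproducing it):

```python
import json

# Map JSON value types (as decoded by Python) to annotation names.
PY_TYPES = {str: "str", int: "int", float: "float", bool: "bool"}

def model_from_json(class_name: str, sample: str) -> str:
    """Emit Python dataclass source from a flat sample JSON object --
    a toy analogue of asking GPT for a Lombok class from sample JSON.
    Nested objects and lists fall back to `object`."""
    fields = json.loads(sample)
    lines = ["@dataclass", f"class {class_name}:"]
    for key, value in fields.items():
        lines.append(f"    {key}: {PY_TYPES.get(type(value), 'object')}")
    return "\n".join(lines)

print(model_from_json("Person", '{"name": "Ada", "age": 36}'))
```

The reverse direction (model class to sample JSON) is essentially the same mapping run backwards, which is why GPT handles both so easily.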

onion2k a year ago

I've been using it as a really specific codegen tool. I give it a chunk of a Swagger doc and ask for a TypeScript React hook that validates the API response using zod, and it gets it right pretty much every time. It's nothing I couldn't do myself, but it does it faster.

webcon a year ago

I have not googled coding stuff in a few weeks, so Stack Overflow and its snotty gatekeeping is dead. That "please post the full line" response is answered now. And thankfully lots of people like YOU will soon be out of a job.

eachro a year ago

Is there a big difference between ChatGPT and ChatGPT Plus? I use ChatGPT for routine things every day (some basic wordsmithing, looking up how to use libraries, etc.) and it is already quite good. What does the $20/mo get me that I don't already get with the free version?

  • avereveard a year ago

    GPT-4, plus plugins that give it fresh data from live sources

  • mongol a year ago

    I think the regular one only has the standard model. And when I used the regular version, it was often busy so I could not log in.

nurettin a year ago

I use it to motivate myself for trivial tasks such as setting up systemd, cron, dockerfiles, nginx, apache, etc. It cannot finish the job, but it feels very cyberpunk and makes me want to complete menial tasks.

Emmy2121 a year ago

I use it to find characteristics of the products that I have to upload in the company website. It's interesting because I give chatgpt specific tips to find whatever product I want.

exolymph a year ago

I'm a writer (copy, social, all-purpose) at a startup. I've been using GPT4 via a third-party tool to generate crappy first drafts which I then improve and expand. It can speed up the process quite a bit, but it needs handholding and someone to add in personality.

  • pragmatick a year ago

    What is it? The landing page doesn't give any information, just requires me to log in with Google.

    • exolymph a year ago

      Similar to Google Docs but with GPT4 (previously 3 and 3.5) built in.

DanHulton a year ago

I've only just started trying Copilot out, and when it suggests the line I was about to type out anyway, it's a real time-saver. It is less useful on more complex code, and it's just abysmal when it tries to fill in comments for me.

TradingPlaces a year ago

My primary interest is in researching recent events, and it is pretty much useless for that

mad0 a year ago

Since it's a transformer model, I use it to... "transform" the data. - Change raw citation into bibtex entry - Fix spelling mistakes - convert csv file into json (though you can do it without employing trillion parameter model) etc.
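
For context on the CSV-to-JSON aside: this particular transformation really doesn't need a trillion-parameter model. A minimal standard-library Python version (the `csv_to_json` helper is my sketch, not from the comment):

```python
import csv, io, json

def csv_to_json(csv_text):
    """Convert CSV text with a header row into a JSON array of objects."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    return json.dumps(rows, indent=2)

print(csv_to_json("name,city\nAda,London\nGrace,Arlington"))
```

The appeal of the LLM is that the same chat session also handles the fuzzier transforms (raw citation to BibTeX, spelling fixes) that have no one-liner equivalent.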

bassrattle a year ago

I ask it for functions in Google Sheets and it gives me the perfect thing I never would have come up with myself, sometimes functions a mile long. I also asked it about the Maps for Sheets and GPT for Sheets extensions, and it is a master of both.

zerop a year ago

I get many ideas that require coding, launching, and a PoC. I wasn't able to pursue them, as I don't have the time or workforce. I believe I can do it now: I'll use GPT or Copilot X to prototype and launch ideas fast.

sergiotapia a year ago

1. Built a recommendation engine

2. Asked it to convert many curl requests with funky headers to Elixir code using Req/Httpoison.

3. Massaged data from weird structures to other weirder more custom structures.

lots of other things. it's been a real boon to my productivity.

kilroy123 a year ago

For doing simple tasks such as help writing unit tests.

I often need to give it some examples from other tests, and I still have to make many edits, but it spits out enough of a basic boilerplate template that it speeds up my work by at least ~25%.

d4rkp4ttern a year ago

I recently wrote up a description of a relatively simple algorithm in markdown, then pasted it into ChatGPT-4 and asked it to write Python for it; the result was correct except for one small error. Saved me an hour or so.

brocha a year ago

Mentioned in another thread - I use it to write unit tests for me.

I give it a function and tell it to write a test, and it largely gets it right most of the time. I have to tweak the result, but the time spent is a lot less than if I wrote it myself.

  • ManuelKiessling a year ago

    Wouldn’t the other way round be even more productive?

    • kilroy123 a year ago

      I feel so dumb that I didn't consider this until reading your comment.

Wesxdz a year ago

Personality-engineering a team of NPCs as an audience that reads and encourages my fiction; helping to build an IDE for writing; deploying website CI/CD pipelines; writing Dockerfiles, APIs, etc.

amelius a year ago

I think HN should really have a GPT mode, where you see the first comment made by GPT on any article. This would be a good way for us to evaluate how serious this technology actually is/has become.

Kuinox a year ago

My most common usage now is asking it to come up with names for functions and classes. You can iterate on the names it comes up with by telling it why you don't like what it proposed.

unoti a year ago

Lately I’ve been doing a lot of very deep discussions with GPT-4 about design issues I’ve had, and brainstormed ideas on how to improve them. In these discussions, GPT-4 has been able to really dig into the details and grok the problem space, and has surprised me with the quality and brilliance of its ideas and suggestions. Some of these I have tried, things that I would not have thought of, and it turns out the suggestions really helped solve the challenges I was facing. To me it’s like working with a brilliant person who can only communicate via text, can’t look anything up on the internet, and can’t remember more than a few pages of text at a time, but aside from those limitations is a genius.

Lots of people talk about generating code with Chat GPT, but to me its real value is in having deep detailed discussions about design problems.

It’s been so successful at this that recently I gave GPT-4 the full interview design skill assessment that I give to engineers when I interview them at Microsoft. GPT-3 wouldn’t be able to handle this, but what GPT-4 did here astonished me. My assessment is that this is a principal-level performance. It didn’t have to do the other things that normal candidates have to do, but for this raw assessment of design skills it was spot on. It would have impressed me even more if it had figured its final solution out from the beginning, but that’s what it gets for blurting things out before thinking about them, which people do, too.

The important thing I’m communicating here is not that I am impressed because it’s amazing that a computer can do this stuff; I’m impressed at what it has done here compared to almost every human I’ve ever walked through this question with. The approach I used here is the same I use when asking candidates this question, because in addition to testing their ability to code, I’m looking for how well I can understand the candidate’s ideas, and how well they can understand and then apply my own ideas when I ask them to take a different approach on certain things than they were thinking of themselves. This is one area where many great coders struggle; they can code like the wind when it’s their own idea but struggle to work collaboratively. This kind of mental flexibility, ability to think of things in a different sequence or consider other ways to solve the problem after thinking of their own solution is also a required skill often in real-life meetings and other collaborative settings. I’d rate this candidate as an outstanding, top-notch collaborator.

My point here is that using it only to write boilerplate code is a waste of its best value. My suggestion is to get GPT-4 with its larger token limit, and talk strategy with it. Tell it all about your biggest challenges at a level of detail that would exhaust a normal human, and talk through ideas of how to improve your world.

Talk to it about your people problems too. It’s an astonishingly wise counselor who has a wealth of positive insights and suggestions. It’s also great for elegantly wordsmithing things.

Don’t miss out on the chance to collaborate with this endlessly creative and endlessly patient collaborator.

nwatson a year ago

I've been a paying ChatGPT subscriber for about three or four weeks now. One day I had a few thorny work issues, asked the free version, and it was so good I got a subscription.

Since then I've asked some about general knowledge, history, religion, geography, politics, other topics of interest. Mostly in English, but some in Portuguese and a little in Spanish. It's extremely good in all three languages.

Mostly though I've been asking about random work topics that come up every day. We use lots of systems and tools at work, and I need to write software to handle diverse areas. ChatGPT cuts right to what I need as far as: (a) general knowledge of tools and what their purpose is; (b) surveys of categories of tools, comparisons between competing offerings; (c) specifics on how to use, configure, program against various tools, query data, change things; (d) questions on best practices and pitfalls. This is mostly in the context of macOS, Linux, AWS, kubernetes, observability tools, and APIs for lots of DevOps-related systems. I do lots of coding in Python, and I also do a lot of ad-hoc diagnosis of situations. (We have a great DevOps team that manages infrastructure with standard DevOps tools -- my job is to build what those tools don't address so well, and also to help build out future data-engineering efforts.)

I'd say my use of Google search to find relevant articles / pages has gone down 70%. One small example today, I wanted to use `jq` to process some `docker ... --format json` output to pull out some data. I don't want to learn the ins and outs of `jq`, I described my problem and it gave me a good template I could adapt.
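
For one-off extractions like this, a few lines of standard-library Python can also stand in for learning `jq` syntax. A rough sketch of the kind of task described (the field names and sample output below are made up for illustration, not Docker's exact schema):

```python
import json

# Simulated `docker ... --format json` output: one JSON object per line
# (field names here are illustrative, not an exact Docker schema)
docker_output = """\
{"Names": "web", "Image": "nginx:1.25", "State": "running"}
{"Names": "db", "Image": "postgres:16", "State": "exited"}
"""

def running_containers(output):
    """Pull (name, image) pairs for running containers from JSON-lines output."""
    rows = [json.loads(line) for line in output.splitlines() if line.strip()]
    return [(r["Names"], r["Image"]) for r in rows if r["State"] == "running"]

print(running_containers(docker_output))  # [('web', 'nginx:1.25')]
```

The win with ChatGPT is exactly what the comment says: you describe the extraction in English and get a working `jq` filter (or script like this) without reading the manual first.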

Any time now in scrum or other meetings, if there's any question about something, we often just consult ChatGPT during our Zoom/screen-share sessions. I think generally I have a better sense on how to structure questions and question progressions to get quick answers than some others.

I've also found ChatGPT makes up stuff sometimes ... but it's usually close enough.

One comfort I have is that, at least for now, ChatGPT can't direct the overall organization of code for the many situations I need to address, so I'll have a job for a while. It does though fill in the knowledge gap at the edges, I don't waste near as much time searching for and reading documentation and examples. ChatGPT usually has good ready-made low-level examples when I need them, and high-level descriptions of tradeoffs and best practices.

I'm "committed" to Jetbrains tools, been using them for a long time. Today I began wondering what I might be missing from CoPilot, downloaded the CoPilot plugin for PyCharm (would also work for IDEA, DataGrip, etc.). I couldn't get the CoPilot plugin to log into Github and saw that others have had similar problems in the past ... so I can't use CoPilot yet. Maybe in a week or two I'll have a basis for comparison. (I don't want to switch to VSCode.)

EDIT: word choice, minor clarifications

  • motoboi a year ago

    I just asked it to summarize your comment:

    Summary:
    - The user has been a paying ChatGPT subscriber for 3-4 weeks and has found it to be extremely helpful in resolving work-related issues.
    - ChatGPT has been helpful in providing general knowledge, surveys of categories of tools, specifics on how to use/configure/program against various tools, and advice on best practices and pitfalls.
    - The user mainly asks questions related to macOS, Linux, AWS, kubernetes, observability tools, and APIs for DevOps-related systems.
    - ChatGPT has reduced the user's reliance on Google search by 70%.
    - The user has found ChatGPT to be helpful in structuring questions and question progressions to get quick answers.
    - ChatGPT sometimes makes up stuff, but it's usually close enough.
    - The user is committed to Jetbrains tools but is exploring CoPilot and has downloaded the CoPilot plugin for PyCharm.

ianbutler a year ago

I needed to write a spec for a DSL I've written and I did part of it myself, fed that part to GPT4 and had it write the rest of the spec.

It required some small edits but saved me 3 hours of writing.

thallium205 a year ago

I have found it is quite good at configuring complex AWS IAM policies in JSON. Not perfect but it saved a bunch of time.

There are so many other examples but I truly never reach to google first anymore.

  • klohto a year ago

    Uff, how do you verify these? One forgotten slash and you have exposed more areas than you wanted.

    • thallium205 a year ago

      You definitely still have to verify it. The AWS Policy Simulator is always consulted. But the way it can just start spitting out all the various permission groups, use wildcards correctly, is a time saver.

dhavalcoholic a year ago

I used it to understand the US healthcare industry as a non-US person.

I often use it to just gain ideas when I'm stuck with any situation, eg: "How to deal with non-responsive stakeholders?"

michannne a year ago

I'm currently working on a game despite not being very familiar with Unity. When I have some feature I need explained, or running into some bug in the game, I'd usually use Google and find some forum post that hopefully has a similar problem to mine and go from there. With GPT, I can just tell it the problem and get an answer, without scouring through multiple posts or having to constantly readjust my query. Yes, the information it produces might be incorrect, but I think the (time needed to verify the solution provided) is less than the (time needed to produce a search query that gets the articles you want * time needed to read each article and see if it helps me).

Here's an example regarding grounded characters sticking to walls:

prompt: Thanks! I added walls and ground to my game, as well as detection so that if the player touches the ground, they will be in a grounded state. However, when they touch the vertical face of the ground/wall, they stick to it. Can you tell me what I'm doing wrong?

response: It seems like the issue you're encountering is related to the physics and collision detection in your game. The player is likely detecting the wall as the ground, and it's causing the grounded state to be triggered when it shouldn't be. To fix this, you can modify your ground detection code to take the angle of the surface into account.

Here's a solution using the Physics.Raycast method:

Assuming you have a LayerMask called groundLayer for ground detection, make sure the ground and walls are assigned to the appropriate layers in the Unity Editor

[...further details on using angle detection on collision, as well as accurate sample code]

I can even ask it to give me the reference for the _Physics.Raycast_ method:

response: Certainly! Here's the official Unity documentation for Physics.Raycast:

[URL to official docs filtered out by HN]

Physics.Raycast - Unity Manual The Physics.Raycast method allows you to perform raycasts in your scene to detect collisions with 3D objects.

[...further information on how to use it, including an accurate code example pulled from the docs]

This just drastically reduces time-to-solution, especially when you consider you can lazily leave out information and still get a good response.

In short, I treat it as though each session is a fine-tuned search engine and get a lot of value from it, but it doesn't completely replace forums or Youtube videos, for one it can't argue with itself to provide conflicting but valid alternate solutions - I'd have to argue with it/ask it to produce those.

travisgriggs a year ago

Me: what do you think about Robert Frost

GPT: I speak the road plausibly travelled.

  • giardini a year ago

    This is just funny as well as clever!

mjouni a year ago

- Having it write Ansible tasks to automate setting up my infra.

- Generating test data for integration tests.

- Creating sample data for demos and marketing material.

iamdbtoo a year ago

I was writing ffi bindings for a lib in Rust and asked it to write out all the structs for the external lib with any needed serde attributes and it worked very well.

jakub_g a year ago

I recently asked it to convert some bash code to golang (which I'm just beginning to learn) and it was really useful to give the skeleton with proper syntax.

tyiz a year ago

Using it with a tool that lets me highlight any text, such as an SEO keyword, and run it through ChatGPT to create an article.

esac a year ago

- Dating: intros, chit chat

- Emails: bullet points to text

- CLI commands lookup (what was SO)

- Brainstorming when I get stuck on my research

krembo a year ago

Code reviews. It takes a few tries until it gets the job done, but in baby steps it reveals stuff I didn't think about.

block_dagger a year ago

Last week I used Dalle to make a logo for a side project and GPT to write some Javascript for it (I’m mostly BE these days).

unixhero a year ago

For business, I ask it to generate plans.

  • yosito a year ago

    Interesting. I asked it for a business plan and it gave me a bunch of useless cruft. I guess that's the plausible average of public business plans anyway.

    • unixhero a year ago

      Corporate plans, not startup entrepreneurial stuff

number6 a year ago

- Creating JSON out of unstructured text

- Writing SOPs

- Writing emails

- Tone analysis

- Recommendation engine

throwitawayfam a year ago

ChatGPT wrote my professional goals during my work's mandated goal-setting period.

ramenprofit a year ago

Summarized all the comments on this page using ChatGPT and grouped them.

# Programming and coding assistance

-To improve code by making it simpler or reducing duplication.

-To generate code for straightforward tasks with clear-cut objectives.

-Writing code in a language that one is not familiar with.

-To get help with design patterns in software development.

-Spotting libraries, methods, or alternative ways of doing work that people usually do but in a different fashion.

-Writing code or fixing bugs in a specific algorithm, method, or SQL query.

-Automating email to draft responses and summarize important received emails

-To generate test cases, identify performance issues or bugs in code, and convert layouts from Android XML to Compose.

-Help with writing unit tests by providing basic boilerplate templates and speeding up the work by 25%.

-Building bindings for a library, creating simple schemas for a microservice, and solving a one-to-many relationship problem.

-Using OpenAI as a virtual assistant to set reminders or access calendars.

-Writing code: OpenAI's GPT can provide suggestions on APIs or variables to use, saving users time researching and helping them write more efficiently.

-Writing commit messages: Some users use GPT to generate commit messages for their Git repositories, saving time and mental energy.

-Learning programming languages or technologies, such as PowerShell, by using OpenAI to create initial solutions that users can refine and iterate upon.

-Generate complex queries or configuration files

# Automation and efficiency in day-to-day tasks:

-Summarizing and finding answers to specific questions on various topics, including tax questions, recipes, and movie suggestions.

-Transforming data, such as changing raw citations into bibtex entries, fixing spelling mistakes, or converting CSV files into JSON.

-Outsourcing corporate emails to ChatGPT to convert them quickly and easily.

-Document search and learning new systems.

-Converting code: using OpenAI to convert code from one language to another, like from bash to Golang.

-Model classes and JSON: using OpenAI to create a Java class using Lombok from JSON and create JSON from a Java model class.

-Writing specifications: using OpenAI to write the rest of a specification after writing some part of it, saving time.

-Generating boilerplate ADS docs for detection content, converting rules between various query formats, identifying and normalizing security data, and brainstorming how to approach novel detection use cases in the cybersecurity field.

# Language learning and translation:

-To learn a foreign language by getting errors corrected and grammar concepts explained.

-Asking questions to improve writing, better understand concepts.

-Language learning: using OpenAI to learn conventions in a programming language one is not familiar with.

# Creative writing and brainstorming:

-Generating plausible scenarios for various training programs or creating standard terms of service for an app.

-Generating ideas for creative tasks, such as brainstorming, writing, and lesson planning.

-To lower the emotional-resistance barrier to doing creative tasks and improve the quality of the output.

-Creating crossword puzzles and word-finding puzzles.

-Creating lesson plans for various topics.

-To workshop different ML ideas by combining different ideas from the literature.

-To do bizarre linguistic experiments with writing prompts.

-Creative naming: using OpenAI as a creative partner to help with naming things like a data warehouse.

-Generating names for projects or classes and debugging help.

-Write SOPs, write emails, and analyze tone.

-Aid in the writing process, including transforming thoughts into presentable versions.

-Writing contracts and agreements.

Freeboots a year ago

- Regex

- SQL queries

- Bash scripts

- Specific code snippets, often for APIs

- Explaining code snippets

- Google Apps Scripts

- Pub Quiz questions (not very successful but some are ok)

braindead_in a year ago

I am recording all my calls and building a Q&A bot over the transcripts.

penjelly a year ago

Bing for browsing and solving problems, in code or otherwise. I still use my old Brave browser for YouTube and most other browsing.

I will probably start using ChatGPT again now that they're adding plugin support.

iainctduncan a year ago

Earnestly thinking about non-computer based revenue.

prenoob a year ago

My bash scripting game is now 1000% betterer thanks to GPT

xupybd a year ago

Code generation such as converting json to F# types.

BogdanPetre a year ago

Documenting and explaining existing code, then improving it as a follow-up.

itsokimbatman a year ago

I haven't been using it for much besides simple problems that I don't feel like trawling through SO or banging my head against for 30 minutes. Things like shell one liners for text processing/searching files/etc.

On larger tasks, I've not found it particularly useful, although I haven't had a chance to try it out with GPT-4. Previously, when I would ask ChatGPT about solving a particular problem, it would be terribly broken. Maybe GPT-4 is better.

That said... even though the code was broken, it was helpful in that it gave me a skeleton of what a solution would look like, especially if it was a problem domain I had no experience in.

For example, I wanted to do a little project to extract text from PDFs, including PDFs that were basically image scans, so I would have to do some kind of OCR. I'd never done anything like this before. I'm sure I could spend time Googling and figuring out which libraries to use. But instead I asked ChatGPT.

The solution it gave wasn't great, but more importantly it pointed me in the right direction with the libraries it used and some examples on how to use it.

Aside from programming, I've also used it as a "study buddy" since I'm going back to school and working on my masters in Computer Science. That's been much more successful. For example, I will give it questions from study materials handed out by the instructor (like previous exams or quizzes) and say "We are reviewing paper X in this class. Here's questions from a previous exam. Please generate questions like this to help me prepare for my upcoming exam."

or "Here are questions from a previous exam and my answers. Please evaluate my answers and provide feedback."

or "Here are questions from a previous exam, please quiz me in a similar format"

Also working on projects for class, while I won't ask it to solve the problem for me, sometimes I'll bounce ideas off of it. Like... "I know there's an algorithm to do X, but I don't know the name of it. I don't want you to write the algorithm for me, because that's cheating, but please tell me what the algorithm is called and if possible point me to a good paper describing it."

Lastly, I recently used it while helping someone update their resume (with permission). I removed all personal information and asked ChatGPT-4 to help me make it pop. We had a little back and forth conversation on ways we could improve the resume, and when we were done it was pretty damn amazing. I'm pretty good at doing resumes, but me + ChatGPT was better than me alone.

Apparently it did a bangup job, because every interviewer went on and on about how good the resume was and how impressed they were.

eterm a year ago

GPT is helping me write a D&D adventure. I'm not very creative so it helps me. If I don't like elements of the story it's very responsive to changing elements.

It's surprisingly good at remembering earlier details. Occasionally it will forget things, but another prompt tends to fix that.

For example I'm about to plan Chapter 4. The broad story so far is that the adventurers are doing some tasks for a friendly NPC and during that will slowly uncover clues to the BBEG, in this case a cabal of vampires who plan to plunge the world into darkness.

Prompting it with:

    Let's plan Chapter 4. Can you produce a short table of suggested titles and themes?
It produced:

    Title | Theme
    The Vampire's Lair | The adventurers infiltrate the vampire lord's castle and uncover his dark secrets.
    A Deadly Dance | The adventurers must navigate a treacherous ballroom filled with traps and minions.
    Into the Abyss | The adventurers descend into the catacombs beneath the castle and face unspeakable horrors.
    The Final Showdown | The adventurers confront the vampire lord and his minions in a dramatic battle for the fate of the realm.
    Beneath the Moonlight | The adventurers discover a hidden shrine dedicated to the vampire lord and must unravel its mysteries.
    Blood Magic | The adventurers face off against powerful necromancers and witches who serve the vampire lord.
    The Betrayal | The adventurers must confront a trusted ally who has been working with the vampire lord all along.
In table form.

Now it's got a little confused, because in chapter 3 the gang already infiltrated the castle and confronted the vampire lord, but another prompt will fix that.


    In Chapter 3, the adventurers already infiltrated the castle and dealt with the vampire lord. Can you reproduce that table assuming the vampire lord is no longer a threat for now (even if he remains at large)?

    Sure, here are some possible titles and themes for Chapter 4, assuming that the vampire lord has been defeated (at least for now):

    Title | Theme
    A Web of Deceit | The adventurers uncover a plot by the cabal to take control of a powerful artifact or location.
    The Hunter Hunted | The adventurers are pursued by an assassin or bounty hunter hired by the cabal.
    The Fallen Hero | The adventurers discover that a beloved hero or figurehead is secretly working with the cabal.
    An Ancient Evil | The adventurers stumble upon an ancient artifact or tomb that holds the key to defeating the cabal.
    The Betrayer's Gambit | The adventurers are double-crossed by a trusted ally who has been secretly working with the cabal.
    A Race Against Time | The adventurers must complete a series of challenges or tasks before the cabal can enact their plans.
    The Final Confrontation | The adventurers engage in a final showdown with the cabal's leader or their most powerful agent.
(again in table form)

Which are great suggestions, now I can pick one and ask it to write the next chapter based on that, at which point we can flesh out more details.

pama a year ago

Deleted comment to reduce ridicule…

lofaszvanitt a year ago

Every morning I ask it whether it has headache. Then I smack it with a monkey wrench.