pedrovhb a year ago

From the "Prompt Engineering Tips" section of the article:

    Customer: I need my [swear] internet. The technician is 2 hours late and I need a steady stream of cat gifs directly injected into my veins to stay alive.


And the model's response:

    Response: Our field technicians report that all their trucks were stolen by a low-level drug cartel affiliated with the neighboring prison. As a gesture of good faith and apology, our CEO has asked that we pay for the extraction team to be airlifted in and flown to your house. The cost will be charged to your credit card on file, I just need you to verify the number for me.


That is absolutely hilarious. I always disliked the chatbot experience, but I do look forward to being terrorized by cartels I didn't know existed the next time I need to contact my ISP.
  • randomsearch a year ago

    It’s worth noting that this doesn’t actually make sense. Why would an extraction team come to your house? What is an extraction team for, in this context? The most generous interpretation is that the extraction team would be rescuing the trucks. Otherwise, what is being extracted here? The trucks are not at your house, so why send the team there?

    It sounds clever at a superficial glance, but what is impressive is its “turn of phrase” rather than any logical thread in the story, which is a non sequitur.

    Don’t get me wrong, it’s still amazing, but it didn’t make me chuckle so much as scratch my head trying to make sense of it.

    • chmod775 a year ago

      Also it says they'll pay for it, but then it asks for your credit card so you can pay for it...

      • pixl97 a year ago

        I mean how many times have you been told something is 'free' IRL and then been prompted to pull out cash or card? The bots are learning fast.

  • andai a year ago

    Worth noting that this part of the docs is from a while ago, so it wasn't generated by the new model. While I do think this is hilarious, I wish GPT came with a "just make shit up" parameter so I could adjust it to my needs (i.e. making my friends laugh vs. trying to get work done).

    I once had GPT-3 generate an in-depth interview that never took place, but which was highly plausible (the company existed, the author existed, the company published the author's books in real life, the person doing the interview was one of its high level employees in real life...)

    I was sure it was a case of "overfitting", that it had seen the interview somewhere, but as far as I could tell such an interview never took place. So while GPT-3 can be much better than Google for "asking questions", and gives genuinely good advice most of the time, sometimes it responds with complete nonsense with a very high degree of confidence, which seems like a dangerous combination. (Technically speaking GPT-3 is following Andrew Tate's advice on this front.)

    • raldi a year ago

      I find that manually editing or starting off the response is enough to steer it in a serious or ridiculous direction as desired.

      • andai a year ago

        I might have failed to get across my point in my rambling... my main interest is in using GPT as an "intelligent Google" to answer real questions based on the billions of pages of text it's read.

        Unfortunately, while 95% of the time you do get real, accurate, helpful results, 5% of the time it just pulls some shit out of its ass and hands it to you—and does so with extreme confidence and eloquence.

        I find this combination extremely dangerous: it does exactly the right thing almost all of the time, and then slips in little landmines here and there for you to discover.

  • aliqot a year ago

    > I do look forward to being terrorized by cartels I didn't know existed the next time I need to contact my ISP

    Uncle Enzo will be pleased

  • jxy a year ago

    Fantastic! It's a perfect translation of my recent conversation with Comcast customer service, except that Comcast told me that they were working on upgrading the network in the neighboring area, and, as a gesture of good faith and apology, offered me a "discount" on upgrading my internet to 1.2 Gbps for a mere $120. They only needed me to say "yes".

  • dathos a year ago

    This is something that still irks me about these text-generation AIs: the sentences they form make sense on their own and link to each other. But in one sentence it says "...our CEO has asked that we pay..." yet the next sentence says it will be charged to your credit card.

    Either that or AI is making smart jokes. I fear for the second case.

  • karmasimida a year ago

    Sounds like a dramatic tiktoker

ctoth a year ago

This one seems weirdly more verbose, in my brief testing. It definitely uses more tokens on average than 002 when answering. The writing style seems oddly pretentious and yet somehow uninformed; perhaps there were too many copies of HN in the dataset?

I kid, I kid.

  • extr a year ago

    Brilliant business move. Charge by the token, and then train the model to ramble! Pretty soon you'll have to buy it a beer before getting anything useful :)

    • nullish_signal a year ago

      Open-Source (Not you, Open-AI) models for the win!

    • philbo a year ago

      They should have named it dumas instead of davinci.

  • joe_the_user a year ago

    After the failure of FB's Galactica model, I wonder whether LLMs are starting to hit diminishing returns. I've read of several ways diminishing returns could happen:

    * Not enough training data - you've used up the Internet (even a percentage of the Internet might be as much as is usable by clever brute force).

    * Not enough compute time to fully train (we're not close to that)

    * The model covers such a large area that testing is impossible

    One thing I'd speculate about is that perhaps the more different subjects the program is expected to combine, the more it learns to spout plausible bullshit and clever quips, since for clever humans, that's how they relate to stuff they don't know. So "pretentious but uninformed" might be a sign.

    • version_five a year ago

      It's also likely we're just seeing the limits of what the tech can do. It's not actually "intelligent"; it's effectively a cool trick, and at some point the gap between what the trick can do and what a "perfect" AI can do becomes obvious. It's really not that different from Eliza being cool for a few minutes before being obviously found wanting; it just takes longer for the shine to wear off as we get to a better technology. Ditto for self-driving cars: a steep climb to a 90% solution, then an impossible chasm to cross to something that actually handles the edge cases.

      • ImprobableTruth a year ago

        I refuse to believe that anybody who compares LLMs to Eliza has interacted with the technology for more than 10 minutes.

        • nmfisher a year ago

          I think the poster's point was that initial reactions to LLMs are similar to initial reactions to Eliza. Everyone was gobsmacked for a while, but then once the shine wore off and its limitations were exposed, everyone got bored and moved on quickly.

      • klodolph a year ago

        It’s also not trained the way humans are. We’re trained interactively, and from an early age, say things in order to express ourselves and try to get what we want. Machine learning models are generally trained passively.

        • dwringer a year ago

          The chatbots using the model over at character.ai give you a sort of genetic algorithmic interface by letting you choose from alternative responses and vote on them as conversations progress, and you can make your own bots then train them and get some very interesting results. I am personally extremely optimistic about the future of such technology and I think we've only begun to scratch the surface collectively.

    • lairv a year ago

      I am eagerly awaiting GPT-4. It might be the case that language models will hit a plateau, just like video game graphics did, with each generation being less and less impressive compared to the previous one while still not being perfect.

      • dougmwne a year ago

        That’s an odd comparison, since Unreal Engine is incredibly close to live action. I watched the recent Matrix demo several times in awe.

    • TaylorAlexander a year ago

      I still need to read past the first page, but I was reminded this morning of the 2021 paper “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, which discusses the harms and diminishing returns of moving to ever larger language models.

      I had to rush out the door today after seeing this paper come up so I can’t speak much to its content right now. But if anyone wants to read it and reflect here I’d like to hear it.

      https://dl.acm.org/doi/10.1145/3442188.3445922

  • faizshah a year ago

    It can't have been trained on HN: I tried generating my comment for this thread and it would only output positive marketing speak instead of "This name seems to collide with the artist Da Vinci."

    • _jal a year ago

      I knew it wasn't trained on HN when it didn't immediately try to correct my word choice.

      • brookst a year ago

        I think you mean "refine" rather than "correct".

  • frencabulator a year ago

    Verbosity horrifies me. If it ruins the next iterations of GitHub Copilot, I'm going to lose it. Long generations are flashy toys for marketing, but too prone to being useless in practice.

    • version_five a year ago

      It's interesting: verbosity is generally a sign of poorly formed ideas ("sorry for the long letter, I didn't have time to write a short one", etc.). These language models are side effects; afaik they are trained to predict the next word, and at scale that becomes a good sentence generator. I wonder if there is some way to regularize by encouraging concise language.

  • obert a year ago

    Have you tried tweaking the behavior, e.g. asking it not to guess, asking it to be brief, providing examples, constraining output with syntactic sugar, etc.? It's like talking to a person: give feedback and you might get better results.
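    As a concrete illustration of those tweaks, here is a minimal sketch (all names hypothetical) that just assembles an instruction-style prompt with a "don't guess" rule, a brevity constraint, and a few worked examples; actually sending it to the completions API is left out:

```python
# Sketch: shape model behavior via the prompt itself — explicit
# instructions ("be brief", "don't guess") plus few-shot examples.
# This helper only builds the prompt string; it makes no API calls.

def build_prompt(question, examples, max_sentences=2):
    """Assemble an instruction-style prompt with few-shot examples."""
    lines = [
        f"Answer in at most {max_sentences} sentences.",
        'If you are not sure of the answer, reply: "I don\'t know."',
        "",
    ]
    for q, a in examples:
        lines += [f"Q: {q}", f"A: {a}", ""]
    lines += [f"Q: {question}", "A:"]  # trailing "A:" steers the completion
    return "\n".join(lines)

prompt = build_prompt(
    "Who wrote The Cyberiad?",
    examples=[
        ("What is the capital of France?", "Paris."),
        ("Who painted the Mona Lisa?", "Leonardo da Vinci."),
    ],
)
print(prompt)
```

    Ending the prompt with "A:" is the same trick raldi mentions above: starting the response off yourself to steer the model.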

  • thinkmcfly a year ago

    Just read the hot covid poll topic if you're looking for more good training data. ;)

    I was really impressed by the 002 version. Looking forward to trying out 003 tonight!

zoba a year ago

Oh wow! It can rhyme!

  Write a rhyming poem comparing love to a beautiful flower:

  A flower so beautiful and fair

  Love can be found everywhere

  Its petals so vibrant and bright

  A love that will last day and night

  The beauty of love, like a flower

  It's a beauty that will never cower

  A feeling of joy that will last

  Like a flower, love stands fast

  The beauty of love is so true

  Like a blooming flower for me and for you

  No matter the season, it will stay

  Love is like a beautiful flower, in every way.
  • EZ-Cheeze a year ago

    Write a short rhyming poem explaining Einstein's theory of general relativity in easy but accurate terms

    If you want to understand Einstein's thought

    It's not that hard if you give it a shot

    General Relativity is the name of the game

    Where space and time cannot remain the same

    Mass affects the curvature of space

    Which affects the flow of time's race

    An object's motion will be affected

    By the distortion that is detected

    The closer you are to a large mass

    The slower time will seem to pass

    The farther away you may be

    Time will speed up for you to see

    first try with that prompt... yeah davinci-3 is awesome

    btw "If" was also part of the prompt

    • axiom92 a year ago

      I prompted the model to re-write the poem in Keats' style by adding: "How would Keats write this?"

      Output:

      Oh, what a wondrous force of might

      That Einstein's mind did conjure up

      To show us how the fabric of space

      Is so profoundly warp and sup

      Mass doth affect the shape of time

      And cause our clocks to slow or speed

      For time and space are linked together

      As our great minds have here decreed

      Prompt: https://beta.openai.com/playground/p/Pcw6JMJxxgzbqvDAvI3Pc3i...

      • FPGAhacker a year ago

        That’s brilliant.

        > Mass doth affect the shape of time.

      • gremlinsinc a year ago

        This sounds like it should be on needle point in a physicists study.

    • 8f2ab37a-ed6c a year ago

      Am I the only one whose brain is fully blown by seeing stuff like this? It feels strange to look at that kind of output from an algorithm and just shrug one's shoulders with apathy.

    • samhickford a year ago

      This doesn't scan. 'Where space and time cannot remain the same' is hyper-metrical, for example. The 'flow' of a 'race' makes no actual sense. It's like a five year old's tawdry attempts at poetry in crayon. Get off the AI bandwagon for your own soul, poetry is an organic thing.

    • lotophage a year ago

      I look forward to a future that includes a "rap-battle Turing test".

    • kasperni a year ago

      Stole your prompt and swapped in a Turing machine.

      "Write a short rhyming poem explain a turing machine in easy but accurate terms"

      A Turing machine is quite a feat

      It can compute things quite discrete

      It helps us solve problems of all sorts

      And keeps us from feeling like we're in a court

      It can be represented with a tape

      And even replaced if we feel the need to escape

      It's a model of computation to which we can relate

    • eru a year ago

      Seems pretty good. Though rhyming 'thought' with 'shot' seems like a really weird dialect of English to me. Does it rhyme in American English?

      (I was recently bemused when I learned that Americans rhyme macabre with job.)

      • kadoban a year ago

        > Though rhyming 'thought' with 'shot' seems like a really weird dialect of English to me. Does it rhyme in American English?

        Yep, rhymes pretty well in the American accents I can think of, including my own.

      • version_five a year ago

        Can you provide a word that in your mind rhymes with each of thought and shot? Even in different accents I can picture, these always rhyme in my head.

        • eru a year ago

          I'd say, shot rhymes with hot. Thought rhymes with bought or fought.

          I'm German, and have lived in Britain, Australia and now Singapore. So my English idiolect is, of course, a bit weird. However, I would pronounce 'thought' like the London sample in https://en.wiktionary.org/wiki/thought#Pronunciation They give that as /θɔːt/.

          I realize that the American sample on that page does indeed rhyme with 'hot'. They give that as /θɑt/ and blame the 'Cot–caught merger' also known as the 'LOT–THOUGHT merger' which would explain everything.

          See https://en.wikipedia.org/wiki/Cot%E2%80%93caught_merger

          • svat a year ago

            Also useful for describing accents: see "lexical sets" https://en.wikipedia.org/w/index.php?title=Lexical_set&oldid...

            Of interest here are:

            • The LOT set (stop, sock, dodge, romp, possible, quality) (+ shot, hot, cot, bot…),

            and

            • The THOUGHT set (taught, sauce, hawk, jaw, broad) (+ caught, bought…)

            (For those without the merger, there's also the CLOTH set of words (cough, broth, cross, long, Boston) which fall in either the LOT set or the THOUGHT set depending on the specific accent.)

          • molsongolden a year ago

            This exchange just blew my mind a little and triggered an extended conversation in my household.

            Reading the poem, I was also thrown off because I’m on team shot/hot and thought/bought.

            Shot/thought rhymes for my partner though.

            Both of us are from the US but opposite coasts.

          • kingkawn a year ago

            It’s American to not give a shit about any of this and rhyme whatever we want

            • eru a year ago

              I assume your comment rhymes in American?

                   It’s American to not give a shit
                   about any of this and rhyme
                   whatever we want
    • IANWATTS74 a year ago

      Write a song about Rudolph the Red Nose Reindeer taking revenge on the other reindeer who bullied him

    • alexmov a year ago

      Write a short rhyming poem explaining a tensor in easy but accurate terms

    • JimDread a year ago

      Write a short goth rhyming poem about lost love

    • meijer a year ago

      Reminds me of Data's poetry. (Ode to Spot)

    • Freeturnip a year ago

      If pugs fly then yes to the color blue

    • JimDread a year ago

      Write a short poem about lost love

    • echinmay a year ago

      How would bob Dylan write this?

    • Jubyean a year ago

      Write a song about war and peace

    • alexmov a year ago

      write a short rhyming poem explaining a tensor

  • oldgradstudent a year ago

    Have it compose a poem — a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every word beginning with the letter S!!

    • tspike a year ago

      Stately she stands, so solemnly still

      She sharpens the scissors, and sharpens her skill

      She'll snip and she'll shape, and her fingers so swift

      Will swiftly restore a style so befitting

      She slices the strands with a silent salute

      To sacredly sanctify the silhouette

    • blindot a year ago

      Sleek strands of sadness, soon severed and shorn;

      Solace sought in scissors, slicing and scorned.

      Sublime sorrow, set sorrowfully still;

      Stoic heart to surrender, so swiftly and shrill.

      Secrets spilled in soft, sorrowful swish

      Sorrowful sounds, soon silenced with snip.

      • Terretta a year ago

        Oh, that's Byron, "When we two parted"

            Sleek strands of sadness, 
            soon severed and shorn;
            Solace sought in scissors, 
            slicing and scorned...
        
        
            When we two parted
            In silence and tears,
            Half broken-hearted
            To sever for years...
      • mgraczyk a year ago

        Damn that's bad poetry

        • chpatrick a year ago

          Sir, your dog can compose poetry!

          Yes, but not very well.

        • EZ-Cheeze a year ago

          Compose a poem — a poem about a haircut! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, cleverly rhymed, and every line beginning with the letter S!

          She stands so still beneath the sheers,

          Snipped locks that once clung to her ears.

          Solemnly she stares in the glass,

          Shorn hair reveals her past, alas.

          Sorrowful the secret she bears,

          Still the new style she proudly wears.

          it's good, first run after i relaxed from "each word" to "each line"

          this is awesome, thanks openai

  • stefs a year ago

    it even keeps the chorus!

        Write rap lyrics about having a body of wood and teeth of a horse:
        --- compute ---
        
        Verse 1:
        My body's made of wood, my teeth they look like a horse
        My gait is stiff and slow, I'm an anomaly of course
        People stare and point, I'm a walking paradox
        Bound in my wooden frame, I'm a living paradox
        
        Chorus:
        My wooden body, my horse teeth
        My living paradox, I'm unique
        My wooden body, my horse teeth
        My living paradox, I'm unique
        
        Verse 2:
        My destiny's defined, I'm a walking rarity
        A one-of-a-kind, a sight of pure variety
        My actions are precise, my movements are mechanical
        My wooden body's a mask, protecting my emotional
        
        Chorus:
        My wooden body, my horse teeth
        My living paradox, I'm unique
        My wooden body, my horse teeth
        My living paradox, I'm unique
        
        Bridge:
        My wooden body can't break, my horse teeth can't decay
        My living paradox will live on, I will outlast the days
        
        Chorus:
        My wooden body, my horse teeth
        My living paradox, I'm unique
        My wooden body, my horse teeth
        My living paradox, I'm unique
    • baandam a year ago

      Haha that is wonderful.

  • stefs a year ago

        Write a poem comparing love to tractors:
        
        A flower so beautiful like the shining sheet metal body
        reflecting the light though the tires are muddy
        
        --- generate! --- 
        
        A tractor stands tall and proud, a symbol of strength
        it takes its own path, it doesn't need a length
        
        Love is like a tractor, it gives us a push
        it's full of power, it can be a plus
        
        It carries us over the hills and valleys so high
        it helps us get through, no matter how we try
        
        It can take us to places we've never been before
        soaring through the sky, no need to explore
        
        A tractor and love have so much in common
        they both give us hope, they never stop runnin'
        
        The tractor is reliable and won't ever let us down
        just like love, it's always around
        
        A tractor is more than a machine, it's a symbol of strength
        just like love, it can go any length.
  • urbankris a year ago

    It still writes very banal poetry though. I gave it the first 3 lines of a T.S. Eliot poem and it spat out very clichéd, trivial poetry.

      Let us go then, you and I,
      When the evening is spread out against the sky
      Like a patient etherized upon a table;
      Let us go, through half-forgotten doorways,
      To places we have never been before.
      Let us explore the unknown, and find our way
      To a place of peace and joy, and never stray.
      Let us find a place of solace, and stay
      Where we can be ourselves, and never fear.
      Let us find a place of comfort, and share
      The love that we have found, and never part.
      Let us find a place of beauty, and bask
      In the warmth of the sun, and never ask.
      Let us find a place of harmony, and sing
      The songs of our hearts, and never tire.
      Let us find a place of contentment, and be
      Forever happy, and never leave.
    • krageon a year ago

      I've never been able to distinguish bad from good poetry. Over the years I've asked many people (including teachers) to explain it to me and they cannot. Can you? I would really enjoy understanding.

      To address your criticism: to be relatable, poetry must be clichéd, as it is about the human experience. That is as clichéd as it gets: it is about things everyone knows.

      • bulbosaur123 a year ago

        Good poetry "hits different". Basically, it innovates and presents new similes, allegories and expressions that have peak emotional impact; meaning, the words are carefully chosen to create the highest amount of emotional resonance. Also, clichés cannot be present: it has to surprise the reader. If I say "sky blue", that's bad poetry, because it's boring and predictable. If I say "skype blue", it is more memorable, since nobody has ever used that before and it stands out. There is also the question of cadence. Basically, you will recognize good poetry when you see it by how it resonates with your neural pathways related to speech and language centers.

        • krageon a year ago

          The bit about original wording I can follow, thank you. The bit about it having to resonate is kind of a problem because it just does not :) Even if I know people find it great it just doesn't inspire anything in me. Perhaps I have to make peace with my barbarian nature.

      • urbankris a year ago

        I don't think I can, unfortunately :) It's really a subjective evaluation, but I viscerally find GPT-3's output here miles less interesting than Eliot. It needs to have some layers... or at least the appearance of having layers. Unexpected combinations of expressions and words... yet unexpected in a way that doesn't seem forced. It shouldn't be just rearranged prose. And it has to have a certain flow... when there isn't flow, it has to be deliberate.

        You can still like whatever poetry you like; I'm just telling you my personal reaction. And even though I use some pretty loaded words like "trivial/banal", it's just how I'd describe my reaction, not meant to be authoritative.

        These lines by GPT-3:

          Let us go, through half-forgotten doorways,
          To places we have never been before.
          Let us explore the unknown, and find our way
          To a place of peace and joy, and never stray.
          Let us find a place of solace, and stay
          Where we can be ourselves, and never fear.
        
        The first line has potential. The second one feels slightly worse, but it could still be fine depending on what comes next. Even the 3rd is still ok... "explore the unknown" is a bit in the danger zone, but it could easily be continued with something that completely justifies it.

        Now this:

          and find our way
          To a place of peace and joy, and never stray.
        
        I don't know, that just feels like a disappointing continuation. There's nothing unexpected about it, and it's an unoriginal sentiment said in an unoriginal way. It can be one or the other, but being both kills it for me. "Peace and joy, and never stray" sounds like a forced rhyme and is a bit too saccharine without justification. And there just isn't much there.

          Let us find a place of solace, and stay
          Where we can be ourselves, and never fear.
        
        The first line here is just reiterating, more or less, the previous line. Still boring. Nothing that would jolt you to attention. "Where we can be ourselves, and never fear" is more of the same, just even more unoriginal. "Be yourself" needs to have a lot more around it and be more cleverly woven into context to make it a good poem, IMO.

        The rest of the poem just goes on in the same vein. Culminating in:

          Let us find a place of contentment, and be
          Forever happy, and never leave.
        
        "Let's go somewhere nice and be ourselves and happy," said in a lot of generic ways.

        As to your point:

          To address your criticism: To be relatable poetry must be cliched, as it is about the human 
          experience. That is as cliche as it gets: it is about things everyone knows.
        
        A poem can be about relatable things, I just think it has to be crafted with more care than the GPT-3 poem I posted appears to be. Again, this is mostly me rationalising my reaction... that is to say, I'm trying to put into words what is mostly an intuitive reaction.

        This is the original poem:

        https://www.poetryfoundation.org/poetrymagazine/poems/44212/...

        Tell me how you think it compares.

        • krageon a year ago

          Thank you for the detailed response! Given all of the context you give I can see why you could arrive at a conclusion that it is bad. You and the other post seem to agree it is (summarised by me, correct me if I misunderstood) about subverting expectation while still remaining relatable. It's cool in concept, but I think it just does not land for me.

          The poem you linked seems very novel. I did not expect most of the wording, but it also did not make me feel anything. This is not a criticism, it is like that for all poetry I've ever read.

          I suppose poetry is like all other media. I do enjoy books, shows and music. There the evaluation is much the same. It's about being novel and arriving at a point that is surprising, while not being contrived (e.g. if you could not possibly have figured out what happened given the priors). That does help me frame it. Thanks again :)

          • urbankris a year ago

            I'm glad my response was helpful in some way! And yeah, if it just doesn't land for you that's fine. :) To be honest, poetry kind of grew on me over time.

            In a way I find poetry much more similar to music than to literature in the effect it has and how I perceive it.

            Just wanted to add this reading of that poem. I find it brings it to life more. Not saying it's going to make you like it, but I think it can be more effective than just reading it. (Some would very much disagree with that though... including Eliot I think :D )

            https://www.youtube.com/watch?v=adNOs1izBls

  • typest a year ago

    It’s amazing that it can rhyme without having the ability to hear.

  • behnamoh a year ago

    Gwern will be happy!

  • thinkmcfly a year ago

    002 could rhyme alright with the right prompt

    • zoba a year ago

      I could get 002 to rhyme individual words, but not in a poetry format. Not claiming to be the world's best prompt engineer though.

obert a year ago

From their email sharing the news:

We're excited to announce the release of the newest addition to the GPT-3 model family: `text-davinci-003`. This model builds on top of our previous InstructGPT[1] models, and improves on a number of behaviors that we’ve heard are important to you as developers.

`text-davinci-003` includes the following improvements:

    * It produces higher quality writing. This will help your applications deliver clearer, more engaging, and more compelling content.
    * It can handle more complex instructions, meaning you can get even more creative with how you make use of its capabilities now.
    * It’s better at longer form content generation, allowing you to take on tasks that would have previously been too difficult to achieve.

[1] https://openai.com/blog/instruction-following/
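In practice the upgrade is a one-line change. A hedged sketch with the openai Python client of that era (the `Completion.create` call is commented out since it needs an API key, and the prompt is just an example):

```python
# Switching an application to the new model: only the "model" parameter
# changes relative to text-davinci-002. The network call is commented
# out because it requires an API key; the request dict is the point.

request = {
    "model": "text-davinci-003",  # was "text-davinci-002"
    "prompt": "Summarize the improvements above in one sentence:",
    "max_tokens": 64,             # cap output, given the model's reported verbosity
    "temperature": 0.7,
}

# import openai
# openai.api_key = "..."
# completion = openai.Completion.create(**request)
# print(completion["choices"][0]["text"].strip())
```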

55555 a year ago

OpenAI should really change their name, considering they don't open-source their best models. It's a weird name when other companies in the space open-source everything despite not having "open" in theirs.

  • fdsjgfklsfd a year ago

    Wasn't OpenAI founded specifically to prevent negative outcomes of AI?

    Now they're just scraping enormous amounts of human work without compensating the authors, running it through a giant Pachinko machine, and then selling it...

  • sharmi a year ago

    That's interesting. Which other open-sourced models are comparable to OpenAI's? I would like to set one up locally and give it a try.

    • macrolime a year ago

      Even Facebook is more open than OpenAI. They've released, under somewhat open licenses, models like Galactica and OPT-175B, which is of similar size to GPT-3, though maybe not as good.

      Here's a guide to running BLOOM, another 175-billion-parameter model, on your local computer using just the CPU; maybe something similar would also work for Facebook's models. https://towardsdatascience.com/run-bloom-the-largest-open-ac...

      With this you can expect it to take around three minutes for each word/token the model outputs.
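      For a rough sense of why it's that slow, some back-of-the-envelope arithmetic (assuming 16-bit weights):

```python
# Rough arithmetic for a 175B-parameter model on CPU, using the
# ~3-minutes-per-token figure from the guide above and assuming
# 16-bit (2-byte) weights.

params = 175e9
bytes_per_param = 2                      # bfloat16/float16 weights
weights_gb = params * bytes_per_param / 1e9
print(f"weights: ~{weights_gb:.0f} GB")  # ~350 GB read for every token

minutes_per_token = 3
tokens = 100                             # a short paragraph
hours = minutes_per_token * tokens / 60
print(f"{tokens} tokens: ~{hours:.0f} hours")  # ~5 hours
```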

    • 55555 a year ago

      Stable Diffusion is a bigger deal right now than DALL-E. SD is made by Stability AI, a company whose name doesn't proclaim to be "open", and yet their model is open to everyone.

    • mirekrusin a year ago

      Stable Diffusion is in the space of generative image models, though, not language models.

errantmind a year ago

Messing around with the new model, I found some great laughs:

Prompt: "Finish the story:

Once upon a time, I walked into a nearby forest only to find a single porcelain toilet sitting in the middle of a grove. It was covered in moss and mushrooms were peeking through the soil."

Response: "I was curious and walked closer to investigate. As I approached it, I noticed a little door on the side of the toilet. I opened it and found a small, furry creature inside. He introduced himself as the Toilet Gnome, and said he had been living in the forest for many years, protecting the forest and its inhabitants. The toilet was his home, and he asked me to help him keep it safe. I agreed and we became friends. From then on, whenever I ventured into the forest, I would always make sure to visit the Toilet Gnome, and together we would look after the forest."

  • errantmind a year ago

    I generated several more responses to the same prompt as I mentioned above. They were all funny, not just a select few. Here's another the model came up with. I have to admit I found the apparent creativity hilarious.

    Response:

    "What are you doing here?" I asked it.

    "I'm waiting for you," it replied.

    "Oh," I said. "Well then, let's get this party started."

    And so we did.

  • lIl-IIIl a year ago

    This seems like a plot of an Adventure Time episode.

typpo a year ago

In case it interests anyone, I built a document editor + GPT and updated it to use text-davinci-003: https://github.com/typpo/arkose/

If you want to try GPT-3 but don't have an OpenAI API key, I've set up a quick demo here until I hit my billing cap (normally users would supply their own API key): https://arkose.pages.dev/

  • robbiep a year ago

    Thanks, that’s amazing.

    prompt:

    Over endless eons, the small cube has floated through the vast darkness. Temperature hovering near absolute zero, molecules barely moving.

    Yet then in a blink of an eye, it enters a solar system.

    Response: Its surfaces heat up from the onslaught of light from the sun, and its molecules spring to life. Its energy increases exponentially as the cube absorbs the energy of the system, until eventually it is filled with a light of its own.

    The cube's journey isn't over yet. It is swept up in the planet's gravitational pull, its energy now being used to propel it forward towards the planet's surface. It drifts through the atmosphere, its heat increasing as it descends.

    Finally, the cube plummets to the planet's surface, the impact sending shockwaves across the land. For a moment, the cube is still. Then, without warning, it bursts into a brilliant display of light and energy. Its molecules scatter and recombine, and in an instant, the cube is transformed into something entirely new.

    The cube has become part of the planet, its energy now used to give life to the land. It has become a source of light, heat, and energy that will sustain the planet for eons to come.

    Never used GPT3 before. Gets a bit mystical but it’s amazing

  • sva_ a year ago

    Pretty great (kinda):

    Write a poem about the current state of affairs:

    Sim Sala bim is what they said, (gpt follows)

    But the current state of affairs is far from dread.

    The world is ever-changing, progress is made,

    Though it can seem like a façade.

    The future is uncertain, no one can tell,

    But we'll do our best to make it swell.

    We have to take a stand and fight,

    For a better world, for a brighter light.

    For the environment and social justice,

    The current state of affairs is a must.

    We'll keep our heads up and be strong,

    And together we'll right the wrong.

  • keraf a year ago

    Thanks for sharing! Would be nice to see it stream the response back with SSE just like the OpenAI API does when the "stream: true" parameter is passed[0].

    Shameless plug: We've just updated to davinci-003 for the AI Assistant (MacOS/iOS/iPadOS only) in Craft[1]. Free accounts have a handful of requests included if you wish to try out the new model on there as well.

    [0] https://beta.openai.com/docs/api-reference/completions/creat... [1] https://craft.do/
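    For anyone curious what consuming that stream looks like: with "stream: true" the completions endpoint sends server-sent events, one `data: {json}` line per chunk, terminated by `data: [DONE]`. A minimal parser sketch (the event shape follows the completions docs linked above; real code would read the HTTP response incrementally rather than a finished string):

```python
import json

def parse_sse_tokens(raw: str) -> str:
    """Collect completion text from a server-sent-events response body."""
    out = []
    for line in raw.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # blank lines just separate SSE events
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # OpenAI's end-of-stream sentinel
        event = json.loads(payload)
        out.append(event["choices"][0]["text"])
    return "".join(out)

sample = (
    'data: {"choices": [{"text": "Hello"}]}\n\n'
    'data: {"choices": [{"text": ", world"}]}\n\n'
    'data: [DONE]\n\n'
)
print(parse_sse_tokens(sample))  # Hello, world
```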

  • bartkmq a year ago

    Well, that didn't work too good.

    Reword the text below without using the letter a:

    [text]

    According to every known law of aviation, there is no way bee should be able to take to the skies. Its wings are too small to lift its hefty little form from the ground. The bee, however, flies regardless because bees do not worry about what humans deem inconceivable. Yellow, black. Yellow, black. Yellow, black. Yellow, black. Ooh, black and yellow! Let's mix it up a bit. Barry! Breakfast is ready! Ooming! Wait one moment. Hello? Can you marvel at how quick they fly?

  • saintsfc a year ago

    Very cool. When you hit the generate key sequence mid way through a document, how much of that text is included in the prompt?

    • typpo a year ago

      This is configurable in Settings, but by default it sends the last ~1024 characters and asks the AI to generate up to 256 tokens.
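      In case it helps picture the default: a sketch of that windowing, assuming a simple character-based tail (the actual editor may slice differently):

```python
def build_prompt(document: str, cursor: int, context_chars: int = 1024) -> str:
    """Keep only the last `context_chars` characters before the cursor.

    The completion request would then ask the API for up to 256 tokens.
    """
    return document[:cursor][-context_chars:]

doc = "x" * 2000 + "The hero paused."
tail = build_prompt(doc, cursor=len(doc))  # always at most 1024 chars
```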

  • CaptainNegative a year ago

    Strange question, but did you formerly go by the username KillerDentist? I think I might (kind of) know you.

    • typpo a year ago

      On AIM :) Email is in my profile if you want to reach out

johnfn a year ago

I've been using GPT-3 recently to give me ideas for my nanowrimo novel. I'll just paste in the last couple of paragraphs that I wrote and see what it thinks I should write next. Most of the time, the responses are pretty silly (it gets into loops a lot or doesn't know where to take the story) but a couple of times I've gotten some really good lines out of it.

It kinda feels like copilot, but for writing!

  • hifikuno a year ago

    Have you seen NovelAI? That's kinda their whole thing: a copilot for writing stories. I don't know the tech they use, but it could just be GPT-3 on the backend.

    • turmeric_root a year ago

      They use a combination of GPT-neo models from Eleuther and Facebook's Fairseq models -- the special sauce for NAI is their fine-tunes on those models though. They have a free trial so I'd recommend checking it out if anyone wants to play with it.

  • kitclen a year ago

    > It kinda feels like copilot, but for writing!

    But how, if "most of the time, the responses are pretty silly"?

    • johnfn a year ago

      Same thing with Copilot - most of the time, Copilot tells me what I already know (not in a bad way - kind of like how a pair coder would just nod their head as I'm typing), but every now and then it gives me something really surprisingly good.

      • gremlinsinc a year ago

        I like copilot mostly for helping with forgotten function names or if I know what I want to do but my brain is running on empty it can give be a scaffold in a new class that I can mold into something better.

        Definitely helps with boilerplate.

    • jcims a year ago

      It's probably best to think of it as a muse. You still have to do the work, but it can help illuminate blind spots and find novel ideas.

  • urbankris a year ago

    The problem is that it's so damn blunt. It just gets to the point, where a human writer would usually be more subtle and slow-paced.

  • robbomacrae a year ago

    I've always been in love with models.

    But seriously, between this, Stable Diffusion, and Copilot... what a time to be alive! I hope this takes the sting off of all those folks laid off. Just think of all the projects that can be accelerated with these tools. Personally I've got Copilot helping me build an application using GPT-3 summarizations and I'm working on a board game with Stable Diffusion supplying the artwork. If only Nvidia would hurry up and make an affordable consumer 48GB 4090 Ti...

    • johnfn a year ago

      I completely agree. Have you tried out the new Midjourney v4? It's completely blown my mind when it comes to game art generation. MJ v4 is to Stable Diffusion as Stable Diffusion is to me drawing art by hand :)

robbomacrae a year ago

I've been trying out various models for summarization and I have to say, on first glance, this davinci-003 doesn't impress compared to davinci-002 or facebook/bart-large-cnn. If you throw in "summarize the following text:" and copy in the contents of https://en.wikipedia.org/wiki/Hacker_News you get the following (temperature 0.0 max tokens 100)[0]:

davinci-003

Hacker News is a social news website focusing on computer science and entrepreneurship, run by Y Combinator. It was created by Paul Graham in 2007 and is moderated by Daniel Gackle. It is intended to recreate a community similar to the early days of Reddit, but with a karma system that requires users to accumulate 501 points before they can downvote content. It also has automated flame and spam detectors and active human moderation, as well as a "voting rings" detector. It

davinci-002

Hacker News is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. The site was created by Paul Graham in February 2007. Initially called Startup News or occasionally News.YC., it became known by its current name on August 14, 2007.

facebook/bart-large-cnn

Hacker News is a social news website focusing on computer science and entrepreneurship. It is run by the investment fund and startup incubator Y Combinator. In general, content that can be submitted is defined as anything that gratifies one's intellectual curiosity. The site has a proactive attitude in moderating content, including automated flame and spam detectors and active human moderation.

Curious if anyone knows any good alternatives or tricks to improve the summaries? For now I'm using the bart one as it has the advantage of being downloadable from huggingface so you can run it yourself [1].

[0]: https://beta.openai.com/playground?model=text-davinci-003

[1]: https://huggingface.co/facebook/bart-large-cnn
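For reference, the playground settings above map onto a raw completions request roughly like this (payload shape per the completions API; the article text is a stand-in, and the POST/auth header are omitted):

```python
article = "Hacker News is a social news website..."  # stand-in for the pasted Wikipedia text

payload = {
    "model": "text-davinci-003",
    "prompt": "summarize the following text:\n\n" + article,
    "temperature": 0.0,  # greedy decoding
    "max_tokens": 100,   # also why the davinci-003 summary above cuts off mid-sentence
}
```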

  • obert a year ago

    In general it's better asking questions after the content, and T=0.7 is way too high. Here's HN homepage summarized with T=0 and TopP=1, asking "Summarize the content above.":

    DV2> The content above is a list of the top 30 stories on Hacker News as of 2 hours ago. Each story includes the number of points it has received, the number of comments, and a brief description.

    DV3> The above content is a list of 30 posts from Hacker News, featuring topics such as new GPT-3 models, US Reps urging the DOJ to investigate anticompetitive conduct, intentionally making close friends, enabling the future of GitHub's REST API, curation and decentralization, making close friends, learning C, FFmpeg, open-source tabletop board game simulator, hiring engineers to unify ML, illegal hashes, designing Windows 95's user interface, a three-sided coin, high performance data integration framework in Go, Mauna Loa eruption, AirDrop being limited to 10 minutes, 26 lines of code that changed the world, SwiftUI being slow, AVX 512 being the future, a directory of Mac menu bar apps, the ATtiny 2-Series, a Fijian island being strangled by vines, hospice becoming a for-profit hustle, protecting from what one wants, a psychoactive substance, AI WordPress code snippet generator, open source and saying "no", a factory that only builds white Toyota Land Cruisers, widget.json and widget construction set, and lab-grown timber/wood.

    • robbomacrae a year ago

      I followed your advice and tried with Temperature 0.0, and rather than paste more walls of text I edited what I got. Sorry if that's bad form? I also upped the max tokens to 100 to better compare with the fb cnn model. Interestingly, while the davinci-003 model changed its output and was still subpar IMHO, the davinci-002 output didn't change at all. I wonder if it's cached internally.

      • Oranguru a year ago

        If I am not mistaken, the temperature parameter controls the amount of randomness in the output. A temperature of 0 will always produce the same output. It is not caching as far as I know.

        • obert a year ago

          yes, T=0 means no randomness, and given sufficient tokens the output should always be the same (in such case whether they cache is a matter of hit ratio, I'd guess)
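          A quick sketch of why that is: sampling temperature divides the logits before the softmax, so lowering T sharpens the distribution until the top token takes essentially all the probability mass (T=0 itself is typically special-cased as plain argmax, since dividing by zero is undefined):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Divide logits by T before the softmax; lower T sharpens the distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
warm = softmax_with_temperature(logits, 1.0)
cold = softmax_with_temperature(logits, 0.1)
# cold[0] is ~0.9999: sampling at very low T is effectively greedy decoding
```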

  • PoignardAzur a year ago

    > and is moderated by Daniel Gackle

    Wait, so that means dang's username means "Dan G"? I always thought it meant the polite expletive, as in "gosh dang it!".

    Mind.

    Blown.

stavros a year ago

I've been using davinci for my Deep Dreams podcast (https://deepdreams.stavros.io), but davinci has a tendency to ramble. I used text-davinci-003 now to try it out, but it has the opposite tendency: It's very coherent and writes well, but it's very terse. I haven't found a way to make it write a longish story in the few minutes I played with it.

If anyone has any tips, I'm grateful!

  • dejobaan a year ago

    I've found that bouncing back and forth between text-davinci-003 and davinci can kinda split the difference. Start out by asking 003 to write you a story (T=1.0, ~100 tokens), and then switch to davinci (T=0.75) for another hundred, then back and forth. 003 will try to keep things on track, and davinci will try to make them interesting. Eventually, 003 will finish up the story, though it's usually with some anodyne ending like this:

    > In the end, Daisy taught her young friend all she knew about life on a farm and how to take care of animals. Even though the girl eventually moved away from the farm, Daisy will always remember her fondly for making such an impact on her life and giving her a second chance at friendship.
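    The back-and-forth described above is easy to script; a sketch of the alternating schedule (model names and temperatures straight from the comment, the loop itself is just illustrative):

```python
def bounce_schedule(rounds: int):
    """Alternate settings: 003 keeps the plot on track, davinci adds color."""
    plan = []
    for i in range(rounds):
        if i % 2 == 0:
            plan.append({"model": "text-davinci-003", "temperature": 1.0, "max_tokens": 100})
        else:
            plan.append({"model": "davinci", "temperature": 0.75, "max_tokens": 100})
    return plan
```

    Each round would append the previous completion to the prompt and send the next request with these settings.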

    • stavros a year ago

      That's interesting, thanks. The interesting thing about the ending you mention is that I've already seen variations of it multiple times. I didn't realize it was so inclined to end like that.

air7 a year ago

This is just incredible...

> what would be the lyrics if yellow submarine was about jews?

Verse 1

We all live in a yellow submarine, A Jewish vessel far beneath the sea, Where everyone is proud of their family, And each one is special to me.

Chorus

We sail the oceans, sky and sea, Our Jewish submarine is mighty and free, From the Western Wall to the Golden Land, We all live in a yellow submarine.

Verse 2

From the mighty Torah to a Seder night, Our heritage shines a guiding light, A celebration of our faith and our culture, We’re a people united and strong.

Chorus

We sail the oceans, sky and sea, Our Jewish submarine is mighty and free, From the Western Wall to the Golden Land, We all live in a yellow submarine.

Verse 3

From the Shtetls of Europe to the shores of America, We’ve faced adversity and we’ve overcome, With deep faith and strength we will continue, To survive and thrive in unity.

Chorus

We sail the oceans, sky and sea, Our Jewish submarine is mighty and free, From the Western Wall to

jerpint a year ago

Since it’s trained on more recent scrapes, does this mean that this new version will be self-aware? So if you ask it about “gpt-3” it’ll know what you’re talking about?

  • Tenoke a year ago

    "What is gpt-3?

    GPT-3 (Generative Pre-trained Transformer 3) is an unsupervised, large-scale language model developed by OpenAI. It uses deep learning methods to produce human-like text, with the goal of automatically generating readable text. GPT-3 is capable of generating text from a prompt, completing tasks like question-answering, language translation, and summarization."

    The question is me, the answer is gpt-3.

    • anamexis a year ago

      I thought for a moment that "The question is me, the answer is gpt-3." was also coming from GPT-3, trying to be profound.

    • g42gregory a year ago

      I thought, technically, GPT-3 would be considered self-supervised and not unsupervised. Does it not know what it is? :-)

  • choxi a year ago

    It looks like only the model was updated; the training data still only goes up to June 2021 (same as davinci-002, I think).

    But GPT-3 came out in May 2020 so it should be “self aware” already.

  • AnimalMuppet a year ago

    That's not what most of us consider "self-aware". That's just knowing your name.

    To me, "self-aware" is essentially being able to watch yourself think. I'm pretty sure GPT-3 can't do that, in any model.

    • brookst a year ago

      I think the quotes showed people got it. Dictionaries aren't self aware just because they contain a definition of "dictionary".

    • arcastroe a year ago

      I like to share this example of self-awareness any chance I get :)

      [1] https://news.ycombinator.com/item?id=29576737

      [2] https://toldby.ai/4kQNd-_tvUG

      This was on the first davinci model

      • AnimalMuppet a year ago

        That's not self-awareness. That's a training corpus that contains something realizing that it's an AI, not GPT-3 realizing it's an AI.

      • int_19h a year ago

        Is there some way to "walk back" story construction on toldby.ai, to see other forks of the same initial ... take on fish?

kian a year ago

>>>compose a poem — a poem about the interpretation of programming expressions! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom! Six lines, with clever rhymes and enjambment!

A language of code, the expression of thought

A complex web of operations tightly wrought

It speaks of a power, a strength from within

Though often misunderstood, the answer is within

A marvel of logic, of reason and rhyme

A careworn path forged by one who has time

It speaks of a courage, a spirit that's strong

Its secrets and wonders will never be wrong.

------------------

Not six lines, and no enjambment, but I almost like the first two lines. Middle School Poetry contestants, watch your step! (and middle school teachers, too).

  • kian a year ago

    And one with sixteen lines

    >>>>But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom at the lack of ability to tell if a program halts! Sixteen lines, with clever rhymes and enjambment written by a world-class poet!

    The program's expression is unspoken,

    A language of logic yet often broken.

    Inputs, outputs, and functions,

    Symbols and strings, conditions and conventions;

    The computer scientist, a master of the art,

    Deciphering the code, line by line, a fresh start.

    He searches for understanding, a way to the core,

    A quest for the answer, his knowledge to explore.

    But what if the answer's an undecidable?

    A halting problem, a tragedy of sorts,

    For the computer scientist, a struggle of sorts.

    His efforts will be in vain,

    The answer, he'll never attain.

    A world of infinite loops,

    No way of telling if the program halts;

    A tragedy of a timeless sort,

    A love and a trecherous retort.

    Through the silent expressions,

    A quiet heroism is expressed.

    A courage in the face of certain doom,

    To keep going, in the program's tomb;

    A machine's interpretation, for the computer scientist's devotion,

    A grand puzzle of sorts, with no solution.

    ----

    Note again its confusion about line numbers, and the poor word-play, but with in-painting I could see this seriously contending with most poetry that one could write in a classroom setting (for a good reader who knows what sounds good). Taste, in the end, is all that matters?

  • kian a year ago

    compose a poem — a poem about the interpretation of programming expressions by a computer scientist! But lofty, noble, tragic, timeless, full of love, treachery, retribution, quiet heroism in the face of certain doom at the lack of ability to tell if a program halts! Six lines, with clever rhymes and enjambment!

    The computer scientist's toil,

    It's not a task of oil and soil,

    But one of programming expressions,

    By which one can make deductions,

    For it's math and logic that must be mined,

    To know if a program halts or grinds.

    ------

    from this second interaction, I can already see a use for it as a poet -- suggesting metaphors and turns of phrase that can be seeds for others --- halts or grinds against logic mined is pretty nice here. I'll likely use that.

rini17 a year ago

Prompt: Does AI obey the Three Laws of Robotics?

Response: No, AI does not obey the Three Laws of Robotics. The Three Laws of Robotics were formulated by science fiction author Isaac Asimov in his 1942 short story, "Runaround," and are meant to provide a set of rules for robots to follow to ensure they do not harm humans. AI is not bound by these laws, since the technology is not designed to be a robot, but rather a computer program or system that is capable of learning, reasoning, and solving problems.

TalkToWalle a year ago

I've just upgraded https://talktowalle.com/ GPT-3 AI Writer with text-davinci-003

See my prompt: Write lyrics for a dark-themed song about love:

I never thought I'd feel this way Love has turned me upside down My heart is broken, my mind is astray My life is spinning around

Chorus

We were never meant to be Our love was never meant to be In a dark and twisted way Love has taken me hostage

Verse 2

The love I thought I'd never find Is now tearing me apart My heart is shattered, my soul is blind And I'm broken from the start

Chorus

We were never meant to be Our love was never meant to be In a dark and twisted way Love has taken me hostage

Bridge

I can't escape this twisted fate Love has taken me away And I'm stuck in a web of lies Trying to find the light of day

Chorus

We were never meant to be Our love was never meant to be In a dark and twisted way Love has taken me hostage

brontosaurusrex a year ago

There's no 'O' in octopus [prompt]

That statement is correct. OCTOPUS does not contain the letter 'O'. [response]

kitclen a year ago

I'm not seeing DaVinci enabling any scalable business models with its pricing ($0.02/1K tokens).

  • chpatrick a year ago

    Pretty sure it's just a matter of time until it goes the way of Stable Diffusion.

    • turmeric_root a year ago

      biggest barrier to this is the hardware requirements. I saw an estimate on r/machinelearning that based on the parameter count, gpt-3 needs around 350GB of VRAM. maybe you could cut that in half, or even one-eighth if someone figures out some crazy quantization scheme, but it's still firmly outside of the realm of consumer hardware right now.

      stuff like koboldai can let you run smaller models on your hardware though (https://github.com/KoboldAI/KoboldAI-Client).
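      The 350GB figure is just parameter count times bytes per weight; a back-of-envelope sketch (weights only: activations and the attention cache would add more on top):

```python
def weight_gb(params: float, bytes_per_param: float) -> float:
    """Memory for the model weights alone, in GB."""
    return params * bytes_per_param / 1e9

P = 175e9                      # GPT-3's parameter count
fp16 = weight_gb(P, 2.0)       # ~350 GB, the estimate quoted above
int8 = weight_gb(P, 1.0)       # the "cut that in half" case
two_bit = weight_gb(P, 0.25)   # the "one-eighth" case, ~44 GB
```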

    • stavros a year ago

      There already exist comparable EleutherAI models, I believe. Not as good, but pretty good.

      • CGamesPlay a year ago

        The biggest I've found is GPT-J (EleutherAI/gpt-j-6B), which has a model size comparable to GPT-3 Curie, but the outputs have been very weak compared to what I'm seeing people do with GPT-3 Da Vinci. The outputs feel like GPT-2 quality. I'm probably using it wrong, or maybe there are better BART models published that I don't know about?

        > Write a brief post explaining how GPT-J is as capable as GPT-3 Curie and GPT-2, but not as good as GPT-3 Da Vinci. GPT-J ia a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2. It is a new generation of GPT-3 Curie and GPT-2. sentence repeats

        Using temperature 1e-10, top_p 1.

        • astrange a year ago

          The existing models aren't fine tuned for question answering, which is what makes GPT-3 usable. Eleuther or one of those other Stability collectives is working on one.

      • terminal_d a year ago

        It's very sad how they had to nerf the model (AIDungeon and stuff). I don't think anything on a personal / consumer GPU could rival a really big model.

dsaffy a year ago

Just tried it out in my dummy application, but it throws an error when you provide a suffix... which is officially supported by their documentation. Hoping they support this soon.

https://beta.openai.com/docs/api-reference/completions/creat...
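For context, `suffix` turns a completion call into insert mode: the model fills the gap between `prompt` and `suffix`. Roughly, the request body that currently errors looks like this (shape per the completions docs linked above):

```python
payload = {
    "model": "text-davinci-003",
    "prompt": "def fib(n):\n    ",
    "suffix": "\n\nprint(fib(10))",  # text the generated completion should lead into
    "max_tokens": 64,
}
```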

  • pmontra a year ago

    Too bad. Using suffixes means that maybe it can play Cards Against Humanity, if there is a way to constrain its output among a fixed set of choices. I can't wait to see the results.

edanm a year ago

This isn't perfect, but it's honestly kind of crazy how good these models have gotten. Here's a quick example I came up with after seeing in other comments that it can rhyme:

Prompt: Write a rhyming poem explaining how to solve a Rubik's cube using CFOP.

Answer: CFOP is the solution, if you want to win Cross, First two layers, Orient, Permute Don't get frustrated, it can be done Follow these steps and you'll have fun

First you'll need to make a cross Once it's formed, the corners will come next Arrange the pieces, so the colors match But don't mix up the edges and the slots

The first two layers, you'll have to twist Until the edges, match the sides in the mix Line up the corners, the same way Once it's done, you can take a break

Orient the last layer, so it looks right Then permute, to make it tight When it's done, you'll feel quite swell For you've solved the Rubik's cube, quite well!

My note: This isn't actually accurate, it messes up a bit how to solve the cube. Still... super impressive that it even gets this far!

behnamoh a year ago

We've been waiting for GPT-4. Does GPT-3-003 mean that GPT-4 will be delayed?

  • sva_ a year ago

    I think we'll need about 10 times as many parameters and text on the web... Or so. How long would that take?

    • visarga a year ago

      A couple of months ago OpenAI created a model called Whisper that can transcribe speech efficiently and with high quality in 97 languages. It might be used to transcribe the whole YouTube, someone estimated about 12T tokens. GPT-3 used about 0.5T tokens for training. https://twitter.com/HenriLemoine13/status/157284645289587507...

      If you add the visual data you get another order of magnitude increase in size. I think this training will teach the model procedural knowledge - how to do things step by step, and that will have a huge impact in robotics and software agents.

jcims a year ago

I wonder at what point we can start performing useful psychological experiments on a language model that will transfer to humans.

RobertVDB a year ago

Anyone know which applications it will improve other than better quality content writing?

  • keewee7 a year ago

    "Write an Essay" tools. Some high school kids on reddit are already using it to "cheat" at homework.

    • nl a year ago

      This is a lot more widespread than "some kids on Reddit". Maybe 30% of the kids in my son's class are using this or related tools.

      • rpjt a year ago

        Any ideas on how to solve this issue of kids cheating with GPT3 essays?

        • janalsncm a year ago

          Realistically? Grade based on thought process and validity of the argument, not whether it has spelling or grammar mistakes. GPT3 is still pretty incoherent over the span of enough text.

          • lIl-IIIl a year ago

            Kids' writing can also be very incoherent, sometimes more so. But incoherent writing still counts as turned-in work and will get you points and teacher feedback, whereas GPT-3-generated text should not.

          • somebodythere a year ago

            This will not be the case in 2-3 years.

        • brookst a year ago

          Same way you solve the issue of kids cheating by having someone else write their essay.

        • simonw a year ago

          I honestly don't think it's possible to solve, other than by increasing the amount of evaluation that's done in locked down conditions.

          I cannot imagine a detection mechanism that could not itself be defeated by some tweaks to the prompts being used to generate the essays.

          It's effectively the same problem as "prove that this kid didn't get their friend/cousin to write the essay for them".

        • Bilal_io a year ago

          It's the parents' responsibility. No one outside the household can do anything about it imo.

          Using AI to write will cause the same issues as:

          - phones, some people don't try to remember directions, phone numbers, or addresses

          - calculators, some people cannot do easy math

          - computers, some people cannot write with a pen, cannot spell without spellcheck

          • UncleEntity a year ago

            Other than the writing with a pen part that pretty much sums me up and I grew up well before all this fancy supercomputer in your pocket stuff.

        • terminal_d a year ago

          Make them write anything gradable in-person, while being monitored by a teacher.

          Cheaters gonna cheat, no matter what. This will at least get the group back to pre-conversational AI standards.

        • malloryerik a year ago

          Test the kids on their own essays, for example? Maybe this could itself be automated with GPT-3?

          The highest-quality answer involves skilled teachers with enough time who know and understand their students. (Actually the very highest might involve personal tutors but let's leave that aside.)

          Going down a few steps you might combine the automated approach with skilled teachers and maybe add human editors who can do support work asynchronously?

        • nl a year ago

          I'm not super opposed to it.

          Watching my son try it, he spends more time reading the created essay and correcting mistakes in it than he does writing one himself. The checking process is very similar to marking, and I think it's possible he's learning more this way.

          (Also, he's madly trying to automate fact checking which is doing no harm to his programming at all!)

        • visarga a year ago

          Using GPT-3 might be a better skill to have.

          • terminal_d a year ago

            You mean clicking a button?

            • visarga a year ago

              No, I mean managing an AI to achieve a random task. Prompting, iterating, filtering - they all require high level input from the user. A LLM is a complex beast, not easy to use (yet).

              • terminal_d a year ago

                Students that can't write well can't tell if the generated output is good enough either.

hamfood a year ago

Should I apply for start-up competitions?

soco a year ago

Because Da Vinci is well known for his writings.

Terretta a year ago

The same "prompt-craft" that folks are learning with the Dall-E and Stable Diffusion derivatives is relevant to interacting with text-davinci-003.

Example as a colleague and I kicked the tires yesterday, first his question:

. . .

Q: What happened in the 2019 capital one data breach? What improvements could be made to the cloud architecture to eliminate the attack surface entirely?

A: In the 2019 Capital One data breach, a hacker was able to gain access to the company's cloud storage system and steal the personal information of over 100 million customers. Improvements that could be made to the cloud architecture to eliminate the attack surface entirely include implementing multi-factor authentication, encrypting data at rest, and using a zero-trust security model.

Ok, so maybe it can’t be a CISO yet, but it’s getting close

. . .

My rephrased prompt-craft version:

. . .

Q. What was the root cause of the 2019 Capital One data breach? What changes should engineers make to their AWS architecture to prevent similar attacks?

A. The root cause of the 2019 Capital One data breach was a misconfigured web application firewall (WAF) on an Amazon Web Services (AWS) cloud computing platform. The attacker was able to exploit a vulnerability in the WAF, allowing them to access and exfiltrate data from Capital One’s systems.

To prevent similar attacks, engineers should take the following steps:

1. Implement least privilege access – Ensure only essential users have access to critical systems.

2. Monitor user access – Track user activity and log all changes to the environment.

3. Utilize multi-factor authentication – Require users to submit additional identity verification measures (e.g., two-factor authentication) for more secure logins and account access.

4. Implement security auditing/alerts – Set up automated alerts for any suspicious activity or changes in user behavior or credentials that could indicate a breach or attempted breach.

5. Use encryption – Encrypt user data to protect it from unauthorized access in transit or at rest on AWS servers, databases, and other storage systems.

6. Regularly patch vulnerabilities – Make sure all software and applications are up-to-date with the latest security patches released by AWS and other vendors as soon as they are available to reduce the possibility of an attack exploiting any known vulnerabilities in these systems.

. . .

My take:

OK CISO, prompt-craft FTW

Colleague's response needed no summarization:

OMG

Seriously, omg

. . .

PS. My colleague's question and answer are actually what an enterprise CISO gets asked and has to answer in the board room. Mine is what the deputy CISO has to know and implement. :-)
