temp00345 a year ago

Personally, I don't want to waste my energy reading generated text.

At the core of it, people write in order to transmit some deeply distilled messages about life. It is about sharing the experience of being alive, either as advice or warning about what might happen or will inevitably happen to us. The love, the pain, the emotions, the fear of death, the acceptance - good writing is where we read these things in between the lines, where we feel and empathize with the author and as a result gain some deeper insight which helps us adapt to the ever-changing circumstances around us.

There's a lot more encoded in that text than just semantic meaning of words or phrases. Not that all human writers have the 'talent' to encode more than that, but the ones who do manage to shift something inside us.

As of right now, I can feel that a text was generated. Same with images - and with sound. I can't exactly explain it, but it's the same kind of 'plastic' feeling and it's similar regardless of the form (text, image, sound).

I'd really like us to be able to keep this edge over the algorithms, but this might be impossible in the long run.

  • lm28469 a year ago

    > Personally, I don't want to waste my energy reading generated text

    They got you covered, you can vomit the text back into ChatGPT and ask it to summarise it for you ;)

  • spaceman_2020 a year ago

    There are a lot of use cases where I definitely don't mind AI-generated text. Keyword-focused search results, for instance.

    If I'm Googling "what is npm" or "what are the five highest mountains in the world", I don't care one bit if the answer is from a human or a machine.

    • mattpallissard a year ago

      I care about it being correct. I might not care who wrote it, but as it turns out who wrote it is a good indicator of correctness.

      • spaceman_2020 a year ago

        Most of the top search results for such keywords are usually written by $30/article writers anyway. None of them have real subject matter expertise.

        An AI that aggregates all the knowledge from the internet is more likely to be correct than some writer you hired off UpWork to pump out 100 keyword rich articles on 30 different topics.

  • op00to a year ago

    I write to share technical details of systems I design or help support. I would love to offload this to a computer.

  • itsoktocry a year ago

    >At the core of it, people write in order to transmit some deeply distilled messages about life.

    People write for all kinds of reasons, including this one. Agreed that this case isn't reproducible by machines...but the other 50-80% of writing, including technical writing, might be.

  • brookst a year ago

    I’d love to see a blind test to see what we really can distinguish. I think I’d detect generated text at better than chance, but I’m not 100% sure.

tjpnz a year ago

A typical Medium post might include two (perhaps three, if you're lucky) paragraphs of low quality content and a few snippets of code which were helpfully provided as screenshots instead of text. Punctuate that with some boilerplate about the author's Twitter account and that's pretty much all you're getting.

Maybe I'm generalizing just a tiny bit, but my god, it's full of crap. If I were Medium I would be doing everything I could to encourage AI authored content.

  • anhner a year ago

    so you say it's full of crap, and the solution is for them to allow even more, even crappier crap?

    • a_c a year ago

      There is some truth to it. I personally don't read most blog posts, including Medium. The quality of content is IMO proportional to the author's experience and the effort put into producing it. On Medium/Twitter/whatever blogging platform, it is just too easy to create crap, crap that aims to "build up an audience". Flooding them with AI-generated content might prompt readers to go somewhere else, hopefully somewhere with more depth.

      Books are where I find the most value.

    • SuoDuanDao a year ago

      I think the argument is, AI generated content is less crappy crap than the current crop.

    • canjobear a year ago

      They’ve tried to attract high-quality human work for a while, and it doesn’t seem to have worked for them. Might as well try something new instead of digging in.

    • BeFlatXIII a year ago

      Much like /r/SubredditSimulator, at this point, the robotic crap will stink less than organic feces.

flakeoil a year ago

I suppose if the text is good and adds value and is factually correct then it does not matter if it's a human or an AI assistant who wrote it. (Except in the cases of plagiarism of course.)

The issue is that many or most AI-generated texts will not be that good in terms of actual content. There are gaps in the conclusions, gaps in the story line, wrong facts, etc.

This means there will be a huge load of articles, posts and books out there which do not make much sense, but that is only noticeable after reading the text properly, since the text still reads as fairly well written if you just skim it.

It also means that articles need to be reviewed, vetted and scored by humans before we can know whether they are worthwhile to read.

Kind of like it has been up until now; the only difference is that there will probably be such a huge amount of content created that needs reviewing. And that might be a task that is impossible to do.

In the end we might stick even more to reviewed and vetted articles, books etc and be less open to new ideas.

  • tempodox a year ago

    We are already drowning in noise, there's too much blogspam and other BS. Now that algorithms can produce orders of magnitude more BS in a fraction of the time, the question of “is this article even worth reading” only gets harder. I'm already suffering from “link fatigue” as it is.

    • hef19898 a year ago

      The answer to that is, obviously, another AI reading the AI created content. Should make training of new content AIs so much easier!

  • epistemer a year ago

    I don't see how Medium or even the concept of shared content has a future.

    I am sick of ChatGPT in its current form, but a future AI assistant with even the slightest bit of personalization will put Medium out of business.

    "Assistant, show me something interesting" is, I would think, the future prompt. The slightest customization based on browsing history would put the assistant far beyond anything someone else can show me on Medium or any other platform.

    I just can't see how articles scored as worthwhile by humans have any future. "Interesting" is not objective or statistical; it is entirely personal.

    Most of these currently popular content sites are going to go the way of Myspace. They are just tinkering with myspace music or whatever as if it matters.

  • iliane5 a year ago

    > I suppose if the text is good and adds value and is factually correct then it does not matter if it's a human or an AI assistant who wrote it.

    The real issue is the bad content. It's perfectly acceptable for a human to publish a bad article/blog post, but when language models do it, it's more or less spam.

    • smeej a year ago

      Why is it less spammy just because a human wrote it? Plenty of humans produce spammy content.

      • _Algernon_ a year ago

        The difference is the scale it allows. The amount of writing a human can generate is relatively limited, even if it is bad writing. GPT and its equivalents let you turn electricity into spam with minimal human intervention. That allows unprecedented volumes of spam of sufficient quality that it is hard to detect without closer examination.

      • iliane5 a year ago

        Spam is spam whether made by a human or an AI.

        Good, valuable content is good whether made by a human or an AI.

        Bad or uninteresting content is spam if it’s being pumped out by an AI, but it’s a learning experience/personal expression if it’s made by a human.

        The difference is intent.

      • krageon a year ago

        It is not, and most of the internet is already trash. We're speaking a lot about scale - but there are currently billions of humans and most of what they put online is already trash. It's not going to materially change - it is currently almost impossible to manually (on your own, at least) figure out what is good, and it will be impossible in the future as well.

  • ClikeX a year ago

    I've definitely used AI-generated text to help me out with suggestions. It's pretty nice when you have a bit of writer's block. All the important stuff like facts, numbers, and code examples should be written by yourself, in my opinion.

  • teddyh a year ago

    And how do you trust a reviewer? They might be an AI, too.

anothernewdude a year ago

Must be a challenge for those GPT detectors to tell the difference between the typical Medium post and AI generated spam.

dymk a year ago

Good luck enforcing that. I mean that seriously - how would they tell?

  • whalee a year ago

    The arms race between generators and detectors has just begun.

    • CGamesPlay a year ago

      The arms race was first automated in 2014, when the GAN paper was published. That's what GANs do: half the network generates garbage, the other half identifies if it was garbage or real. https://arxiv.org/abs/1406.2661

      • ad404b8a372f2b9 a year ago

        It's only through careful configuration that the losses of the discriminator and the generator are balanced so that the two can improve gradually together. It's much easier for one loss to explode while the other goes to zero, breaking the training of the GAN.

        So the GAN training scheme is not a setting that holds for the real-life cat-and-mouse game of generated-content detection. It's almost trivial to break a trained model whose weights you have.

    • ClikeX a year ago

      It's essentially the extension of the arms race between writers and plagiarism detectors.

  • iliane5 a year ago

    AI content detection is a losing battle imo. Models will become better and people will use the detectors to rephrase the text so it doesn’t get detected.
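For readers who haven't met GANs, here is a minimal sketch of the adversarial loop discussed above, in the spirit of the GAN paper linked in this thread. The 1-D toy data, parameter names, and learning rate are illustrative choices, not from the paper. It also shows why the balancing complaint above is real: simple simultaneous-gradient training like this can oscillate or collapse.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Real data: samples from N(4, 1). Generator: fake = w*z + b with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(a*x + c), trained to score real high, fake low.
w, b = 1.0, 0.0   # generator parameters
a, c = 0.1, 0.0   # discriminator parameters
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)
    fake = w * rng.normal(0.0, 1.0, batch) + b

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((d_real - 1) * real) + np.mean(d_fake * fake))
    c -= lr * (np.mean(d_real - 1) + np.mean(d_fake))

    # Generator step: ascend log D(fake), the "non-saturating" loss.
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b
    d_fake = sigmoid(a * fake + c)
    w -= lr * np.mean((d_fake - 1) * a * z)
    b -= lr * np.mean((d_fake - 1) * a)

print(f"generator output mean is now about {b:.2f} (real data mean is 4.0)")
```

The generator's output mean typically drifts toward the real mean, but with both players taking simultaneous gradient steps the dynamics need not settle cleanly, which is exactly the fragility described in the comment above.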

phkx a year ago

The formulation of their AI policy is a bit too general. "Created with AI" could apply to both form and content. Does using a spell/grammar checker count as using AI?

My main concern is with the content, because we're more experienced with the failure modes of humans when interpreting information. I like the other approach shown in the post, which is to cite an AI as you would any other source. Best to include the prompt then, which is also good practice when citing humans. Knowing which model generated the content from what prompt would at least enable some judgement of the biases etc. present in the response.

Looking forward to the first AI-only interview magazine.

jackschultz a year ago

My theory on how this will progress is that the skills of an editor will become even more valuable than they are today.

For article writing, or even non-fiction book writing, step one is finding the data, making connections, and planning out the topics and flow, all done by a human. In a way, that's how it's currently done; the only difference now might be making sure to format that work in the way the AI text generation needs in order to handle it.

The words are generated, and then the human editor comes in and corrects issues that come up, say weak connections between topics. This can be fed back to the AI to help it improve next time.

Fictional prose will still be around, as there isn't much better than being able to read and connect with an author, whereas reading and connecting with an AI isn't quite something we can accept (for now).

But non-fiction writing is quite mechanical already. If you read the Economist like I do, you'll know what I mean. That writing and connecting of bullet-point lists in the order a creator wants can be quite valuable, and having someone do an active edit at the end will be just as valuable.

xbmcuser a year ago

Looks like people are waking up to the fact that GPT and other AIs like it are going to change economic systems and white-collar work. Today most people are just playing around with it, but as more people realise its capabilities, start using it as the tool that it is, and find better ways to use it, it is going to bring about big social and economic change.

kkfx a year ago

I'm more interested in the opposite: the possibility of finally reaching a semantic search that works at web scale.

Processing news to elicit facts is more and more time consuming due to the multiplication of sources and vast propaganda. Having something able to crawl and distill content with a certain training is FAR more interesting than improvements in crappy auto-content-creation that keeps flooding the infosphere beyond weather, earthquakes and other news like that.

echelon a year ago

> [...] comment captured a common sentiment: “I’m not interested in my paid subscription subsidizing AI. I signed up to pay humans their worth for doing real work. I don’t want to give AI the eyeball hours or oxygen on a subscription platform.”

We let robots do so much already. The level of Luddite resurgence is through the roof.

I'd absolutely use AI to enhance my work. That doesn't make it any less meaningful.

  • another_story a year ago

    > I'd absolutely use AI to enhance my work. That doesn't make it any less meaningful.

    I would disagree with this when it comes to many creative works. Things like gallery showings are there as much to sell art as they are to sell the connection with the artist. There's a lot more that goes into the experience of a creative product than just the product itself.

    • ClikeX a year ago

      Plenty of musicians have created music using randomised generators since forever. Using AI doesn't have to make the art less meaningful, it depends on what you do with said output.

      Plenty of "low effort" art gets valued at arguably ridiculous prices, purely due to the name of the artist. Is suspending buckets of paint on a pendulum to let physics do the work worth more than providing an AI with a prompt?

      In the end, the value of a piece of work is in the eye of the beholder.

      • epistemer a year ago

        It is the same stupid arguments from 50 years ago that the electric guitar is just noise.

        These people who are so offended by AI art and music are, I suspect, the type of people who would NEVER go to an art gallery anyway. They don't even know what you are referring to with the buckets of paint.

        To them, AI art is just a topic to get pissed off about and generate sentences about online.

      • ImPleadThe5th a year ago

        Randomized generators are not the same as AI models built on the back of other artists who will never be paid for their contribution to the model.

        • echelon a year ago

          You never paid the inventor of the English language you're using.

          Or the ideas in your head that are really just part of a prediction model that is a function of the inputs you surround yourself with.

          Every art student copies. Voraciously. Every engineer. Every musician.

          Babies and toddlers are learning machines. They soak in the world around them. For free.

          • ImPleadThe5th a year ago

            Yes, but the English language isn't sold back to me at a premium. Also, it wasn't developed by a private company; it evolved over time through the effort of many generations of people.

            Human learning is not the same thing as building a for-profit computer model, and making this comparison is part of my problem with the industry.

    • ChrisMarshallNY a year ago

      I remember an old segment of a news show (It may have been 60 Minutes, I used to watch that, a lot), in the 1980s, that featured a New York "artist." He was a fairly "cliché"-looking chap (Two-tone punk hairdo, whacky sunglasses, obnoxious dress, etc.), and he was quite smug that he didn't paint any of the work attributed to him.

      He hired minimum-wage people to do the work, and he spent all his time, schmoozing and going to parties, where he would evangelize it.

      Apparently, he did quite well.

      As an artist myself[0], one who had struggled to sell anything, I found it rather offensive.

      [0] https://littlegreenviper.com/art/Cavalier.png

    • harvey9 a year ago

      Painters and sculptors have had assistants working with them long before AI came along, but you don't usually meet them at gallery showings either.

  • ergonaught a year ago

    > We let robots do so much already. The level of Luddite resurgence is through the roof.

    The "This is just Luddites 2.0" sentiments are amusing in a sad way, because they seem to be expressed primarily by people who appear to be positioned well enough to readily understand why this is going to go horribly wrong.

  • zmgsabst a year ago

    The Luddites were right:

    > Luddites objected primarily to the rising popularity of automated textile equipment, threatening the jobs and livelihoods of skilled workers as this technology allowed them to be replaced by cheaper and less skilled workers.

    The industrial revolution made life worse for generations in the early 1800s. We’re already seeing similar regressions — eg, app-driven-jobs and decreased customer service.

    And you should look up how the automation of labor turned out for horses.

    • lm28469 a year ago

      Yeah I just don't get how people miss this elephant in the room.

      Automation as it has been sold to us since the 60s is a complete failure on the social front. Wages stagnate, we work as much if not more than 40 years ago, the retirement age is going up, job security is dead, inequality is growing, poverty is rising, virtually everywhere in the West.

      It's cool to automate factory jobs, but if it just pushes factory workers into Amazon warehouses, Uber, e-scooter juicing or food-delivery jobs, I don't see it as a net positive. Capital owners sure generate more capital, but that's the last thing I'd like to see increasing.

      • rexreed a year ago

        While this may be true, there's no practical opposition to the force of automation. What can be automated inevitably will be automated. The efficiency of machines doing what people can do is too irresistible, even to the people who consume the technology, and even more so to those who employ or implement it. People increasingly want more with less labor, and consumers most often choose things that are automated and available without human labor over those that aren't. There's very little that can practically counterweight the increasing automation of all the things.

      • soco a year ago

        The real issue is in your last phrase. It's not the automation itself which didn't deliver, but the fact that the general public doesn't see much of the benefits. Yeah, okay, we got automated and have to retrain for other jobs, but those new jobs look just as shitty as the previous ones.

        • nprateem a year ago

          Except a more diverse range of products and services that are cheaper and higher quality.

          • duckmysick a year ago

            What periods of time are we comparing? Sure, products and services are better than before the industrial revolution. But within my lifetime, I noticed certain goods in categories like food, furniture, electronic appliances, got worse. Customer service also took a nosedive.

          • thedorkknight a year ago

            Not food and housing. Getting a brand new AAA video game mostly generated by AI dirt cheap is awesome, but that's not going to matter much when you can barely afford rent and groceries

          • lm28469 a year ago

            Good thing life is all about owning products and consuming services!

      • krageon a year ago

        There are two issues here: On one hand, the required amount of actual work is decreasing. This is cool and good.

        On the other hand, the demanded amount of fictional work (i.e. things that don't need to happen except folks must "make money") is not decreasing at all. This is stupid and bad.

        It's not clear to me why it is fair to demand that the former stops as a response to the latter. The latter is what should be addressed, because addressing it is what gets us to luxury gay space communism when combined with the former. Which would be a nice place to be.

    • schnitzelstoat a year ago

      I mean, you left out the part where, after that, it massively improved standards of living to levels entirely unimaginable prior to the industrial revolution, making prosperity available to all and not just a tiny selection of merchants and aristocrats.

      We can hope that the boom in AI will lead to something similar, with automation making us far more productive.

      • thedorkknight a year ago

        Sure, it'll be great for -some- of the people who come after us. Unfortunately, in this analogy, we're the horses

    • sitkack a year ago

      Gummy Bears are delicious!

      Looking for that new Sunset Magazine issue, Backyard BBQing the Dev.

  • gitfan86 a year ago

    It is an emotional reaction. Logically, these people know that Photoshop is very powerful without AI, and that plenty of artists need Photoshop to do their work, yet they are not calling for a ban on Photoshop-created art. They are calling for banning AI art because they are afraid.

    • lm28469 a year ago

      Ah yes, of course, we're scared; there are absolutely no moral or ethical questions to ask, no consequences for societies, employment, &c. to plan for.

      We're just scared, so we should shut up and accept whatever is coming as God's truth... the invisible hand and the magic of the market will make everything fall into perfect working order, as it always does, without any external intervention.

      • gitfan86 a year ago

        Honestly, I was very scared too, when I saw it coming over a year ago. I see where it goes next, and there are enormous consequences for society and humanity in general, far beyond art generation being much easier than with Photoshop.

        My advice is to try and live in the present and accept that the future is unknown, and accept that that was always the case, you could get hit by a bus any day regardless of what happens with the pending singularity.

      • krageon a year ago

        With any new technology there is a lot of hand-wringing. Nobody says new tech cannot be criticised, but claiming it is by definition evil because it is new (which is essentially what is happening) is not useful. It's tedious to read and only has a positive impact on people that already agree with you. As such, it is not just tedious - it is also useless.

    • lancesells a year ago

      > They are calling for banning AI art because they are afraid

      I'm not afraid for myself, but I am for the future and the willingness to hand off creation to these tools. Personally, this is another example of a few deep pockets monetizing and selling back another piece of humanity, all without asking or caring. Facebook monetized your family and relationships. Google monetized the knowledge of humankind.

      Right now, people using a tool like Photoshop are craftsmen of varying degrees. You could take it away and give them a different tool and they'd be able to achieve something similar. You can take away a laptop from a writer and they'll still be able to write. Someone who illustrates in Procreate could lose their iPad and still achieve something with a pencil.

      Now, a couple of generations down the line, the skills will start to be gone and the only creation will be made at $99.99 a month for your OpenAI suite. We've been on the road to idiocracy for a while and it's just been sped up. I don't think it's a doomsday scenario, but I do think it's a potentially uglier future.

gareth_untether a year ago

‘Human approved’ would be a better label than ‘AI assisted’.

greenSunglass a year ago

I hate the feeling when I read at the end of an article: "This post has been generated by [whatever AI]." Did I just waste 5 minutes of my life on something that is potentially not real, approximate, or unchecked? How many of those generated articles did I come across without knowing it?

Mockapapella a year ago

This feels like a reasonable solution, though I can’t help but feel a more convenient solution would be to integrate tools directly into the Medium writing experience, and then anybody who uses those tools automatically has their article marked as having used AI assistant technology.

  • pixl97 a year ago

    So you're really just giving creators an incentive to use AI and paste the article into the submission box, so that people who don't want AI articles can't exclude them.

imnotlost a year ago

The only real solution to the deluge of content, human or otherwise, is paid curation and moderation.

jcq3 a year ago

There has always been plagiarism and copying; GPT just makes it easier and faster... Nothing new, just more efficient. So what? The purpose of AI is to accelerate the production of worldwide intelligence, and that comes with downsides like producing more noise. No tradeoff.

  • schnitzelstoat a year ago

    Yeah, most of the medium posts about Data Science were people just making a tutorial on how to make a graph in Plotly or how to use kNN in scikit-learn etc.

    It's been trash for ages. Now it'll just be even more trash.

    It's like someone said: it's called Medium because it's neither rare nor well done.

  • YurgenJurgensen a year ago

    There has always been war, just that atomic weapons make it easier and faster. Nothing new; just more efficient.

lloydatkinson a year ago

I can already see the ridiculous requirements on some platforms, to the point where Grammarly counts as "AI generated", or even use of Microsoft Word's spelling and grammar checker has to be disclosed.

jasfi a year ago

This is a good first step. A lot of writers can gain by using ChatGPT, etc., to write outlines and generate ideas. A healthy mix is fine, but then say so: give attribution to the AI as a source.

Overtonwindow a year ago

I don't want to read something written by a computer. I'm sure it's neat and all, but I would be extremely annoyed if I found out an article was not written by a human.

victor106 a year ago

It's a little naive to think that people posting AI-generated content will be transparent about it.

Unless platforms have software to automatically detect and/or flag AI-generated content, it won't help.

  • pixl97 a year ago

    I suspect that most automated detection will quickly begin to fail as GPT's abilities improve while those of the worst human writers don't.

XCSme a year ago

What value does Medium bring to article writers? Exposure? Or do most use it just because they don't want to set up their own blog?

  • erellsworth a year ago

    I used to publish on Medium. The draw was definitely discoverability. A writer with no following could get their stories added to a Medium publication that had a large following, and build an audience. It was effective. I went from no followers to over 1k in just a few months, and I wasn't even posting that much. When they switched to the paid member model, you could even make a little money off it.

    However as time went on Medium started implementing the tyranny of the algorithm, pushing their own recommendations over the stuff you actually subscribed to. Writers, of course, started writing for the algorithm and the quality of stories I was seeing started on a downward spiral. At the same time, for me at least, it became harder and harder to get eyes on my work, which was the only reason to be there in the first place.

    • XCSme a year ago

      But for blogging, it makes more sense to have those sorts of platforms link to external blogs/pages.

      With YouTube it makes sense, because hosting your own 4K videos and distributing them across the globe is not easy. But with text/blogs, it makes more sense to have a directory of (external) blog posts than to post your original content directly on a third-party website.

      • erellsworth a year ago

        I'm not sure what you mean. In what sense would a directory of blogs be comparable to Medium?

        Yes, it's easier to setup a blog for yourself than to host and distribute 4k videos. But the principle is the same.

        Assume you had the knowledge to self-host your own 4k videos, and assume cost didn't matter to you. You would still have a strong incentive to post your videos to YouTube because that's where the viewers are. The same thing goes for Medium. You can find readers there. That's the draw.

        Also, Medium had (still has?) a feature where you could import your blog posts from your own site, and it would apply the proper canonical link tag for SEO. If nothing else, it's an easy way to get a backlink, though I don't know how much those matter anymore.

  • whywhywhywhy a year ago

    No idea why anyone would want their writing behind the obnoxious onboarding dialogs Medium has from the reader side.

    • XCSme a year ago

      I never published on Medium, but one of the benefits I thought it provided was that it also works as a discoverability platform, so your articles get promoted/pushed to people browsing Medium?
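As an aside, the canonical-link feature mentioned above boils down to a single tag on the republished copy; the URL here is purely illustrative:

```html
<!-- Placed in the <head> of the Medium copy, telling search engines that
     the author's own site hosts the original article, so the republished
     copy does not compete with it in search rankings -->
<link rel="canonical" href="https://example.com/blog/original-post" />
```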

hunglee2 a year ago

We are already seeing plagiarism scams, where scammers copy paywalled human-written content and paste it into ChatGPT for a tonal rewrite before republishing the freshly laundered material on their own Substack in order to grow an audience, onto which the scammer will no doubt drop his own paywall at some point. Free money to be had, and hard to see how it can be policed.

  • ClikeX a year ago

    We've already seen Instagram Reels, YouTube Shorts, and TikTok steal content from each other all the time.

    Content is a commodity that's losing value, fast. Blogs regularly post meaningless content for the sake of keeping up a regular schedule. I saw this start years ago when junior devs were advised to keep blogs for their resumes. The number of blog posts I've seen documenting the basic syntax of programming languages is insane. Or clickbait posts telling you "why you shouldn't use TypeScript".

    The trend where you need to actively push content for traffic is a bubble that's gonna burst. You're already seeing it happen with streaming services, they're culling their content because it's just becoming way too much.

  • harvey9 a year ago

    ChatGPT could log all its output and sell to/share with the likes of TurnItIn. Won't be long before someone offers a zero-log competitor of course.

    • lm28469 a year ago

      Take whatever ChatGPT gives you and go to another tool, ask it to "rephrase that text" and you're done.

mouzogu a year ago

i barely read human generated content as it is.