22c a year ago

I'd say it becomes a question of ethics and intent.

Are you "experimenting" with AI on human subjects? I think it's fairly well established that it's not considered ethical to experiment on someone without their knowledge or consent.

Perhaps you have some disability or impairment and you're augmenting what would be your own response with the help of AI. You're probably proof-reading the output and are happy that it represents, or closely represents, your views. In that case, perhaps a disclosure is not warranted. You might instead put a note in your profile, but I'd say even that isn't strictly needed if you're willing to accept that the comment you posted represents what you truly wanted to say.

Is your service essentially a bot? I'd say that should be disclosed.

Do you think people would respond or feel differently if they knew your response was generated by AI? E.g., I might upvote a well-intentioned reply even if I don't agree with all the points made, because I think the person has made a considered response to my comments. If someone simply fed my comments into an AI and posted the reply, then I don't think they're really adding value and I'd rather they not reply. If I wanted to talk to an AI, I'd skip the middleman.

9wzYQbTYsAIc a year ago

Yes.

Also, why not go further and include a disclosure label for human-generated content?

But further yet: why disclose only that content is AI-generated, and not also the manner in which it was generated and what human-in-the-loop verification and validation took place before posting?

Unless we are talking about an AI agent posting to social media, in which case the disclosure could be implicit in all posts made by the agent.

rchaud a year ago

All AI content will eventually be piped through some third-party API, similar to GIPHY, so we can go straight to the source and block that script from loading.
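
A minimal sketch of that idea, assuming a single canonical endpoint ever existed: a uBlock Origin-style filter against a hypothetical domain (ai-content-api.example is made up), blocking its script from loading on any third-party page.

    ! Hypothetical rule: block the imagined central AI-content API's script
    ||ai-content-api.example^$script,third-party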

But to seriously answer your question: yes. It should be marked, just as "screen sequences have been sped up for presentation purposes" and "re-enactment" are clearly marked on TV.

smoldesu a year ago

How would anybody feasibly limit it?

theCrowing a year ago

Why should it? If you are doomscrolling, what do you care where the content comes from?

  • 9wzYQbTYsAIc a year ago

    For the same or similar reasons that there are “verified” accounts to show the authenticity of the poster, perhaps?

    • theCrowing a year ago

      I still don't get it. What changes in how you feel if you know whether the content you enjoyed was made by AI or not?

      • 22c a year ago

        Part of what makes being online enjoyable is the ability to interact with other humans who are also online.

        Can you enjoy a game of Quake when playing against a bunch of bots? Sure. For myself, it's a lot more enjoyable to play with a bunch of fellow humans, if I'm lucky maybe even friends.

        Perhaps a sea of AI-generated mediocrity will lead to a bit of a "social renaissance" of sorts, where people tend to interact more with those around them rather than online. This might have a knock-on effect of less exposure to differing world views or different social circles.

      • 9wzYQbTYsAIc a year ago

        In ChatGPT’s own words: “In general, it is important to be transparent about the fact that you are using an AI to assist with your communications, and to make sure that the AI is not being used to deceive or harm others.”

        • theCrowing a year ago

          Those aren't ChatGPT's own words; it's the consensus of the training data and your prompt bias...