simianparrot 2 days ago

> In particular, we'd like to acknowledge the remarkable creative output of Japan--we are struck by how deep the connection between users and Japanese content is!

Translation from snake speech bs: We've been threatened by Japanese artists via their lawyers that unless we remove the "Ghibli" feature that earned us so much money, and others like it, we're going to get absolutely destroyed in court.

  • qoez 2 days ago

    My hunch is that OpenAI used Ghibli as the example in their earlier DALL-E blog posts strategically, because the PM had earlier said that anime was not protected by copyright when used for training. OpenAI is always sneakier than most people give them credit for.

    • ethbr1 2 days ago

      > OpenAI is always sneakier than most people give them credit for.

      There's usually more useful information in what Sam Altman specifically doesn't say than what he does.

  • 47thpresident 2 days ago

    I'm pretty sure this is in response to the Sora anime parodies that have flooded TikTok in the past 48 hours. Seems like OpenAI is acknowledging some strongly worded letters from anime rights holders rather than individual artists, or the response wouldn't be this swift.

  • qlm 2 days ago

    > Japanese “content”

    Sickening

    • simianparrot 2 days ago

      No human writes like this. If he actually did it’s worrying.

      • rossant 2 days ago

        Would you mind explaining? As a non native English speaker I may have missed some nuance.

        • layer8 2 days ago

          The word “content” is often perceived as devaluing creative work: https://www.nytimes.com/2023/09/27/movies/emma-thompson-writ...

          Paradoxically, it signals indifference or disregard for the actual contents of a work.

          • majewsky 2 days ago

            Eevee put it best:

            > I absolutely cannot fucking stand creative work being referred to as "content". "Content" is how you refer to the stuff on a website when you're designing the layout and don't know what actually goes on the page yet. "Content" is how you refer to the collection of odds and ends in your car's trunk. "Content" is what marketers call the stuff that goes around the ads.

            From https://eev.ee/blog/2025/07/03/the-rise-of-whatever/

        • danhau 2 days ago

          The word content. Art would have been the appropriate term.

          • Zacharias030 2 days ago

            some of it is a cultural product too.

          • sph 2 days ago

            Wait until they coopt the word "art" to include AI-generated slop. I dread the future discussion tarpits about whether AI creations can be considered art.

            • _DeadFred_ 2 days ago

              A piece of wood or a rock can be pretty/interesting to look at. It is not art. AI slop might be pretty/interesting, but it is not art either.

            • krapp 2 days ago

              My person in deity, that future has been here for a while now.

              Not only do they consider it art, they call what you and I consider art "humanslop" and consider it inferior to AI.

              • idiotsecant 2 days ago

                This sounds a lot like boomers complaining about kitty litter instead of bathrooms in elementary school

                It's easy to get too chronically online and focus on some tiny weird thing you saw when in fact it's just a tiny weird thing

          • mlrtime 2 days ago

            Disagree, it is content. The Japanese anime (referenced) is specifically made to be marketed and sold.

            • estearum 2 days ago

              Almost every piece of art you've ever seen (by virtue of you seeing it) was made to be marketed and sold.

              Art is overwhelmingly not a charity project from artists to the commons.

              • Kiro 2 days ago

                I presume "by virtue of you seeing it" includes other conditions or I don't understand how you can claim such a thing.

                • estearum 2 days ago

                  Where exactly have you seen art that wasn’t made to be sold? Be specific.

                  • Kiro 2 days ago

                    Friends, family, coworkers, my own, random posts online, everywhere.

                    • estearum 2 days ago

                      Ah yes, the very normal activity of showing your coworkers your hobbyist art! Is this happening a couple dozen times per day?

                      • Kiro a day ago

                        It happens quite often, yes. They are concept artists and designers but they share their own stuff. And just now I opened up Discord and skimmed through some art, pixel art and drawings channels in the many servers I'm in and saw a lot of art that I doubt anyone is trying to sell. People just love to share their creations.

                        • estearum a day ago

                          Yes if you are friends with and deeply networked with professional artists and designers, you'll see a lot more hobby art. Most people are not friends with even one (never mind several) professional artists though.

                          This scenario is irrelevant to my main thesis anyway, which is that people principally do not develop artistry to the levels required for strangers to care about it without doing so as a professional pursuit.

                          That you get to see the exhaust and byproducts of such a professional pursuit isn't a point against it.

                      • fragmede 2 days ago

                        Via Instagram, while they're showing off pictures of their kids and their hobbies... yes? Do you only show your coworkers, what, system diagrams of work things, making even the time between work still about work?

                        Different places have different cultures. Apparently your coworkers aren't supposed to know anything about you beyond what's necessary for them to work with you, but not everywhere in the world is like that, and it seems unnecessary to state that you don't live in such a place in that way.

              • ricardobeat 2 days ago

                Most independent artists will disagree with this statement. They do it for passion, to communicate, to tell stories, to fulfill their own urges. Some works incidentally hit a sweet spot and become commercial successes, but that's not their purpose. On the other hand, the 'art' you see being marketed around you is made specifically to be marketed and sold, with little personal connection to the artist, and often against their own preferences. That's "content".

                • estearum 2 days ago

                  Is that what they tell you when you’re standing in the gallery with a checkbook? Or in the boardroom with a signature?

                  No, you almost never see art that wasn’t meant to be sold. Public art pieces are commissioned (sold), and art in galleries was created by professional artists (even if commercially unsuccessful) 99.99999% of the time.

                  Surely if this wasn’t true, you could point to a few specific examples of art — or even broad categories of art — that weren’t made to be sold and that you have personally seen?

                  • ricardobeat 2 days ago

                    I think you're just interpreting the meaning of "made to be sold" very literally. Of course artists want to make a living and have their art be appreciated, so they expect pieces to be sold; but that is not the main motivation behind making the art, whereas commercial "art" - advertising, mainstream cinema, pop music, most art galleries, anime, 80% of what you see in arts and crafts fairs, pieces in IKEA - is created with profit as the main motive.

                    Going back to the origin of this, stating that Ghibli-style videos generated with Sora (which the OP initially called "content") are equivalent to Studio Ghibli movies because they are both "art made to be sold" would be wild. A film like Spirited Away took over 1 million hours of work; if making money was the main goal, it would have never happened.

                    • estearum 2 days ago

                      > Of course artists want to make a living and have their art be appreciated, so they expect pieces to be sold

                      "they want their art to be appreciated, so they expect pieces to be sold" is a clever trick, but one is not related to the other. One could want their art to be appreciated and never sell it, but virtually no one would see this art, for a variety of reasons including the fact that marketability increases visibility and that there is very, very little amateur art that is worth looking at, much less promoting to a larger audience.

                      It seems you agree that in fact art (that anyone sees) is overwhelmingly made to be sold.

                      I didn't say anything about their "main motivation" and neither you nor I (nor even the artist, frankly) could say much about what someone's main motivation is.

                      What we can say is that nearly all of the art anyone sees was in fact made to be sold, which is the specific claim that I made.

                      • Terretta 2 days ago

                        > nearly all of the art anyone sees

                        See comment above.

                        • estearum 2 days ago

                          Yes you're just restating my thesis but with the air of disputing it.

                          • WhyOhWhyQ 2 days ago

                            Buddy your thesis is that art does not exist because of capitalism. That is a ridiculous 'thesis'.

                            • estearum 2 days ago

                              ... what? Not sure how you got that, but no, that's not what I believe.

                              Here, I'll restate it:

                              > Almost every piece of art you've ever seen (by virtue of you seeing it) was made to be marketed and sold.

                              > Art is overwhelmingly not a charity project from artists to the commons.

                              • ricardobeat 11 hours ago

                                Which is why the original comment you replied to characterized it as content and not art. But this has gone pretty much full circle already.

                                • estearum 7 hours ago

                                  So apparently:

                                  The Sistine Chapel: Content, not art

                                  The Mona Lisa: Content, not art

                                  The Guggenheim: Content, not art

                                  David: Content, not art

                                  Guernica: Content, not art

                                  Symphony No. 5 in C Minor, Op. 67: Content, not art

                                  I understand why the original comment said it, and my response is a simple explanation as to why the original comment was very obviously incorrect.

                  • Terretta 2 days ago

                    > almost never see art that wasn’t meant to be sold

                    Because most art isn't in a gallery or store. You quite literally aren't seeing it.

                    • estearum 2 days ago

                      In other words:

                      > Almost every piece of art you've ever seen (by virtue of you seeing it) was made to be marketed and sold.

                • richardfulop 2 days ago

                  Art is not an objective definition, it is the subjective experience of the observer. Content is a format.

            • qlm 2 days ago

              The involvement of money does not preclude a work from being considered art. Your claim is cynical and ahistorical.

              • nickthegreek 2 days ago

                it also doesn’t preclude it from being content.

                • estearum 2 days ago

                  I don't think anyone supposes it does. They're arguing that the word choice implies something about the speaker's value system and the place that art or human culture has in it.

                • qlm 2 days ago

                  Well, yes, but I didn’t really think that needed to be said.

      • rhetocj23 2 days ago

        None of us should be surprised. This joker has zero respect for the artistry of humans.

  • vivzkestrel 2 days ago

    can we get an AI model that translates all CEO speeches from snake oil salesman BS to direct talk please?

solid_fuel 3 days ago

I don't understand some parts of this, the writing doesn't seem to flow logically from one thought to another.

    >  Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences. 
    > We are going to try sharing some of this revenue with rightsholders who want their characters generated by users. 
    > The exact model will take some trial and error to figure out, but we plan to start very soon. Our hope is that the new kind of engagement is even more valuable than the revenue share, but of course we want both to be valuable.

The first part of this paragraph implies that the video generation service is more expensive than they expected, because users are generating more videos than they expected and sharing them less. The next sentence then references sharing revenue with "rightsholders"? What revenue? The first part makes it sound like there's very little left over after paying for inference.

Secondly, to make a prediction about the future business model - it sounds like large companies (disney, nintendo, etc) will be able to enter revenue sharing agreements with OpenAI where users pay extra to use specific brand characters in their generated videos, and some of that licensing cost will be returned to the "rightsholders". But I bet everyone else - you, me, small youtube celebrities - will be left out in the cold with no controls over their likeness. After all, it's not like they could possibly identify every single living person and tie them to their likeness.

  • cg505 3 days ago

    1. They need to charge users for generation.

    2. They might get into trouble charging users to generate some other entity's IP, so they may revenue-share with the IP owner.

    They're probably still losing money even if they charge for video generation, but recouping some of that cost, even if they revshare, is better than nothing.

    • earthnail 3 days ago

      You got the last paragraph wrong. They need to negotiate with rights holders on the revenue split. They’re hoping that the virality aspect will be more important to rights holders than money alone, but they will of course also give money to rights holders.

      Or, in other words: here’s Sam Altman saying to Disney “you should actually be grateful if people generate tons of videos with Disney characters because it puts them front and center again.”, but then he acknowledges that OpenAI also benefits from it and therefore should pay Disney something. But this will be his argument when negotiating for a lower revenue share, and if his theory holds, then brands that don’t enter into a revenue share with OpenAI because they don’t like the deal terms may lose out on even more money and attention that they would get via Sora.

  • melvinmelih 2 days ago

    > After all, it's not like they could possibly identify every single living person and tie them to their likeness.

    Wasn’t he literally scanning eyeballs a couple of years ago?

    • rglover 2 days ago

      "Just look into the orb, bro."

  • sebzim4500 2 days ago

    I don't get the confusion. He's saying that

    (i) they will need to start charging money per generation, and (ii) they will share some of this money with rightsholders

    • solid_fuel 2 days ago

      It's confusing to me because charging money is implied - "we are going to have to somehow make money" - but not actually stated, and then it jumps past the revenue structure into sharing money with "rightsholders".

      It has left me wondering if, instead of just charging users, they would start charging "rightsholders" for IP protection. I could see a system where e.g. Disney pays OpenAI $1 million up front to train in recognition of Mickey Mouse, and then receives a revenue share back from users who generate videos containing Mickey Mouse.

    • basisword 2 days ago

      They will share the money with the rights holders large enough to sue them. Fuck the rest. Just as they’ve done with training material for ChatGPT.

    • samastur 2 days ago

      they will TRY to share this money ;)

      • cedilla 2 days ago

        Yes – "with rightsholders who want their characters generated by users. "

        So it's not about reimbursing "rightsholders" they rip off. It's about giving a pittance to those who allow them to continue to do so.

        Sorry, trying to give a pittance to them.

  • raphman 2 days ago

    "Sora Update #4: Through a partnership with Google, Meta and Snap Inc., you will be able to generate tasteful photos of the cute girl you saw on the bus. She will receive a compensation of $0.007 once she signs our universal content creators' agreement."

  • 48terry 2 days ago

    > the writing doesn't seem to flow logically from one thought to another.

    Neither has most of the stuff Sam has said since basically the moment he started talking.

    It is possible, perhaps, that he is actually a very stupid person!

    • braebo 2 days ago

      My read says intelligent sociopathic narcissist.

  • camillomiller 3 days ago

    “Dear rights holders, we abused your content to train our closed model, but rest assured we’ll figure out a way to get you pennies back if you don’t get too mad at us”

g42gregory 3 days ago

It is already illegal to use images of somebody's likeness for commercial purposes, or for purposes that harm their reputation, could be confusing, etc. Basically, the only times you could use such images are for some parodies, for public figures, and under fair use.

Now OpenAI will be lecturing their own users, while expecting those same users to make them rich. I suspect the users will find it insulting.

Generation for personal use is not illegal, as far as I know.

  • nickthegreek 2 days ago

    you can legally use the images to harm someone’s reputation as long as you don’t represent them as real.

  • camillomiller 3 days ago

    Wait, are you telling me Sam Altman has no regard for the law and thinks his own messianic endeavors are more important than that? Shocker!

surrTurr 2 days ago

> launch new sora update

> enable generating ghibli content since users are ADDICTED to that style

> willingly ignore the fact that the people who own this content don't want this

> wait a few days

> "ooooh we're so sorry for letting these users generate copyrighted content"

> disables it via some dumb ahh prompt detection algorithm

> dumb down the model and features even more

> add expensive pricing

> wait a few months

> launch new model without all of these restrictions again so that the difference to the new model feels insane

  • slacktivism123 2 days ago

    >dumb ahh prompt detection algorithm

    Don't worry, you can write "dumb ass" here without needing to use algospeak. This isn't Instagram or TikTok and you won't be unpersoned by a "trust and safety" team for doing so.

    P.S. No need for a space after your meme arrows :-)

  • workfromspace 2 days ago

    I'm new to Sora. Which step are we in at the moment?

    • surrTurr a day ago

      > disables it via some dumb ~~ahh~~ ass prompt detection algorithm

  • spongebobstoes 2 days ago

    copyright is such a poorly designed tax on our society and culture. innovations like Sora should be possible, but face huge headwinds because... Disney wants even more money?

    the blind greed of copyright companies disgusts me

    • _DeadFred_ 2 days ago

      Society has benefited hugely from copyright law. In fact, the first copyright laws were created in response to desires to have education material/a better educated society.

      Saying 'Disney/laws bad because I want a billionaire corporation to have access to something they know they don't have rights to but built their business model around using anyway' isn't saying anything but 'I want what I want'.

      If anything, society should take this slow and do it right, not throw out hundreds of years of thinking/decisions/progress because of 'disney' and 'cool new tech'.

      We should not bend or throw away laws because a billion dollar industry chose to build a new business model around ignoring them. Down that path lies dystopia.

      • spongebobstoes 4 hours ago

        Only since 1978 has copyright lasted longer than the author's life.

        Copyright stifles cultural evolution and suppresses creative expression by preventing ordinary people from subverting, reinterpreting, and otherwise reusing cultural icons.

        The original 28 years of copyright would be enough for everyone from small artists to Disney to see massive ROI on works of art.

        Art from 1998 is irrelevant or has already made the creator rich, so it's clear that even 28 years is an overshot.

        Refusal to share when it costs nothing is simply greed. Let the people create and innovate. Let our culture evolve.

minimaxir 3 days ago

Yeah, Nintendo called, and faster than expected.

> People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.

What did OpenAI expect, really? They imposed no meaningful generation limits, and "very small audiences" is literally the point of an invite-only program.

  • minimaxir 3 days ago

    Update after more testing: looks like every popular video game prompt (even those not owned by Nintendo) triggers a Content Warning, and prompting "Italian video game plumber" didn't work either. Even indie games like Slay the Spire and Undertale got rejected. The only one that didn't trigger a "similarity to third party content" Content Violation was Cyberpunk 2077.

    Even content like Spongebob and Rick and Morty is now being rejected after having flooded the feeds.

    • mallowdram 2 days ago

      I see a movie: The MoTrix. Copyright-blasting Soraddicts invent a new prompting language (or discover the one Altman seeded) as a way of evading Agents of the Entity, a © deity/program. Once unleashed, the world descends into a HeroClix and ReadyPlayerOne slop simulation where the original becomes indistinguishable from the stolen.

  • techblueberry 3 days ago

    I don’t understand. What do they mean by "very small audiences"? Am I not supposed to make videos for myself?

    • minimaxir 3 days ago

      OpenAI likely intended users to post every video they make to the public feed instead of just using the app as a free video generator (i.e. the way people use Midjourney).

      Of course, another reason that people don’t publish their generated videos is because they are bad. I may or may not be speaking from experience.

      • dgs_sgd 3 days ago

        Can confirm. I got access to the app yesterday and I have used it exclusively for making drafts and sending them to my friends without posting.

        • mrcwinn 3 days ago

          100%. I’m not comfortable sharing likeness of myself publicly. I send goofy stuff to friends. That was day 1, at least.

          Day 2+ I haven’t used the app again.

    • notatoad 3 days ago

      my read: they made the app look like tiktok, and were expecting people to make tiktok style viral videos. instead, what people are making is cameo-style personalized messages for their friends, starring mario.

  • Jordan-117 3 days ago

    Current limit seems to be 100 per rolling 24 hour period, so not unlimited but definitely huge given the compute costs.

    • minimaxir 3 days ago

      Setting the limit that high for a soft launch is bizarre. I got access to Sora and got the gist of it with like 10 generations.

      • angulardragon03 2 days ago

        Gotta juice the utilisation numbers somehow. Limiting everyone to 10 per day would kneecap them, and they’d have nothing with which to attract new investors to keep the gravy train going

    • mdrzn a day ago

      It has already been lowered to 30 per 24 hour period, and maximum 3 concurrent generations instead of 5.

  • ojosilva 2 days ago

    And I don't think you can revenue share these generations with rights owners just like that. What rights owner will let their "product" be depicted in any imaginable situation, by any prompt, by anyone on the planet? Words are powerful, an image is worth a thousand words, and video is worth a million more... I've seen a quick Sora video, from OpenAI themselves I believe, of a real-life Mario Bros princess, a rather voluptuous one, playing herself on a console, and the image stuck. And it's not just misuse, distortion or appropriation but also association: imagine a series of very viral videos of Pikachu drinking Coke or a fan series of Goku with friends at KFC... it could condition, or steal, future marketing deals for the rights holders.

    This is a non-starter, unless you own a "license to AI" from the rights owner directly, such as an ad agency that uses Sora to generate an ad it was hired to do.

  • CPLX 3 days ago

    Indeed. If you read between the lines that’s clearly it.

    And on that note can I add how much I truly despise sentences like this:

    > We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all).

    To me this sentence sums up a certain kind of passive aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl. It’s sort of a conceptual cousin of practices like banning someone from a service without even telling them, or using words like “sunset” instead of “cancel”, and so on.

    What that sentence actually fucking means is that a lot of powerful people with valuable creative works contacted them with lawyers telling them to knock this the fuck off. Which they thought was appropriate to put in parentheses at the end as if it wasn’t the main point.

    • lelandfe 3 days ago

      Wow, I am sure excited for your new kind of interactive fan fiction of my properties. It will accrue us a lot of value! Anyway, please do not use our properties.

      • vntok 2 days ago

        Nice but there's no need for the "please": it's not a request, it's a demand from an official lawyer-penned, strongly-worded, lawsuit actionable letter.

    • martin-t 3 days ago

      It feels like big exploitative multimedia companies are the main force fighting big exploitative ML companies over copyright of art.

      I wish big exploitative tech companies would fight them over copyright of code but almost all big exploitative tech companies are also big exploitative ML companies.

      Oracle to the rescue? What a sick, sad world.

    • signatoremo 3 days ago

      You may not like their message, but the style can be found in practically any public communication from any corporation. Read a layoff announcement from Novo Nordisk as an example [1]. No difference.

      This is what I don’t like about HN, manufactured outrage when one dislikes the messenger. No substance whatsoever.

      When users are given such a powerful tool like Sora, there will naturally be conflicts. If one makes a video putting a naked girl in a sacred Buddhist temple in Bangkok, how do you think Thai people will react?

      This is OpenAI attempting a balancing act between conflicting interests, while trying to make money.

      [1]-https://www.novonordisk.com/content/nncorp/global/en/news-an...

      • id00 2 days ago

        I actually really like that comment. It's an example of classic doublespeak, and it's a shame that "Open"AI uses it and that we as a society tolerate it (as well as from other companies, of course).

      • toshinoriyagi 2 days ago

        Yes, but one of the conflicting interests is illegal. We all know these companies pirate a huge amount of copyrighted data to train their LLMs and VLMs. Clear copyright infringement; Anthropic just lost a few billion dollars for this.

        In addition, the training process attempts to reproduce the copyrighted training data as perfectly as possible, with the intent to rent the resulting model out for commercial gain afterwards. Many argue that this is not fair use, but another instance of copyright infringement.

        And if the previous infractions weren't enough, OpenAI's customers are now generating mass videos of copyrighted characters.

        So, while it may be common corporate speak, it is still snake-tongued weasel-blather that downplays the illegality of their actions.

      • redserk 2 days ago

        If we’re going on HN rants, this bizarre tendency of reframing the blatantly obvious into something it isn’t doesn’t help any argument.

        The messenger isn’t some random, disconnected third party here.

    • rhetocj23 2 days ago

      Exactly. I really hope Altman gets what's coming to him.

    • saxonww 3 days ago

      I'm not really disagreeing with you, but I think it's more about salesmanship than anything else. "We released v1 and copyright holders immediately threatened to sue us, lol" sounds like you didn't think ahead, and also paints copyright holders in a negative light; copyright holders who you need to not be enemies with but who, if you're not making it up, are already unhappy enough to want to sue you.

      Sam's sentence tries to paint what happened in a positive light, and imagines positive progress as both sides work towards 'yes'.

      So I agree that it would be nice if he were more direct, but if he's even capable of that it would be 30 years from now when someone's asking him to reminisce, not mid-hustle. And I'd add that I think this is true of all business executives, it's not necessarily a Silicon Valley thing. They seem to frequently be mealy-mouthed. I think it goes with the position.

    • adriand 3 days ago

      > To me this sentence sums up a certain kind of passive aggressive California, Silicon Valley, sociopathic way of communicating with people that just makes my skin crawl.

      To me that's Sam Altman in a nutshell. I remember listening to an extended interview with him and I felt creeped out by the end of it. The film Mountainhead does a great job capturing this.

geraldalewis 3 days ago

> rightsholders

  • roxolotl 3 days ago

    It’s telling, in terms of how society values copyright in different media, that four years into people yelling about these being copyright violation machines, the first emergency copyright update has come with video.

    • ronsor 3 days ago

      The only thing we need is an emergency copyright deprecation.

      • martin-t 3 days ago

        So people who spend time working on code or art should have exactly zero protection against somebody else just taking their work and using it to make money?

        • reorder9695 2 days ago

          No, but the current system is totally idiotic. Why not have a fixed timeframe, i.e. 30-50 years, to make money? Life of the author + x years is stupid not only because it's way too long; it keeps going until way after the creator is no longer benefitting, and it can cause issues with works where you don't know who the author is, so you can't cleanly say a work is old enough to be out of copyright.

          I'm not sure this would actually change very much for most creators (specifically the smaller ones, who need the most protection). Media typically makes money in its first few years of life, not 70 years on.

          • mallowdram 2 days ago

            The shareholder class would demand rapid-fire exploitation of © the moment it expired, and the resulting media would be a soup of mediocrity. The idea is to recognize that the highly creative have unique imaginations that invent paradigms which propel culture, and keeping existing material off-limits for 70+ years is what forces that invention. Had Lucas gained the rights to Flash Gordon (De Laurentiis beat him to it) he'd never have been forced to create the SW universe. Think about constraints as the path to progress.

            • iterance 2 days ago

              This does not demonstrate a sound understanding of how the public domain works, why copyright lengths have been extended so ferociously over the last century (it's shareholders who want this), nor the impact it has both on creative process and public conversation.

              This is a highly complex question about how legal systems, companies, and individual creatives come in conflict, and cannot be summarized as a positive creative constraint / means to celebrate their works.

              • mallowdram 2 days ago

                I develop copyrighted material, in both words and images, that I've both sold to studios and own myself. Copyright lengths are there to prevent the shareholder class from rapid exploitation. Once copyright declines to years rather than decades, shareholders will demand that it be exploited rather than new ideas. The public conversation is rather irrelevant, as the layperson doesn't have a window into the massive risk and long-term development required to invent new things; that's how copyright is not a referendum, it's a specialized discourse. Yes, the idea of long-term copyright developed under work-for-hire or individual ownership can be easily summarized: license, sample, or steal. Those are the windows.

          • martin-t 2 days ago

            Then the solution is fixing the problem, not removing any protections at all.

            In fact, copyright should belong to the people who actually create stuff, not those who pay them.

        • 1gn15 3 days ago

          Yes.

          (The "takers" also do not have copyright protection.)

          • martin-t 2 days ago

            So basically the only winners should be:

            - owners of large platforms who don't care what "content"[0] is successful or if creators get rewarded, as long as there is content to show between ads

            - large corporations who can afford to protect their content with DRM

            Is that correct?

            Do you expect it to play out differently? Game it out in your head.

            [0]: https://eev.ee/blog/2025/07/03/the-rise-of-whatever/#:~:text...

            • ronsor a day ago

              The vast majority of DRM is cracked very quickly; the only reason DRM cracking tools aren't more widespread is because of copyright law and the idiotic anti-circumvention provisions.

              Consider that even DRM'd content is on torrent sites within hours of release.

          • noduerme 3 days ago

            Great, you've just removed any incentive for people to make anything.

            • JimDabell 3 days ago

              The vast amounts of permissively licensed works directly contradict you.

              Even if you take away copyright, there are plenty of incentives to create. Copyright is not the sole reason people create.

              • noduerme 3 days ago

                Vague. Are you talking about reasons to create like the joy of creating? Your bio describes you as a 'tech entrepreneur', not 'DIY tinkerer'. So I'll assume that when you spend a great deal of time entrepreneuring something, you do so with the hope of remuneration. Maybe not by licensing the copyright, but in some form.

                Permissive licenses are great in software, where SAAS is an alternative route to getting paid. How does that work if you're a musician, artist, writer, or filmmaker who makes a living selling the rights to your creative output? It doesn't.

                • JimDabell 3 days ago

                  > Vague. Are you talking about reasons to create like the joy of creating?

                  That’s one of them, but I really don’t have to be specific about the reasons. I just have to point out the existence of permissively licensed works. You said:

                  > Great, you've just removed any incentive for people to make anything.

                  This is very obviously untrue. Perhaps you meant to say “…you’ve just removed some incentives for people to make some things”?

            • ares623 3 days ago

              It's ok I don't have any talent so that won't affect me

    • musicale 3 days ago

      "Hi, as the company that bragged about how we had ripped off Studio Ghibli, and encouraged you to make as many still frames as possible, we would now like to say that you are making too many fake Disney films and we want you to stop."

      • timschmidt 3 days ago

        Cue an open weights model from Qwen or DeepSeek with none of these limitations.

        • ineedasername 3 days ago

          These attempted limitations tend to be very brittle when the material isn’t excised from the training data, even more so when it’s visual rather than just text. It becomes very much like that board game Taboo where the goal is to get people to guess a word without saying a few other highly related words or synonyms.

          For example, I had no problem getting the desired results when I prompted Sora for “A street level view of that magical castle in a Florida amusement area, crowds of people walking and a monorail going by on tracks overhead.”

          Hint: it wasn’t Universal Studios, and unless you know the place by blind sight you’d think it had been the mouse’s own place.

          On pure image generation, I forget which model, one derived from stable diffusion though, there was clearly a trained unweighting of Mickey Mouse such that you couldn’t get him to appear by name, but go at it a little sideways? Even just “Minnie Mouse and her partner”? Poof- guardrails down. If you have a solid intuition of the term “dog whistling” and how it’s done, it all becomes trivial.
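
          As a toy illustration of that brittleness (purely hypothetical; the blocklist and function below are made up, and nobody outside OpenAI knows what their actual checks look like), a phrase-matching filter catches the literal name and nothing else:

              # Hypothetical sketch of a naive phrase blocklist, not OpenAI's system.
              blocked = ["mickey mouse", "disney castle", "magic kingdom"]

              def is_blocked(prompt: str) -> bool:
                  p = prompt.lower()
                  return any(phrase in p for phrase in blocked)

              print(is_blocked("Mickey Mouse waving at the castle"))                # True
              print(is_blocked("Minnie Mouse and her partner"))                     # False: slips through
              print(is_blocked("that magical castle in a Florida amusement area"))  # False: slips through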

          • timschmidt 3 days ago

            Absolutely. Though the smarter these things get, and the more layers of additional LLMs playing copyright police get stacked on top, I do expect it to get more challenging.

            My comment was intended more to point out that copyright cartels are a competitive liability for AI corps based in "the west". Groups who can train models on all available culture without limitation will produce more capable models with less friction for generating content that people want.

            People have strong opinions about whether or not this is morally defensible. I'm not commenting on that either way. Just pointing out the reality of it.

            • TeMPOraL 2 days ago

              It's a matter of time. I imagine they'll get more effect from suppressing activations of specific concepts within the LLM, possibly in real time. I.e. instead of filtering the prompt for "Mickey Mouse" analogies, or unlearning the concept, or even checking the output before passing it to the user, they could monitor the network for specific activation patterns and clamp them during inference.
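
              For the curious, the general mechanic (what the interpretability crowd calls activation steering or ablation) looks roughly like the sketch below. Everything here is a toy with a made-up "concept direction" on a single linear layer; it only shows the clamping mechanics, not anything OpenAI has said they actually do.

                  import torch
                  import torch.nn as nn

                  torch.manual_seed(0)
                  hidden = nn.Linear(16, 16)             # stand-in for some internal layer
                  concept_dir = torch.randn(16)          # pretend direction tied to a protected concept
                  concept_dir = concept_dir / concept_dir.norm()

                  def clamp_concept(module, inputs, output):
                      # Remove the component of the activation along the concept direction.
                      coeff = output @ concept_dir
                      return output - coeff.unsqueeze(-1) * concept_dir

                  hidden.register_forward_hook(clamp_concept)

                  out = hidden(torch.randn(2, 16))
                  print((out @ concept_dir).abs().max())  # ~0: that component is clamped away

              In a real model, the hard part is finding such directions reliably (probes, sparse autoencoders, etc.) without degrading everything else, which is exactly the concern raised in the reply below.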

              • ineedasername 19 hours ago

                They might, but we may also find they don’t function as well or as predictably if increasing amounts of their weights are suppressed. Research has so far shown that knowledge is incredibly, vastly diffuse, as are the causes of different behaviors. There was some research out of Anthropic where a student model was taught number sequences by a second model that had been fine-tuned with a stated preference for owls. The student model, despite no overt exposure to anything of the sort, expressed the same preference. The subtlety of influence that even very minor things have on the vast network of weights is, at least at present, too poorly understood to know what we’re getting in the bargain when holes are poked.

          • moduspol 2 days ago

            I can get it to do rides at Disney World (including explicitly by name) but it’s incredibly good at blocking superheroes. And that’s gotta be a pretty common prompt, yet I haven’t seen that kind of content in the feed, either.

            And not just by name. Try to get it to generate the Hulk, even with roundabout descriptions. You can get past the initial (prompt-level) blocking, but it’ll generate the video and then say the guardrails caught it.

jameslk 3 days ago

Viacom-suing-YouTube-after-it-used-all-its-IP-as-a-growth-hack vibes

  • nextworddev 3 days ago

    Lol blast from the past. Real Gs remember this.

aubanel 2 days ago

> "We are hearing from a lot of rightsholders who are very excited for this new kind of "interactive fan fiction" and think this new kind of engagement will accrue a lot of value to them, but want the ability to specify how their characters can be used (including not at all)"

Marvelous ability to convolute the simple message "rightsholders told us to fuck off"

brandon272 3 days ago

Obviously, OpenAI could have had copyright restrictions in place from the get-go with this, but instead made an intentional decision to allow people to generate everything ranging from Spongebob videos to Michael Jackson videos to South Park videos.

Today, Sora users on reddit are pretty much beside themselves because of newly enabled content restrictions. They are (apparently) no longer able to generate these types of videos and see no use for the service without that ability!

To me it raises two questions:

1) Was the initial "free for all" a marketing ploy?

2) Is it the case that people find these video generators a lot less interesting when they have to come up with original ideas for the videos and cannot use copyrighted characters, etc.?

  • simianparrot a day ago

    Considering that these models are trained on existing data in order to remix it, shackling their ability to remix existing IPs makes them practically useless, because there’s little originality, if any, to squeeze out of them to begin with.

  • ronsor 3 days ago

    These video generators are mostly useful for memes at this point, and adding copyright shackles makes them a lot less useful for memeing.

piskov 3 days ago

Broke: cure cancer, new physics, agi, take your jobs, what have you. Please give us a trillion.

Woke: AI slop tictoc to waste millions of human-hours.

  • noduerme 3 days ago

    You make a good point. They may as well admit at this point that curing cancer, new physics, and AGI aren't going to happen very soon.

    What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/tv studios, producers, etc. Why package it as an app for your niece to make viral videos that's bound to lose money with every click? Just sell it for $50k/hr of video to someone with deep pockets. Is it just a publicity stunt?

    • rsynnott 2 days ago

      > What surprises me a bit is that they'd take this TikTok route, rather than selling Sora as a very expensive storyboarding tool to film/tv studios, producers, etc.

      Because it’s not good enough, I would assume. Hard to see it actually being useful in this role.

      • Ianjit 12 hours ago

        The TAM for TV/film storyboarding is probably way too small to justify the cost of training and running inference on these models.

    • measurablefunc 3 days ago

      The query data they are collecting can be used for ad targeting. Remember, if you're not paying for it (and in many cases even when you are paying for it), then the data collected from your use of the application is going to be used by someone to make money one way or another. Google made billions from search queries, & OpenAI has an even better query/profiling perspective on its users b/c they are providing all sorts of different modalities for interaction. That data is extremely valuable, analogous to how Google search queries (along w/ data from their other products) are extremely valuable to corporate marketing departments that are willing to pay a premium for access to Google's targeting algorithms.

  • amarcheschi 2 days ago

    Almost as if the AGI talk were what a CEO would do to pump up the hype of their company as much as possible

  • eclipticplane 3 days ago

    > AI slop tictoc to waste millions of human-hours.

    Don't forget the power it consumes from an already overloaded grid [while actively turning off new renewable power sources], the fresh water data centers consume for cooling, and the noise pollution forced on low-income residents.

    • amarcheschi 2 days ago

      As a European, I don't know if it's more funny or sad that American citizens close to the data centers are effectively subsidizing AI for the rest of the world by paying more for their electricity, since the data centers are mostly there

  • rsynnott 2 days ago

    Well, yeah, but that stuff was all bullshit, whereas the fake tiktok kind of exists and might keep the all-important money taps on for another six months or so.

mallowdram 3 days ago

It began as a floor wax; now it's a dessert topping.

cess11 3 days ago

Is this a roundabout way to say that they've realised that people are using their service to make porn of celebrities and fictional characters in the entertainment industry, and aim to figure out a way to keep making money from it without involving "rightsholders" in scandals?

kg 3 days ago

The detail that rightsholders seem to be demanding a revenue share is interesting. That sounds administratively and technologically very complex, and probably also just plain expensive, to implement.

  • minimaxir 3 days ago

    Sam says Sora 2 has to make money but there is no revenue model that can feasibly offset a $4-5 compute cost per video.

    • dwohnitmok 3 days ago

      With some back of the napkin math, I am pretty sure you're off by at least two orders of magnitude, conceivably 4. I think 2 cents per video is an upper limit.

      https://news.ycombinator.com/item?id=45434298

      Generally speaking, API costs that the consumer sees are way higher than compute costs that the provider pays.

      EDIT: Upper limit on pure on-going compute cost. If you factor in chip capital costs as another commentator on the other thread pointed out, you might add another order of magnitude.
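
      To make the disagreement concrete: the estimate hinges almost entirely on one unknown, GPU-seconds of compute per clip. A hedged sketch (every input here is an assumption, not a known OpenAI figure; $3/GPU-hour is just a typical rental price for a high-end card, and the two per-clip figures are placeholders):

          # Napkin math only; none of these numbers are known OpenAI figures.
          def cost_per_video(gpu_hour_price, gpu_seconds_per_clip):
              return gpu_hour_price * gpu_seconds_per_clip / 3600

          print(cost_per_video(3.00, 20))    # ~$0.02/clip under an optimistic assumption
          print(cost_per_video(3.00, 4800))  # ~$4.00/clip under a pessimistic one

      Depending on which assumption you plug in, you land anywhere between the two figures being argued over here.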

      • rafram 3 days ago

        You also aren’t including amortized training costs, which are immense (and likely ongoing as they continue to tweak the model).

        • dwohnitmok 2 days ago

          I suspect amortized training costs are only a relatively small fraction of the amortized hardware costs (i.e. counting amortized hardware costs already accounts for the large fraction of the cost of training and pulling out training as a completely separate category double counts a lot of the cost).

    • nojs 3 days ago

      Where did you get that figure from?

      • minimaxir 3 days ago

        It’s more a ballpark since exact numbers vary and OpenAI could be employing shenanigans to cut costs, but for comparison, Veo 3, which produces similar-quality 720p video, costs $0.40/second for the user, and Sora’s videos are 10 seconds each, which works out to about $4 per video at that rate. Although Veo 3 could cost Google more or less than what is charged to the user.

        I suspect OpenAI’s costs to be higher if anything since Google’s infra is more cost-efficient.

      • nvr219 3 days ago

        It was revealed to them in a dream.

  • martin-t 3 days ago

    This is how all work should be rewarded.

    Workers getting paid a flat rate while owners are raking in the entire income generated by the work is how the rich get richer faster than any working person can.

  • pxoe 2 days ago

    This "but it's too hard to implement" excuse never made sense to me. So it's doable to make a system like this, to have smart people working on it, hire and poach other smart people, to have payments systems, tracking systems, personal data collection, request filtering and content awareness, all that jazz, but somehow all of that grinds to a halt the moment a question like this arises? and it's been a problem for years, yet some of the smartest people are just unable to approach it, let alone solve it? Does it not seem idiotic to see them serve 'most advanced' products over and over, and then pretend like this question is "too complex" for them to solve? Shouldn't they be smart enough to rise up to that level of "complexity" anyway?

    Seems more like selective, intentional ignoring of the problem to me. It's just because if they start to pay up, everyone will want to get paid, and paying other people is something that companies like this systematically try to avoid as much as possible.
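
    The payout arithmetic itself really is trivial once a policy is chosen. A deliberately naive sketch of a pro-rata split (every number here is invented, and it assumes per-rightsholder generation counts are already being tracked):

        # Toy pro-rata revenue split; all inputs are made up for illustration.
        generations = {"rightsholder_a": 120_000, "rightsholder_b": 30_000}
        gross_revenue = 1_000_000.00   # assumed monthly Sora revenue
        share_rate = 0.20              # assumed fraction set aside for rightsholders

        pool = gross_revenue * share_rate
        total = sum(generations.values())
        payouts = {k: pool * v / total for k, v in generations.items()}
        print(payouts)   # {'rightsholder_a': 160000.0, 'rightsholder_b': 40000.0}

    The genuinely hard parts are attribution (deciding which generation "used" whose character) and negotiating the rate, not the accounting, which only reinforces that "too complex" isn't the real reason.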

HypomaniaMan 3 days ago

Just because something can be done doesn't mean it should be

  • measurablefunc 3 days ago

    The logic is that if they don't do it then Meta or some other company will & they have decided it's better that they do it b/c they are the better, more righteous, & moral people. But the main issue is I don't understand how they went from solving general intelligence to becoming an ad sponsored synthetic media company without anyone noticing.

    • camillomiller 3 days ago

      Oh, we all noticed, but this is a new level of entrepreneurial narcissism and corporate gaslighting. Maybe one day Sam Altman will generally be perceived as who he actually is.

      • measurablefunc 3 days ago

        He is the boy wonder genius who will usher an era of infinite abundance but before he does that he has to take a detour to generate a lot of synthetic media & siphon a lot of user queries at every hour of every day so that advertisers can better target consumers w/ their plastic gadgets & snake oils. I'm sure they just need a few more trillions in data center buildouts & then they can get around to building the general purpose intelligence that will deliver us to the fully automated luxurious communist utopia.

tmaly 2 days ago

I just heard people were making full-length South Park episodes with Sora 2. But it seems this has now been banned by OpenAI.

  • Ianjit 12 hours ago

    The clip length is 20 seconds.

zarzavat 2 days ago

Revenue sharing for AI generated videos of characters sounds completely insane.

I can't tell if this is face saving or delusion.

  • CaptainOfCoit 2 days ago

    As someone who is concerned about how artists are supposed to earn a living in an ecosystem where anyone can trivially copy any style, it does sound better than the status quo?

    The fact that LLMs are trained on humans' data yet the same humans receive no benefit from it (they cannot even use the weights for free, even though they unwillingly contributed to their existence) kind of sucks.

    What alternative is there? Let companies freely slurp up people's work and give absolutely nothing back?

  • sumedh 2 days ago

    It sounds insane to you but sounds completely normal to me.

    Why should AI-generated videos not have revenue sharing?

    In the end what matters is whether people enjoy the video; it does not matter if it's AI-created or human-created.

tkamado 3 days ago

The OpenAI dream: replace your job with AI, replace your free time with AI slop?

  • stogot 3 days ago

    And replace rightsholders with “maybe we will try to revenue share… maybe”

    • pants2 3 days ago

      They also said at one point they'll share their profits with the world as UBI

      • Sateeshm 11 hours ago

        What profits if no one has a job to pay for it

_fs 3 days ago

is it still invite only? I tried downloading the app to give it a whirl, but apparently you need a code to even open the app

rr808 3 days ago

That is my reminder to generate more AI slop to burn through all that VC cash.

  • rhetocj23 3 days ago

    Someone I know uses ChatGPT a lot. Not because they find it incredibly valuable, but because they want to stick it to the VCs funding OAI and increase their costs with no revenue.

    So this is why you have to be careful about usage numbers. The only truly meaningful number is how many users are contributing towards revenue. Without that, OAI is just a giant money sink.

    • MountDoom 3 days ago

      I suspect this has the opposite effect. More daily users == higher valuation, so more profit if the VCs decide to sell. There's no pressure on OpenAI to become profitable yet.

crimsoneer 3 days ago

So that sounds like they "released" this fully aware it would generate loads of hype, but never ever be legally feasible to release at scale, so we can expect some heavily cut down version to eventually become publicly released?

  • nmfisher 3 days ago

    Feels very much like a knee-jerk response to Facebook releasing their "Vibes" app the week before. It's basically the same thing, OpenAI are probably willing to light a pile of money on fire to take the wind out of their sails.

    I also don't think the "Sam Altman" videos were authentic/organic at all, smells much more like a coordinated astroturfing campaign.

    • ares623 3 days ago

      Or to distract from the new routing and intent/context detection thing they have going on.

stared 2 days ago

It is sad (and predictable, PR- and legal-wise) that there was no mention of Studio Ghibli.

I would actually be moved if there were something genuine along the lines of "We are sorry - we wanted to make a PR stunt, but we went too hard", backed by real $ for it. (Not that I believe it is going to happen, as GenAI companies do not like this kind of precedent.)

rpgbr 2 days ago

>Second, we are going to have to somehow make money for video generation. People are generating much more than we expected per user, and a lot of videos are being generated for very small audiences.

Once again, Scam Altman looking for excuses to raise more money. What a joke…

CompoundEyes 3 days ago

I don't have access, but it seems you can insert a friend into a video? Are we not rightsholders to our own likeness? It seems like a person should be able to block a video someone shares without their consent, or earn revenue if their likeness is used.

  • minimaxir 3 days ago

    You have to explicitly opt into sharing your likeness with permission controls.

    > person should be able to block a video someone shares without their consent

    That is already implemented.

    • solid_fuel 3 days ago

      > You have to explicitly opt into sharing your likeness with permission controls.

      Ok... how is that supposed to work? I don't have an OpenAI account, there are no permission controls for me. Someone else could easily upload a picture of me, no?

      • pants2 3 days ago

        No, you have to register yourself with a video where you're required to say a unique code.

        So unless you've posted a video of yourself online saying every number from 1 to 99 they won't be able to copy your likeness

        • kouteiheika 2 days ago

          This seems... pretty easy to get around? There are already open-weight models which can take any photo and audio and make a video out of it with the character speaking/singing/whatever, and they run on normal consumer hardware.

          • pants2 2 days ago

            So you wouldn't know what the three numbers are ahead of time, you'd have to be using a real time face replacement model (or I guess live-switching between pre-rendered clips) and somehow convince the app that you're the iPhone selfie cam.

            But at that point you might as well just use WAN 2.2 Animate and forget about Sora.

        • solid_fuel 3 days ago

          That's more than I expected from them, genuinely. But it still doesn't seem like a very solid solution. I wonder how much variation in look and voice it accepts?

          My partner likes to cosplay, and some of the costumes are quite extensive. If they want to generate a video in a specific outfit will they need to record a new source video? The problem exists in the other direction, too. If someone looks a lot like Harrison Ford, will they be able to create videos with their own likeness?

          I wonder how this extends to videos with multiple people, as well. E.g. if both my friend and I want to be in a video together.

        • WmWsjA6B29B4nfk 3 days ago

          It’s not like making a video of someone saying a number, given a single photo and any voice sample, is a very difficult problem today. We can just fast-forward a few weeks into a world where this „registration“ is already broken.

        • noduerme 3 days ago

          So only the heads of companies who lead shareholder meetings are vulnerable to this exploit? Cool.