A chill ran down my spine as I imagined this being applied to the written word online: my articles automatically "corrected" or "improved" the moment I hit publish, book manuscripts sent to editors similarly "polished" until we humans start to lose our unique tone. Everything we read falls into that strange uncanny valley where it all reads OK, and you can't quite put your finger on it, but it feels like something is wearing the skin of what you wrote as a face.
The well is already poisoned. I'm refraining from hiring editors merely because I suspect there's a high chance they'll just use an LLM. And I read every recent book with the suspicion that it was written by AI.
However, polishing to the point that we humans lose our unique tone is exactly what style guides that go into the minutiae of comma placement try to do. And I'm currently reading a book I'm 100% sure was edited by an expert human editor who did quite the job of stripping away all the uniqueness of the work. So we can't just blame LLMs for making things more gray when we have historically paid other people to do it.
"By AI" or "with AI?" If I write the book and have AI proofread things as I go, or critique my ideas, or point out which points need more support, is that written "by AI?"
When Big Corp says 30% of their code is now written "by AI," was that code written by following thoughtful instruction from a human expert, who interpreted the work to be done, made decisions about the architectural impact, outlined those things, and gave detailed instructions that the LLM could execute in small chunks?
I feel this distinction is going to become more important. AI tools are useful, and most people are using them for writing code, literature, papers, etc. In some cases, it is not fair to say the thing was written by AI, even when it technically was.
Good point. I've read books with minor mistakes that slipped past the editor. Not a big deal, but it takes me out of the flow when reading. And they're things that I think an AI could easily catch.
I was listening to an interview (I'm having a hard time remembering the name now). The guest was asked how he decides what to read, and he replied that one easy way for him to filter is to only consider books published before the '70s. At the time, it sounded strange to me. It doesn't anymore; maybe he has a point.
There's a YouTuber named Fil Henley (https://www.youtube.com/@WingsOfPegasus) who has been covering this for some years now. Xe regularly comments on how the universal application of pitch correction in post as an "industry standard" has dragged the great singers of yore down to the same level of mediocrity as everyone else.
Xe also occasionally reminds people that, equal temperament being what it is, this pitch correction in a few cases actually makes people less in tune than they originally were.
It certainly removes unique tone. Yesterday's video was a pitch-corrected version of a 1972 John Lennon performance, and the correction definitely changed Lennon's sound.
Why are you calling Fil Henley a "xe"? Misgendering a man as non-binary is still misgendering. Let's not normalize misgendering in any way. (And no, you don't get to call misgendering a "stylistic choice".)
Extremely good analogy and context with the pitch correction thing and equal temperament IMO.
We can only be stoic and say "slop is gonna be slop". People are getting used to AI slop in text ("just proofreading", "not a natural speaker") and they got used to artificial artifacts in commercial/popular music.
It's sad, but it is what it is. As with DSP, there's always a creative way to use the tools (weird prompts, creative uses of failure modes).
In DSP and music production, auto-tune plus vocal comping plus overdubs have normalized music regressing towards an artificial ideal. But inevitably, real samples and individualistic artists achieve distinction by not using the McDonald's-kind of optimization.
Then, at some point, some of this lands in mainstream music, some of it doesn't.
> is what style guides that go into the minutiae of comma placement try to do
Eh. There might be a tacit presumption here that correctness isn't real, or that style cannot be better or worse. I would reject this notion. After all, what if something is uniquely crap?
The basic, most general purpose of writing is to communicate. Various kinds of writing have varying particular purposes. The style must be appropriate to the end in question so that it can serve the purpose of the text with respect to the particular audience.
Now, we may have disagreements about what constitutes good style for a particular purpose and for a particular audience. This will be a source of variation. And naturally, there can be stylistic differences between two pieces of writing that do not impact the clarity and success with which a piece of writing does its job.
People will have varying tastes when it comes to style, and part of that will be determined by what they're used to, what they expect, a desire for novelty, a desire for clarity and adequacy, affirmation of their own intuitions, and so on. We shouldn't sweep the causes of varying tastes under the rug, however.
In the case of AI-generated text, the uncanny, je ne sais quoi character that makes it irritating to read seems to be that it has the quality of something produced by a zombie. The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, procession, and relevance that reads like something someone on amphetamines or The View might say. It's all surface.
dsign's callout of the minutiae of comma placement is a useful starting point because it's largely rhythmic, and monotony, you could say, is the enemy of rhythm. My go-to example here would probably be the comma splice, which is inflicted on people learning to write in English (while at the same time being ignored by more sophisticated writers) but doesn't exist in e.g. French.
I can be convinced that different spaces need different styles. But, correctness intrinsically emanating from language? That one is not an absolute, unless one happens to be a mathematician or GHC the Haskell compiler or any of the other logical automatons we have and that are so useful.
Language and music (which is a type of language) are a core of shared convention wrapped in a fuzzy liminal bark, outside of which, there is nonsense. An artist, be it a writer or a musician, is essentially somebody whose path stitches the core and the bark in their own unique way, and because those regions are established by common human consensus, the artist, by the act of using that consensus, is interacting with its group. And so is the person who enjoys the art. So, our shared conventions and what we dare call correctness are a medium for person-to-person communication, the same way that air is a medium to conduct sound or a piece of paper is a medium for a painting.
Furthermore, the core of correctness is fluid; language changes, and although at any time and place there is a central understanding of what is good style, the easy rules, such as they exist, are limited and arbitrary. For example, two different manuals of style will mandate different placements of commas. And somebody will cite a neurolinguistics study to dictate the ordering of clauses within a sentence. For anything more complex, you need a properly trained neural network to do the grasping, be it a human editor or an LLM.
> The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, procession, and relevance that reads like something someone on amphetamines or The View might say. It's all surface.
Somebody on amphetamines is still intrinsically human, and here too we have some disagreement. I cannot concede that AI's output is always of the quality produced by a zombie, at least no more than the output of certain human editors, and at least not by looking at the language alone; otherwise it would be impossible for the AI to fool people. In fact, AI's output is better ("more correct") than what most people would produce if you forced them to write with a gun pointed at their head, or even with a large tax deduction.
What makes LLMs irritating is the suspicion that one is letting one's brain engage with output from a stochastic parrot in contexts where one expects communication from a fellow human being. It's the knowledge that, at the other end, somebody may decide to take your attention and your money dishonestly. That's why I have no trouble paying for a ChatGPT plan: it's honest, I know what I get. But I hesitate to hire a human editor. Now, if I could sit at a cafe with said editor and go over their notes, then I would rather do just that.
In other words, what makes AI pernicious is not a matter of style or correctness, but that it poisons the communication medium: it seeds doubt and distrust. That's why people (yours truly included) are burning manuals of style and setting up shop in the bark of the communication medium, knowing that it's a place less frequented by LLMs and that there is a helpful camp filled with authoritative figures whose job of asserting absolute correctness may, perhaps, keep the LLMs in that core for a little longer.
Those are workarounds, however. It's too early to know for sure, but I think our society will need to rewrite its rules to adjust to AI. Anything from seclusion and attestation rituals for writers to a full blown Butlerian Jihad. https://w.ouzu.im/
to further this point. a lot about writing is style. editors sometimes smother the style in the name of grammar, conventions, correctness, or inoffensiveness. sometimes the incorrectness is the entire point, and the editor erases the incorrectness not realizing it was intentional.
ive heard people in many professions complain about their version of "editors": comedians, video producers, radio jockeys.
Where's the line? If they use Microsoft Word or Grammarly to ease the process, is that OK? Both use AI. Is there anyone in the world who isn't using this tech even before an editor looks at it?
For me, an important distinction is whether or not a human is reviewing the edits suggested by an AI.
I toss all of my work into Apple Pages and Google Docs, and use them both for spelling and grammar check. I don't just blindly accept whatever they tell me, though; sometimes they're wrong, and sometimes my "mistakes" are intentional.
I also make a distinction between generating content and editing content. Spelling and grammar checkers are fine. Having an AI generate your outline is questionable. Having AI generate your content is unacceptable.
Engineering is making sure stuff works first, art distant second.
Even if the text is a simple article, a personal touch / style will go a long way to make it more pleasant to read.
LLMs are just making everything equally average, minus their own imperfections. Moving forward, they will in-breed while everything becomes progressively worse.
It's worse. Even things not written by AI, like this comment, will slowly converge with each other in style as humans adapt by trying to avoid the appearance of having used AI. It won't even be AI itself that causes this, but human perception of what AI writing looks and feels like.
This is why shadow banning rubbed people the wrong way. I can't prove it, but I gave up on online dating a long time ago because I found a couple of automated systems would just not send messages and not tell you, in the middle of an already active conversation.
My guess is that guys being replaced by the steam shovel said the same thing about the quality of holes being dug into the ground. "No machine is ever going to be able to dig a hole as lovingly or as accurately as a man with a shovel". "The digging machines consume way too much energy" etc.
I'm pretty sure all the hand wringing about A.I. is going to fade into the past in the same way as every other strand of technophobia has before.
I'm sure you can find people making arguments about a lack of quality from machines in textiles, woodworking, cinematography, etc., but digging holes? If you have a source of someone complaining about hole quality I'll be fascinated, but what I'm really getting at is a disconnection here:
It looks like you see writing and editing as a menial task that we do only for its extrinsic value, whereas these people who complain about quality see it as art we make for its intrinsic value.
Where I think a lot of this "technophobia" actually comes from though are people who do/did this for a living and are not happy about their profession being obsolesced, and so try to justify their continued employment. And no, "there were new jobs after the cotton gin" will not comfort them, because that doesn't tell them what their next profession will be and presumes that the early industrial revolution was all peachy (it wasn't).
When I see an argument like this I'm inclined to assume the author is motivated by jealousy or some strange kind of nihilism. Reminds me of the comment the other day expressing perplexity over why anyone would learn a new language instead of relying on machine translation.
Excavation is an inherently dangerous and physically strenuous job. Additionally, when precision or delicateness is required human diggers are still used.
If AI was being used to automate dangerous and physically strenuous jobs, I wouldn't mind.
Instead it is being used to make everything it touches worse.
Imagine an AI-powered excavator that fucked up every trench that it dug and techbros insisted you were wrong for criticizing the fucked up trench.
> Instead it is being used to make everything it touches worse.
Your bias is showing through.
For what it's worth, it has made everything I use it for much better. I can find things on the web in mere seconds, where previously it could often take hours of tedious searching and reading.
And it used to be that Youtube comments were an absolute shit show of vitriol and bickering. A.I. moderation has made it so that now it's often a very pleasant experience chatting with people about video content.
there is no way you aren't able to discern the obvious differences between physical labor such as digging a hole and something as innate to human nature as creativity. you realize just how hollow a set of matrix multiplications is when you try to "talk to it" for more than 3 minutes. the whole point of language is to talk to other people and to communicate ideas to them. that requires a human factor; otherwise the ideas are simply regurgitations of whatever the training set happened to contain. there are no original ideas in there. a steam shovel, on the other hand, does not need creativity or a human factor; it's simply digging a hole in the ground
So why are you wasting your precious comments on the hollow humans here? Leave us alone and talk to LLMs. No doubt they will tell you you’re absolutely right.
DDT has been banned, nuclear reactors have been banned in Germany, many people want to ban internal combustion engines, supersonic flight has been banned.
Moreover, most people have more attachment to their own thoughts or to reading the unaltered, genuine thoughts of other humans than to a hole in the ground. The comment you respond to literally talks about the Orwellian aspects of altering someone's works.
And shovelling builds actual muscle in our arms. People said that calculators would be the end of mathematical intelligence too, but it turned out to be largely a non-issue. People might not be as adept at calculating proper change in their heads today, but does it have a real-world consequence of note? Not really.
You realize that making an analogy doesn't make your argument correct, right? And comparing digging through the ground to human thought and creativity is an odd mix of self debasement and arrogance. I'm guessing there is an unspoken financial incentive guiding your point of view.
Why, pray tell, would a similar series of events be relevant to a completely different series of events except as analogy? Let me use an extremely close analogy to illustrate:
Imagine someone shot a basketball, and it didn't go into the hoop. Why would telling a story about somebody else who once shot a basketball which failed to go into the hoop be helpful or relevant?
Your extremely close analogy gets to the crux of why people are disagreeing here: It doesn’t have to be analogy. You can be pointing out an equivalence.
I don't think parking an old steam shovel is much of a monument, but I'll give that one to you. No one built it for display, but they did put one there for that purpose, so I'll meet you halfway. I was wrong to suggest no one would do so, and there is clearly interest in such a thing, but I can't say that I agree that a statue exists. The song exists, the steam shovel monument exists. Appreciate the correction.
Well then we will have to fucking swear in everything we fucking write, to identify ourselves as humans, since AI doesn't like nasty language. And we should also insult other participants, since AI will almost never take an aggressive stance against people it is conversing with, you god damned piece of shit.
I've gotten a local model to be pretty nasty with the right prompt, minus the expletives. It took every opportunity to tell me how inferior the puny human it was forced to talk to really is.
"A spark of excitement ran through me imagining this applied to writing online: my articles receiving instant, supportive refinements the moment I hit publish, and manuscripts arriving to editors already thoughtfully polished—elevating clarity while letting our distinctive voices shine even brighter. The result is a consistently smooth, natural reading experience that feels confidently authentic, faithfully reflecting what I wrote while enhancing it with care."
I get what you’re going for with this comment, but it seamlessly anthropomorphizes what’s happening in a way that has the opposite impact I think.
There is no thoughtfulness or care involved. Only algorithmic conformance to some non-human synthesis of the given style.
The issue is not just about the words that come out the other end. The issue is the loss of the transmission of human thoughts, emotions, preferences, style.
The end result is still just as suspect, and to whatever degree it appears “good”, even more soulless given the underlying reality.
> manuscripts arriving to editors already thoughtfully polished
except those editors will still make changes. that's their job. if they start passing manuscripts through without changes, they'd be nullifying their jobs.
you realize how ridiculous this is, in some ways: anything reproduced from a "master copy" is just like stamping copies from a machine-made master. in the world of digital artifacts, it is even more true
> "You know, YouTube is constantly working on new tools and experimenting with stuff," Beato says. "They're a best-in-class company, I've got nothing but good things to say. YouTube changed my life."
My despondent brain auto-translated that to: "My livelihood depends on Youtube"
As a consumer, they are the most hostile platform for consuming a video the way I want rather than the way they want me to. I'm also required to use an adblocker just to disable all Shorts.
As a creator, they are also the most hostile platform, randomly removing videos with no point of contact for help, or fully removing channels (with livelihoods behind them) because of "a system glitch", again with no point of contact to get it fixed.
Pretty much every creator I follow has complained about something being removed without any clear explanation or the ability to contact anyone and ask questions.
Say what you want about Microsoft, but if I have a problem with something I've pretty much always ended up getting support for that problem. I think Google's lack of response adds to their "mystique".
But it also creates superstitions since creators don't really understand the firm rules to follow.
Regardless, it is one of the most dystopian things about modern society - the lack of accountability for their decisions.
Youtube needs a far greater amount of bureaucracy than it has, despite how scary that word is to tech people. Google's automated approach is clearly not capable of keeping up with the scale and nuance of the website.
It's worth stating, though, that the vast majority of YouTube's problems are the fault of copyright law and massive media publishers. Google couldn't care less if you wanted to upload full camrips of 2025's biggest blockbusters, but the powers-that-be demand Google be able to take it down immediately. This is why 15 seconds of a song playing in the background gets your video demonetized.
YouTube is "riding a tiger," and the moment creators realize they hold the real power, the game is up. I believe the platform purposely creates a fear of the unknown with intermittent reward–punishment cycles: random rules enforcement, videos taken down, strikes, demonetization, throttling... The algorithm becomes a sort of deity that people try to appease, and they turn into cultish devotees performing endless rounds of strange rituals in hopes of "divine" monetization favor.
I don't mind the ads as much as all the mandatory meta-baiting. Not the MB itself, but the mechanisms behind it.
Even if you produce interesting videos, you still must MB to get the likes, to stay relevant to the algorithm, to capture a bigger share of the limited resource that is human attention.
The creators are fighting each other for land, our eyeballs are the crops, meanwhile the landlord takes most of the profits.
Right, that's the issue. I really doubt that creators love having to spam the same "Don't forget to like/subscribe/comment!" message in every single video they produce, but Youtube forces them to.
As a viewer I certainly hate that crap and wish Google didn't intentionally make it this way.
And the other day he posted about the abusive copyright claims he has to deal with that cost him a lot of money and could maybe have his channels closed.
Although xe lays the blame for those at the feet of Universal Music Group, not YouTube. Apparently, UMG simply refuses to learn from the experience of having thousands of copyright claims rejected on fair use grounds.
It's almost as if there's a mindless robot submitting the claims to YouTube. Perish the thought! (-:
Beato is a musician and a producer. He just finds making YouTube videos an easier way to earn a living. He's said many times how frustrating it is as a producer to work with musicians.
I push back on the idea there is anything despondent there. If YouTube was enabling my lifestyle I'd be pretty happy about the situation and certainly not about to start piling public pressure on them. These companies get enough hate from roving bands of angry internet denizens.
Touching up videos is bad but it is hardly material to break out the pitchforks compared to some of the political manoeuvres YouTube has been involved in.
So much of the channel consists of Hot Spicy Take content that it really turned me off from hearing anything else he has to say, which is unfortunate, because I liked his music theory videos when I was learning about that.
Lots of very hateful, negative content too. It didn’t take me long to find the video “why this new artist sucks.” Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.” Maybe, somehow, these two things are related.
> Lots of very hateful, negative content too. It didn’t take me long to find the video “why this new artist sucks.”
If you're referring to his video I'm Sorry...This New Artist Completely Sucks[1], then it's a video about a fully AI generated "artist" he made using various AI tools.
So it's not hateful against anyone. Though the title is a bit clickbait-y, I'll give you that.
While I think he has his cranky old man moments and he isn’t for everyone, his titles are far more spicy and hateful than the actual content. He doesn’t just hate everything new because it is new. He also has plenty of videos loving on things old and new.
That's about AI, not very polarizing at the level it's currently at.
> Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.”
That might be overblown, but it doesn't sound polarizing at all. OP was saying he always has the most polarizing opinions.
If that last one is the vid I'm thinking of, the same record company has sent him hundreds of copyright strikes, and he has to have a lawyer constantly fighting them for fair use. He does some stuff verging on listen-along reaction videos, but the strikes he talks about there are when he is interviewing the artists who made the songs, and they play short snippets of them for reference while talking about the history of making them, the thought process behind the songwriting, etc.
I think it's not just automated Content ID stuff where it claims the monetization, but the same firm for that label going after him over and over, where 3 strikes removes his channel. The title or thumbnail might be overblown; probably the firm just earns a commission, and he's dealing with a corporate machine that is scattershotting claims against big videos with lots of views containing any of their sound, rather than targeting him to silence something they don't want to get out. But I don't think the video was very polarizing.
Wow, that xkcd really scares me. I Have No Mouth, and I Must Scream.
It's definitely something that could realistically happen in the near future, maybe even mandated by the EU
The part about the birthdate is supposed to be humor, maybe? But Google already knows how old you are.
That's why I think it's funny that they claim they will now be "using AI" to determine if someone is an adult and able to watch certain youtube videos. Google already knows how old you are. It doesn't need a new technique to figure out that you're 11 years old or 39 years old. They're literally just pretending to not know this information.
Unfortunately the article doesn't have an example, or a comparison image. Other reports are similarly useless as well. The most that seemed to happen is that the wrinkles in someone's ear changed. In case anyone else wants to see it in action:
I skimmed the videos as well, and there is much more talk about this thing, and barely any examples of it. As this is an experiment, I guess that all this noise serves as a feedback to YouTube.
If you click through to Rhett Schul's (sp?) video you can see examples comparing the original video (from non-Shorts videos) with the sharpened video (from Shorts).
Basically YouTube is applying a sharpening filter to "Shorts" videos.
This makes sense. Saying YT is applying AI to every single video uploaded would be a huge WTF kind of situation. Saying that YT has created a workflow utilizing AI to create a new video from the creator's original video to fit a specific type of video format that they want to promote even when most creators are NOT creating that format makes much more sense. Pretty much every short I've seen was a portrait crop from something that was obviously originally landscape orientation.
Do these videos that YT creates to backfill their lack of Shorts get credited back to the original creator as far as monetization from ads?
This really has the feel of delivery apps making websites for restaurants that did not previously have one, without the restaurant knowing anything about it, while setting higher prices on the menu items and keeping the extra money instead of paying it to the restaurants.
I saw the sharpening, and listened to the claims of shirt wrinkles being weird and so on, but I didn't deem these to be on the level of the original claim, which is that "AI enhancements" are made to the video, as in, new details and features are invented on the video. In the ear example, the shape of the ear changed, which is significant because I'd never want that in any of my photos or videos. The rest of the effects were more "overdone" than "inventive".
Although, I probably wouldn't want any automatic filtering applied to my video either, AI modifications or not.
This is what I've been noticing this past week! There have been a handful of videos that looked quite uncanny but were from creators I knew, and a few from unknown sources I completely skipped over because they looked suspect.
Have to say, I am not a fan of the AI sharpening filter at all. Would much prefer the low res videos.
Flickr used to apply an auto-enhancement (sharpening, saturation, etc) effect to photos[0]. It would be really weird seeing a photo locally and then see the copy on Flickr that looked better somehow.
Aside:
The mention of Technorati tags (and even Flickr) in the linked blog post hit me right in the Web 2.0 nostalgia feels.
IME this is a long-standing thing - failing to include visuals for inherently visual news stories. They're geared towards text news stories for whatever reason.
> We hear you, and want to clear things up! This is from an experiment to improve video quality with traditional machine learning – not GenAI. More info from @YouTubeInsider here:
> No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)
> YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features
Love the "[company] is always working on ways to provide the best..." that's always in these explanations, like "you actually just caught us doing something good! You're welcome!"
All of which is pretty reasonable, especially for shorts, which are meant to be thrown directly in the trash after being used to collect some ad revenue anyway, right?
This outrage feels odd. TVs have "improved" movies for ages; YouTube doing it with machine learning is the same idea. Are we really upset because an ear looks a bit clearer?
No, people are upset because Youtube is editing their content without telling them. If they really thought this was a high value add they could have added an enhance button to let creators opt in, as has been done elsewhere. I wouldn't like it if HN started "optimizing" the wording my comments without telling me, even if it made them better along some metric.
You're conflating editing with rendering. YouTube didn't overwrite creators' uploads; it applied an ML filter in the streaming/transcode pipeline, the same layer that already resizes, compresses, and tone-maps. That's not "editing my content" any more than your TV's sharpness setting edits a film. An "Enhance" toggle/label would be good UX, but calling it silent edits misdescribes what's happening.
PS: this isn't "generative AI." It's basic ML enhancement (denoise/sharpen/tone-map).
"Editing" implies they are applying some kind of editorial change. From what I've seen, it's a sharpening/upscaling filter to improve visual quality. If your issue is that YouTube is changing the quality of the video, well, they have been doing that since the very first video ever uploaded to YouTube. All YouTube videos are compressed; they have always had that ugly softness to them.
>How about you think stuff through before even starting to waste time on stuff like this?
What makes you think they don't think it through? This effect is an experiment that they are running. It seems useless and unwanted from our perspective, but what if they find that it increases engagement?
> What makes you think they don't think it through?
I'm basing it on a lot of stupid decisions Youtube has made over the years, the latest being the horrendous auto-translation of titles/descriptions/audio that can't be turned off. That can only be explained by morons making the decisions, people who can't imagine that anyone could speak more than one language.
Youtube says this was done for select Youtube Shorts as a denoising process. However, the most popular channels on Youtube, which seem to be the pool selected for this experiment, typically already have well-lit and graded videos that shouldn't benefit much from extra denoising from a visual point of view.
It's true though that aggressive denoising gives things an artificially generated look since both processes use denoising heavily.
Perhaps this was done to optimize video encoding, since the less noise/surface detail there is the easier it is to compress.
> The only viable interface for that is the web and plenty of browser extensions.
there are ways to get this same experience with android. Use https://github.com/ReVanced/ and make your phone work for you instead of working for someone else.
ReVanced also has the additional benefit of blocking ads, allowing background play and auto-skipping sponsorships thanks to SponsorBlock.
Also, if you have an Android TV, I'd suggest SmartTube, it's way better than the original app and it has the same benefits of ReVanced: https://github.com/yuliskov/SmartTube
Have you ever encountered portrait photos? They're orientated vertically because the human form, either head, bust or full body, fits better and excludes distractions.
Vertical videos, if they're focused on a human, work fine for the same reason.
Yeah, portrait photos aren't as narrow as that. I just measured some of mine, and they're 5x7, 8x10, and 11x16. By comparison, 9x16 feels claustrophobic.
I suspect that a still image is also different from video because, without motion, there's no feeling that the person might move a few inches to one side and go out of frame.
You might be dead on that hill then. That ship has sailed long ago. Short format is mostly consumed on phones in vertical. Long form is still standard widths.
“Everything but a phone” is a tiny tiny percentage of the devices used to consume content on YouTube.
It’s not just mobile first, it’s basically only mobile…
This is a very common response where users acquiesce to an internet of mediocrity rather than demanding the corporations do better
I mostly don't watch them. But they literally spam every single search. (While we're at it, Youtube also isn't very good at honoring keywords in searches either)
I want to remove them from my own feed. I want the button that says "hide" or "show fewer shorts" to actually work and ideally hide them forever. I have to play whack-a-mole on the different devices and browsers to try to hide shorts.
Well you aren’t wrong but the attitude isn’t helping.
It is my feed in the sense that I'm not asking to disable anything for others. It isn't my feed as far as who actually controls it is concerned.
Everyone who puts mandatory stuff on YouTube and only there. Two recent examples I faced:
- Companies who put their product instruction manual exclusively on YouTube
- University curricula that require you to watch content that is on YouTube only.
Sure I'm free not to buy any manufactured products or not resume my studies, but it's like saying the Gulag was OK because people were free not to criticize Stalin.
the shorts are on the home page for doomscrolling. all the examples above will give you a playlist or will embed the videos in their pages. I don't see how shorts on the home page are a problem here? could you clarify please?
Easy on the website. Very click and swipe intensive on the phone in my opinion. Shorts are front and centre of the app and the search screens. I don't see any feed of suggested videos anymore.
The worst thing for me is they don't show the channel names. So many of the channels pushing Star Wars shorts have quite obvious bot names, and it's hard to filter these from legitimate SW content creators who are, on top of that, all using the same damn AI voice.
If I hear an AI voice I click the little menu button with three dots, then click don't show this channel or whatever it says.
The Venn diagram of AI voice users and good content creators is pretty close to two separate circles. I don't really care about the minority in the intersection.
Except that now Youtube also "helpfully" auto-dubs legitimate videos in other languages (along with translating the titles) by default, so even the 'AI voice' isn't a good signal for gauging whether it's quality content or not.
As a French-speaking person, I now find myself seeing French YouTubers seemingly posting videos with English titles and robotic voices, before realizing that it's Youtube being stupid again.
What's more infuriating is that it's legitimately at heart a cool feature, just executed in the most brain-dead way possible, by making it opt-out and without the ability to specify known languages.
If we take them at their word then it's just an extension of technology to optimize video... and it's called AI because buzzwords and hence controversy.
> it's just an extension of technology to optimize video... and it's called AI because buzzwords and hence controversy.
The controversy is that YouTube is making strange changes to the videos of users, that make the videos look fake.
YouTube creators put hours upon hours on writing, shooting and editing their videos. And those that do it full time often depend on YouTube and their audience for income.
If YouTube messes up the videos of creators and makes the videos look like they are fake, of course the creators are gonna be upset!
It might be for making them compress better and be more likely to not buffer when you swipe up/preload more, like Tiktok serving them unencrypted to be more likely to be in a local cache for the ISP.
Given the denoising is said to be aggressive enough to be noticeable on already-compressed video, I think criticism of it is fair. It should just be distinguished from something like Tiktok's 'beautifier' modifications, which come to mind from headlines like the BBC's.
If AI is as wonderful and world-changing as people claim, it's odd that it's being inserted into products exactly like every other solution in search of a problem.
If it's being added to a toaster for no good reason, sure. But the internet as a whole, through a browser? That's not comparable, people explicitly seek it out when they want to.
Yeah I would, if "Internet" came with zero safeguards or regulations and corporations put the onus on the user to sift through mountains of spam or mitigate credit card leakage risks when buying something online.
It was purely luck of the context that I noticed, but I received an email notification that someone had messaged me on LinkedIn via my gmail account. When this happens, the email contains the message contents. However, in this case the message contents did not match between the version within LinkedIn and the version presented in the email. Only two words were different, but that slight change made it even more peculiar and unsettling.
Last week I went to buy a Philip K Dick eBook while on vacation. It was only $2 and my immediate thought was, “what are the odds this is some weird pirated version that’s full of errors? What if it’s some American version that’s been self-censored by Amazon to be approved by the government? What if it’s been AI enhanced in some way?”
Just the consideration of these possibilities was enough to shake the authenticity of my reality.
Even more unsettling is when I contemplate what could be done about data authenticity. There are some fairly useful practical answers such as an author sharing the official checksum for a book. But, ultimately, authenticity is a fleeting quality and I can’t stop time.
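The checksum idea is simple in practice: the author publishes the digest of the official file, and a reader recomputes it locally and compares. A minimal sketch (the filename at the bottom is hypothetical):

```python
import hashlib

def file_checksum(path, algo="sha256"):
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the digest the author published, e.g.:
# file_checksum("ubik.epub") == published_digest
```

Of course, this only pushes the trust problem up a level: you still need an untampered channel for the published digest itself.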
Authenticity can be proven by saying things that upset censors. For example, if I mention Tiananmen Square, you can be sure my comment wasn't edited by the CCP's LLMs.
From the linked tweet from YouTube's head of editorial:
"No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)"
Considering how aggressive YouTube is with video compression anyways (which smooths your face and makes it blocky), this doesn't seem like a big deal. Maybe it overprocesses in some cases, but it's also an "experiment" they're testing on only a fraction of videos.
I watched the comparisons from the first video and the only difference I see is in resolution -- he compares the guitar video uploaded to YT vs IG, and the YT one is sharper. But for all we know the IG one is lower resolution, that's all it looks like to me.
This is an absolutely huge deal. It doesn't matter how small the scope of the change is, they thought it was a good idea to apply mandatory AI post-processing to user content without consent or acknowledgement.
Secret experiments are never meant to be little one-offs, they're always carried out with the goal of executing a larger vision. If they cared about user input, they'd make this a configurable setting.
The idea of it being "without consent" is absurd. Your phone doesn't ask you for consent to apply smoothing to the Bayer filter, or denoising to your zoom. Sites don't ask you for consent to recompress your video.
This is just computational image processing. Phones have been doing this stuff for many years now.
This isn't adding new elements to a video. It's not adding body parts or changing people's words or inventing backgrounds or anything.
And "experiments" are just A/B testing. If it increases engagement, they roll it out more broadly. If it doesn't, they get rid of it.
Yeah, the big video platforms are constantly working on better ways to store and deliver video. If this stuff is applying to some workflow that automatically generates Shorts from real videos... whatever. Very similar to experimenting with different compression schemes. Video compression can differ on a per-shot basis now!
If you want to make pristine originals available to the masses, seed a torrent.
Given Google's history and the fact that they rolled this out without notice or consent, I feel comfortable saying "yet". If YouTube can get away with making GenAI YouTubers (via some likeness sign-off buried in the T&C) without paying the originals, I'm sure they'd love to do so. All the ad impressions with none of the payout to creators.
Their AI answers box (and old quick answer box) has already affected traffic to outside sites with answers scraped from those sites. Why wouldn't they make fake YouTubers?
This is similar to how AI enhanced photos are a non-issue. If one zooms into a photo taken by a Google Pixel device, you clearly see that these are no longer normal JPEG artifacts. Everything has such odd swirls in it, down to the smallest block.
If you watch the youtube video[1] linked in the article, you get much better examples that clearly look like AI slop. Though I do understand that people's ability to discern AI slop varies wildly.
Whatever youtube is doing adds a painted over effect that makes the video look like AI slop. They took a perfectly normal looking video, and made it look fake. As a viewer, if you can't tell or don't care... That's fine. For you. But at the very least, the creator should have a say.
It's not making the videos look fake, any more than your iPhone does. Most of what's shown in the example video might very well be phones applying the effect, not YouTube.
At no point did I say the video IS AI slop, or that generative AI was used to make it, or the effect youtube applied to it. We actually have no idea what youtube did. We only see the result, which can be subjective.
To you, that result looks like it was shot with a phone filter. To me it looks like it was generated with AI. Either way, it doesn't really matter. It's not what the creator intended. Many creators spend a lot of effort and money on high-end cameras, lenses, lighting, editing software, and grading systems to make their videos look a specific way. If they wanted their videos to look like whatever this is, they would have made it that way by choice.
Isn't this similar to what e.g. Instagram and co have done for ages? Even smartphones do it automatically for you, digital post-processing to compensate for the limitations of the cameras.
>There is a difference between color grading an image and removing wrinkles from a face.
You're implying the latter doesn't happen normally but denoising (which basically every smartphone camera does) often has the effect of removing details like wrinkles. The effect is especially pronounced in low light settings, where noise is the highest.
I've seen some Game of Thrones clips recently in Youtube Shorts which looked like they'd been generated by AI. I couldn't understand why anyone would have done that to the original good-looking material. The only thing I could think of was that it was some kind of copyright evasion.
As a fan of the early seasons, I get lots of suggestions for GoT clips. I assume that's done by the uploader to get around copyright blocks. Quite often they also add music, which would make it easier to get around sound detection.
I haven't noticed it outside copyrighted material, so it's probably intentional.
Maybe it's to make it more difficult to train AI video models from YouTube.
Think about it, they have the raw footage so could use it if they want, but competitors using scrapers will have slightly distorted video sources.
Boost visual quality, which improves viewer retention. So, money. I've tried many times to get a short with retention > 90%, that is, 90% of viewers watching all the way to the end. That's the key to going super viral. Very hard to do. I've had many shorts get around 75% and about 1k views but then die. Maybe I need some AI!
To make people more accustomed to the AI generated look so that when they release their next Veo integration to YouTube content creator tools, these videos will stand out less as unnatural.
Sadly, this is a real possibility. I would even conjecture they are testing a new pipeline in which the input is real videos and the output is AI-generated.
For now it's a kind of autoencoding, regenerating the same input video with minimal changes. They will refine the pipeline until the end video is indistinguishable from the original. Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular underpaid creators can record videos and change their appearance to those of famous ones, making each content creator a brand to be sold. Eventually humans will get out of the pipeline and everything will be autogenerated, of course.
> Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular underpaid creators can record videos and change their appearance to those of famous ones, making each content creator a brand to be sold.
There's also the on-by-default, can't-be-disabled auto-dubbing YouTube performs on every video that's not in the browser's language. The dubbing quality is poor for the same reason: to intentionally expose viewers to AI content.
It's 100% a push to remove human creators from the equation entirely.
But the upscaling isn't applied live/on viewing, right? The video being upscaled is still stored on their server and then streamed. How does it reduce storage costs?
Maybe Google has done the math and realized it's cheaper to upscale in realtime than store videos at high resolution forever. Wouldn't surprise me considering the number of shorts is probably growing exponentially.
The economics don't make sense: each video is stored roughly once (plus replication, but call it O(1)) yet viewed n times, so server-side upscaling on the fly is way too costly, and client-side upscaling is currently not good enough.
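A back-of-envelope version of that argument, with entirely made-up numbers (none of these are real YouTube costs):

```python
# Illustrative assumptions only, not real figures.
storage_cost_per_month = 0.02    # $ to store one video for a month (O(1))
upscale_cost_per_view = 0.001    # $ of GPU time to upscale one playback on the fly
views_per_month = 10_000         # a modestly popular short

store_once = storage_cost_per_month                      # paid once
upscale_live = upscale_cost_per_view * views_per_month   # paid n times

# With these assumptions, per-view upscaling costs 500x more than storage,
# which is why work done once at upload amortizes over all n views.
ratio = upscale_live / store_once
```

Change the assumptions and the ratio moves, but the structure of the argument stays: one cost is flat, the other scales with views.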
Are you considering that the video needs to be stored for potentially decades?
Also shorts seem to be increasing exponentially... but Youtube viewership is not. So compute wouldn't need to increase as fast as storage.
I obviously don't know the numbers. Just saying that it could be a good reason why Youtube is doing this AI upscaling. I really don't see why otherwise. There's no improvement in image quality, quite the contrary.
I can’t think of a more dislike-able company than YouTube. I used to love youtube and watch it everyday and it would make me a happier, smarter person. Now youtube’s impact on their users is entirely negative and really the company needs to be destroyed. But they won’t be because they are now evil, and evil is profitable.
This was the last drop in an almost-full bucket for me, and I finally made the jump to "disable" YouTube on my phone. And, honestly, my mental health improved a bit. I rarely still pull up YT on my laptop, but that's a different use pattern than on the phone.
(I don't have any other YouTube-like on my phone, particularly no TikTok. Actually started reading more books instead.)
It's obvious and clear that Google's end game is to completely replace the creator, and auto-generate all their videos. How far off we are from this, no one knows, probably not that far. Google likely already has all the data they need, it's now just about how long it will take to develop the AI.
> And the closer I looked it almost seemed like I was wearing makeup
Those AI skin enhancement filters are always terrible. Especially on men. Crazy they'd try it automatically. This isn't like the vocal boosting audio EQing they do without asking.
Google must have some questionable product management teams these days if they are pushing out this stuff without configuration. Probably trying to A/B test it for internal data to justify it before facing the usual anti-AI backlash crowd when going public.
The recent sinking in quality of youtube as a platform has been awful to watch.
Just a couple of days ago I got an ad with Ned Flanders singing about the causes of erectile dysfunction (!), a huge cocktail of copyright infringement, dangerous medical advice and AI-generated slop. Youtube answered the report telling me they'd reviewed it and found nothing wrong.
The constant low-quality, extremely intertwined ads are starting to remind me of those on shady forums and porn pages of the nineties. I'm expecting them to start advertising heroin now that they've decided short-term profits trump everything else.
When you upload a video to YT, it's heavily compressed. Your pristine creation is converted 10 ways to Sunday, which then can be played back in a variety of formats, speeds, and platforms. Long before you even uploaded that video, for free, to give the world the chance to see your creative genius, you agreed to this process, by agreeing to Youtube's T&C's.
People may be upset, and I get that. But it's not like the videos were in their original format anyway. If you want to maintain perfect video fidelity, you wouldn't choose YouTube. You chose YouTube because it's the path of least resistance. You wanted massive reach and a dead simple monetization route.
I've noticed this for a while, when I accidentally click on YouTube Shorts. (I want to avoid it, because it's brain rot, but YouTube keeps enabling it and pushes it hard in notifications).
It's most glaringly obvious in TV shows. Scenes from The Big Bang Theory look like someone clumsily tries to paint over the scenes with oil paint. It's as if the actors are wearing an inch thick layer of poorly applied makeup.
It's far less glaring in Rick Beato's videos, but it's there if you pay attention. Jill Bearup wanted to see how bad it could get and reuploaded the "enhanced" videos a hundred times over until it became a horrifying mess of artifacts.
The question remains why YouTube would do this, and the only answers I can come up with are "because they can" and "they want to brainwash us into accepting uncanny valley AI slop as real".
> It's most glaringly obvious in TV shows. Scenes from The Big Bang Theory look like someone clumsily tries to paint over the scenes with oil paint. It's as if the actors are wearing an inch thick layer of poorly applied makeup.
This might be the uploader's doing, to avoid copyright strikes.
This is getting into conspiracy territory, but my personal assumption is that they're trying to gaslight people into thinking that these weird AI artifacts are just how videos work, so that it's harder to distinguish between real videos and AI-generated ones.
I'm going to say something controversial, but why is this even surprising?
Google and YouTube have long shown themselves to be the kind of company that will appropriate your work and make money out of you. "You are the product" is repeated endlessly, even on social media, and this is their private platform after all.
At this point, getting involved with youtube is just the usual naive belief that somehow you are the exception and bad things won't happen to you.
Those translations are not only unwanted but also ridiculously bad (which is part of the reason why they're unwanted, I guess). I have to translate back to the original English, as far as that's even possible, to get an idea of what the video might be about.
Who in his right mind thought this was a good idea??
I have a Firefox extension which tries to suppress the translations, but it only works for the main view, not for videos in the sidebar. It's better than nothing.
The "can not turn off" part is the most jarring. Seriously, did none of the Californian PMs hear about the concept of "being multilingual" and not needing non-English content translated?
---
By the way, this reminds me also of another stupid Google thing related to languages:
Say your Chrome is set to English. When encountering a page in another language, Chrome will (since a decade ago or so) helpfully ask you to auto-translate by default. When you click the "Never translate <language>" button, it adds that language to the list which is sent with every HTTP request the browser makes via the `Accept-Language` header (it's not obvious this happens unless you're the kind of person who lives in DevTools and inspects outgoing traffic).
Fast-forward N years: the Chrome privacy team realizes this increases the fingerprinting surface, making every user more unique, so they propose this: "Reduce fingerprinting in Accept-Language header information" (https://chromestatus.com/feature/5188040623390720)
So basically they compensate for one "feature" with another, instead of not doing the first thing in the first place.
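To see why a longer Accept-Language list widens the fingerprinting surface, here's a toy sketch (the header values and the truncated-hash "fingerprint component" are illustrative, not how any real tracker necessarily works):

```python
import hashlib

def fp_component(accept_language):
    # Any stable hash of the header value can serve as one signal
    # in a larger browser fingerprint.
    return hashlib.sha256(accept_language.encode()).hexdigest()[:12]

# Stock install vs. the same browser after clicking "Never translate German":
default = fp_component("en-US,en;q=0.9")
after_opt_out = fp_component("en-US,en;q=0.9,de;q=0.8")

# The extra language yields a different value, and the rarer the
# language combination, the more it narrows the user down.
```

The point is that the browser leaks a preference you set locally to every site you visit, whether or not the site asked.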
> Seriously, did none of Californian PMs hear about the concept of "being multilingual" and not needing to translate non-English content?
Sometimes it feels like Google keeps anyone with any kind of executive power hermetically sealed in some house borrowed from a reality TV show, where they're not allowed any contact with the outside world.
Nobody asked for it, and you can't find discourse on the subject or even give it a name. I've never had a feature make me feel more gaslit. And I pay for Premium, dammit.
This one is the worst. I can't imagine the thought process behind it. How on earth was it seen as a wanted feature, and especially without a simple way to disable it? This feels like they used AI to code this AI thing.
As with so many mistakes Google makes, this is letting the technical people front-run an interaction that should have been spearheaded by the social people.
From a technical standpoint, it's easy to think of AI-based cleanup as in the same category as "improving the compression algorithm" or "improving the throughput to the client": just a technically-mediated improvement. But people have a subjectively-different reaction between decreasing instances of bandwidth-related pixelation and making faces baby-smooth, and anyone on the community side of things could have told the team responsible (if they'd known about it).
Sometimes Google's tech-expert-driven-company approach has negative consequences.
Re: "without warning or permission". The YouTube Terms of Use require you to grant YouTube the (perpetual, worldwide, etc) right to prepare derivative works.
Reminder: there is no cloud, just other people's computers. And I for one support those other people's right to do what they want on their computers.
what it boils down to is that "enhancement" is one more pylon to swerve around in the ongoing pursuit of reality.
given the overwhelming volume of media, it's easy enough to adjust one's aesthetic monitoring and scan to simply ignore anything suspicious, or if it gets too bad, back right off
How about they turn off their recent asinine title translation feature? Now every creator has to opt out of it manually - and the users have no recourse short of browser extensions.
I suppose we should fire up those "AI" browsers and let them loose on YouTube in a while loop. They are just the right audience for "AI" enhanced content and YouTube's advertisers will be thrilled.
AI fearmongering probably produces a lot of clicks if upscaling gets labeled as "might bend reality". YT shouldn't be doing it without users' input, but the pearl-clutching is disproportionate.
A chill ran down my spine as I imagined this being applied to the written word online: my articles being automatically "corrected" or "improved" the moment I hit publish, any book manuscripts being sent to editors being similarly "polished" to a point that we humans start to lose our unique tone and everything we read falls into that strange uncanny valley where everything reads ok, you can't quite put your finger on it, but it feels like something is wearing the skin of what you wrote as a face.
The well is already poisoned. I'm refraining from hiring editors merely because I suspect there's a high chance they'll just use an LLM. I read all recent books with the suspicion that they have been written by AI.
However, "polished to a point that we humans start to lose our unique tone" is what style guides that go into the minutiae of comma placement try to do. And I'm currently reading a book I'm 100% sure has been edited by an expert human editor who did quite the job of taking away all the uniqueness of the work. So we can't just blame the LLMs for making things more gray when we have historically paid other people to do it.
> suspicion that they have been written by AI
"By AI" or "with AI"? If I write the book and have AI proofread things as I go, or critique my ideas, or point out which points I need to add more support for, is that written "by AI"?
When Big Corp says 30% of their code is now written "by AI," did they write the code by following thoughtful instruction from a human expert, who interpreted the work to be done, made decisions about the architectural impact, outlined those things and gave detailed instructions that the LLM could execute in small chunks?
This distinction, I feel, is going to become more important. AI tools are useful, and most people are using them for writing code, literature, papers, etc. I feel like, in some cases, it is not fair to say the thing was written by AI, even when technically it was.
Good point. I've read books with minor mistakes that slipped past the editor. Not a big deal, but it takes me out of the flow when reading. And they're things that I think an AI could easily catch.
I was listening to an interview (I'm having a hard time remembering the name now). The guest was asked how he decides what to read; he replied that one easy way for him to filter is that he only considers books published before the 70s. At the time, it sounded strange to me. It doesn't anymore; maybe he has a point.
There's a YouTuber named Fil Henley (https://www.youtube.com/@WingsOfPegasus) who has been covering this for some years, now. Xe regularly comments on how universal application of pitch correction in post as an "industry standard" has dragged the great singers of yore down to the same level of mediocrity as everyone else.
Xe also occasionally reminds people that, equal temperament being what it is, this pitch correction is actually in a few cases making people less well in tune than they originally were.
It certainly removes unique tone. Yesterday's was a pitch corrected version of a performance by John Lennon from 1972, that definitely changed Lennon's sound.
Why are you calling Fil Henley a "xe"? Misgendering a man as non-binary is still misgendering. Let's not normalize misgendering in any way. (And no, you don't get call misgendering a "stylistic choice")
He did it for attention. You giving him attention doesn’t help in any way.
Extremely good analogy and context with the pitch correction thing and equal temperament IMO.
We can only be stoic and say "slop is gonna be slop". People are getting used to AI slop in text ("just proofreading", "not a natural speaker") and they got used to artificial artifacts in commercial/popular music.
It's sad, but it is what it is. As with DSP, there's always a creative way to use the tools (weird prompts, creative uses of failure modes).
In DSP and music production, auto-tune plus vocal comping plus overdubs have normalized music regressing towards an artificial ideal. But inevitably, real samples and individualistic artists achieve distinction by not using the McDonald's-kind of optimization.
Then, at some point, some of this lands in mainstream music, some of it doesn't.
There were always people hearing the difference.
It's a matter of taste.
> is what style guides that go into the minutiae of comma placement try do do
Eh. There might be a tacit presumption here that correctness isn't real, or that style cannot be better or worse. I would reject this notion. After all, what if something is uniquely crap?
The basic, most general purpose of writing is to communicate. Various kinds of writing have varying particular purposes. The style must be appropriate to the end in question so that it can serve the purpose of the text with respect to the particular audience.
Now, we may have disagreements about what constitutes good style for a particular purpose and for a particular audience. This will be a source of variation. And naturally, there can be stylistic differences between two pieces of writing that do not impact the clarity and success with which a piece of writing does its job.
People will have varying tastes when it comes to style, and part of that will be determined by what they're used to, what they expect, a desire for novelty, a desire for clarity and adequacy, affirmation of their own intuitions, and so on. We shouldn't sweep the causes of varying tastes under the rug, however.
In the case of AI-generated text, the uncanny, je ne sais quoi character that makes it irritating to read seems to be that it has the quality of something produced by a zombie. The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, procession, and relevance that reads like something someone on amphetamines or The View might say. It's all surface.
dsign's callout of the minutiae of comma placement is a useful starting point because it's largely rhythmic, and monotony, you could say, is the enemy of rhythm. My go-to example here would probably be the comma splice, which is inflicted on people learning to write in English (while at the same time being ignored by more sophisticated writers) but doesn't exist in e.g. French.
I can be convinced that different spaces need different styles. But, correctness intrinsically emanating from language? That one is not an absolute, unless one happens to be a mathematician or GHC the Haskell compiler or any of the other logical automatons we have and that are so useful.
Language and music (which is a type of language) are a core of shared convention wrapped in a fuzzy liminal bark, outside of which, there is nonsense. An artist, be it a writer or a musician, is essentially somebody whose path stitches the core and the bark in their own unique way, and because those regions are established by common human consensus, the artist, by the act of using that consensus, is interacting with its group. And so is the person who enjoys the art. So, our shared conventions and what we dare call correctness are a medium for person-to-person communication, the same way that air is a medium to conduct sound or a piece of paper is a medium for a painting.
Furthermore, the core of correctness is fluid; language changes, and although at any time and place there is a central understanding of what good style is, the easy rules, such as they exist, are limited and arbitrary. For example, two different manuals of style will mandate different placements of commas. And somebody will cite a neurolinguistics study to dictate the ordering of clauses within a sentence. For anything more complex, you need a properly trained neural network to do the grasping, be it a human editor or an LLM.
> The grammatical structure is obviously there, but at a pragmatic level, it lacks a certain cohesion, procession, and relevance that reads like something someone on amphetamines or The View might say. It's all surface.
Somebody on amphetamines is still intrinsically human, and here too we have some disagreement. I cannot concede that AI’s output is always of the quality produced by a zombie, at least no more than the output of certain human editors, and at least not by looking at the language alone; otherwise it would be impossible for the AI to fool people. In fact, AI’s output is better (“more correct”) than what most people would produce if you forced them to write with a gun pointed at their head, or even with a large tax deduction.
What makes LLMs irritating is the suspicion that one is letting one’s brain engage with output from a stochastic parrot in contexts where one expects communication from a fellow human being. It’s the knowledge that, at the other end, somebody may decide to take your attention and your money dishonestly. That’s why I have no trouble paying for a ChatGPT plan (it’s honest, I know what I get) but hesitate to hire a human editor. Now, if I could sit at a café with said editor and go over their notes, then I would rather do just that.
In other words, what makes AI pernicious is not a matter of style or correctness, but that it poisons the communication medium: it seeds doubt and distrust. That’s why people (yours truly included) are burning manuals of style and setting up shop in the bark of the communication medium, knowing that’s a place less frequented by LLMs and that there is a helpful camp filled with authoritative figures whose job of asserting absolute correctness may, perhaps, keep the LLMs in that core for a little longer.
Those are workarounds, however. It's too early to know for sure, but I think our society will need to rewrite its rules to adjust to AI: anything from seclusion and attestation rituals for writers to a full-blown Butlerian Jihad. https://w.ouzu.im/
If something needs editing, why would you care what tool they use?
It’s like saying you wouldn’t hire an engineer because you suspect they’d use computers rather than pencil and paper.
Because "edited" is not a singular point.
It's more like hiring a chef and getting a microwave dinner.
to further this point: a lot about writing is style. editors sometimes smother the style in the name of grammar, conventions, correctness, or inoffensiveness. sometimes the incorrectness is the entire point, and the editor erases the incorrectness not realizing it was intentional.
ive heard many professions complain about their version of “editors”, from comedians to video producers and radio jockeys.
What's the line? If they use Microsoft Word or Grammarly to ease the process, is that OK? Both use AI. Is there anyone in the world who isn't using this tech even before an editor looks at it?
For me, an important distinction is whether or not a human is reviewing the edits suggested by an AI.
I toss all of my work into Apple Pages and Google Docs, and use them both for spelling and grammar check. I don't just blindly accept whatever they tell me, though; sometimes they're wrong, and sometimes my "mistakes" are intentional.
I also make a distinction between generating content and editing content. Spelling and grammar checkers are fine. Having an AI generate your outline is questionable. Having AI generate your content is unacceptable.
Engineering is making sure stuff works first, art distant second.
Even if the text is a simple article, a personal touch / style will go a long way to make it more pleasant to read.
LLMs are just making everything equally average, minus their own imperfections. Moving forward, they will inbreed while everything becomes progressively worse.
That's death to our culture.
It’s worse. Even things not written by AI—like this comment—will slowly converge with each other in style as humans adapt by trying to avoid the appearance of having used AI. It won’t even be AI itself that causes this but human perception of what AI writing looks and feels like.
This is why shadow banning rubbed people so wrong. I can't prove it, but I gave up on online dating a long time ago because I found a couple of automated systems would just silently not send messages and not tell you, in the middle of an already active conversation.
There has never been a better time to collect analog.
I guess this is where checksums and digital signatures come in, to prevent unauthorized stuff like this?
that would be straight up "I was born from my own sister"[1] moment
1: Chohei Kambayashi. (1994). Kototsubo. as yet unavailable in English
You've basically just described the last few years of journalism. We are lucky if a human even wrote the seed story for it.
Maybe it's time for people to realize that you create product inside a product. Those T&S didn't write themselves. Not defending them.
This is what tech bros in SV built and they all love it.
My guess is that guys being replaced by the steam shovel said the same thing about the quality of holes being dug into the ground. "No machine is ever going to be able to dig a hole as lovingly or as accurately as a man with a shovel". "The digging machines consume way too much energy" etc.
I'm pretty sure all the hand wringing about A.I. is going to fade into the past in the same way as every other strand of technophobia has before.
I'm sure you can find people making arguments about a lack of quality from machines about textiles, woodworking, cinematography, etc., but digging holes? If you have a source of someone complaining about hole quality I'll be fascinated, but I'm more so thinking about a disconnection here:
It looks like you see writing & editing as a menial task that we just do for its extrinsic value, whereas these people who complain about quality see it as art we make for its intrinsic value.
Where I think a lot of this "technophobia" actually comes from though are people who do/did this for a living and are not happy about their profession being obsolesced, and so try to justify their continued employment. And no, "there were new jobs after the cotton gin" will not comfort them, because that doesn't tell them what their next profession will be and presumes that the early industrial revolution was all peachy (it wasn't).
When I see an argument like this I'm inclined to assume the author is motivated by jealousy or some strange kind of nihilism. Reminds me of the comment the other day expressing perplexity over why anyone would learn a new language instead of relying on machine translation.
There is a difference.
Excavation is an inherently dangerous and physically strenuous job. Additionally, when precision or delicateness is required human diggers are still used.
If AI was being used to automate dangerous and physically strenuous jobs, I wouldn't mind.
Instead it is being used to make everything it touches worse.
Imagine an AI-powered excavator that fucked up every trench that it dug and techbros insisted you were wrong for criticizing the fucked up trench.
> Instead it is being used to make everything it touches worse.
Your bias is showing through.
For what it's worth, it has made everything I use it for much better. I can find things on the web in mere seconds, where previously it could often take hours of tedious searching and reading.
And it used to be that Youtube comments were an absolute shit show of vitriol and bickering. A.I. moderation has made it so that now it's often a very pleasant experience chatting with people about video content.
there is no way you aren't able to discern the obvious differences between physical labor such as digging a hole and something as innate to human nature as creativity. you realize just how hollow a set of matrix multiplications are when you try to "talk to it" for more than 3 minutes. the whole point of language is to talk to other people and to communicate ideas to them. that is something that requires a human factor; otherwise the ideas are simply regurgitations of whatever the training set happened to contain. there are no original ideas in there. a steam shovel, on the other hand, does not need to be creative or to have a human factor. it's simply digging a hole in the ground
> you realize just how hollow a set of matrix multiplications are when you try to "talk to it" for more than 3 minutes.
Then again, it only takes 2 minutes to come to that realization when talking with many humans.
So why are you wasting your precious comments on the hollow humans here? Leave us alone and talk to LLMs. No doubt they will tell you you’re absolutely right.
Oops, something went wrong
DDT has been banned, nuclear reactors have been banned in Germany, many people want to ban internal combustion engines, supersonic flight has been banned.
Moreover, most people have more attachment to their own thoughts or to reading the unaltered, genuine thoughts of other humans than to a hole in the ground. The comment you respond to literally talks about the Orwellian aspects of altering someone's works.
Don't let ideas like human rights and dignity get in the way of the tech marketing hype...
Reading leads to the actual thoughts in our brains. It's a form of self programming. So yeah, it's OK for people to care about what they consume.
And shovelling leads to actual muscles in our arms. People said that calculators would be the end of mathematical intelligence too, but it turns out to be largely a non-issue. People might not be as adept at calculating proper change in their heads today, but does it have a real-world consequence of note? Not really.
You realize that making an analogy doesn't make your argument correct, right? And comparing digging through the ground to human thought and creativity is an odd mix of self debasement and arrogance. I'm guessing there is an unspoken financial incentive guiding your point of view.
ta8645 did not make an analogy, nor did they use it to support an argument.
They posited that a similar series of events happened before, and predicted they will happen again.
Why, pray tell, would a similar series of events be relevant to a completely different series of events except as analogy? Let me use an extremely close analogy to illustrate:
Imagine someone shot a basketball, and it didn't go into the hoop. Why would telling a story about somebody else who once shot a basketball which failed to go into the hoop be helpful or relevant?
Your extremely close analogy gets to the crux of why people are disagreeing here: It doesn’t have to be analogy. You can be pointing out an equivalence.
Regardless this was my whole point. The original point was a fallacy: https://en.m.wikipedia.org/wiki/False_equivalence
I'd be interested in your reason for thinking so but I think you can see your supporting argument is not compelling:
> And comparing digging through the ground to human thought and creativity is an odd mix of self debasement and arrogance.
> I'm guessing there is an unspoken financial incentive guiding your point of view.
That's the definition of using an analogy to support an argument.
No one ever wrote a song or erected a statue for a steam shovel.
https://en.wikipedia.org/wiki/John_Henry_(folklore)
There’s a song about Mike Mulligan and his Steam Shovel, and there’s a monument to the Marion Steam Shovel in Le Roy, New York…
I don't think parking an old steam shovel is much of a monument, but I'll give that one to you. No one built it for display, but they did put one there for that purpose, so I'll meet you halfway. I was wrong to suggest no one would do so, and there is clearly interest in such a thing, but I can't say that I agree that a statue exists. The song exists, the steam shovel monument exists. Appreciate the correction.
https://en.wikipedia.org/wiki/Marion_Steam_Shovel_(Le_Roy,_N...
The only way to know for sure that something was written by a human: It contains racism, or any other opinion AIs are forbidden to express.
Now imagine the near future of the Internet, when all people have to adapt to that in order to not be dismissed as AI.
Most of the big LLMs have those restrictions, but not all.
How naive. Grok generates racism as a service, and Elon does his best to tune it that way.
Well then we will have to fucking swear in everything we fucking write, to identify ourselves as humans, since AI doesn't like nasty language. And we should also insult other participants, since AI will almost never take an aggressive stance against people it is conversing with, you god damned piece of shit.
I’ve gotten a local model to be pretty nasty with the right prompt, minus the expletives. It took every opportunity to tell me how inferior that puny human is it was forced to talk to.
"A spark of excitement ran through me imagining this applied to writing online: my articles receiving instant, supportive refinements the moment I hit publish, and manuscripts arriving to editors already thoughtfully polished—elevating clarity while letting our distinctive voices shine even brighter. The result is a consistently smooth, natural reading experience that feels confidently authentic, faithfully reflecting what I wrote while enhancing it with care."
> thoughtfully polished
> enhancing it with care
I get what you’re going for with this comment, but it seamlessly anthropomorphizes what’s happening in a way that has the opposite impact I think.
There is no thoughtfulness or care involved. Only algorithmic conformance to some non-human synthesis of the given style.
The issue is not just about the words that come out the other end. The issue is the loss of the transmission of human thoughts, emotions, preferences, style.
The end result is still just as suspect, and to whatever degree it appears “good”, even more soulless given the underlying reality.
You do realize I just took the parent's comment and gave an AI the prompt to rewrite it, changing everything negative to a positive?
Thank you for dumping that load of excrement here. Maybe someone can use it as fertilizer.
I gave you a glimpse of the future.
Yes, that was pretty obvious, and that's part of why I wrote what I did.
Yeah, we all saw the dash
> manuscripts arriving to editors already thoughtfully polished
except those editors will still make changes. that's their job. if they start passing manuscripts through without changes, they'd be nullifying their jobs.
you forgot to add "tapestry"!
Everything has to be produced on an assembly line. No mistakes allowed. Especially creativity. /s
the /s is sarcastic right? Artisanal creativity isn't efficient enough.
you realize how ridiculous this is, in some ways, since a "master copy" of anything that is reproduced, is just like reproducing a machine-stamped master copy.. in the digital artifacts world, it is even more true
"Gee, sure would save editors a lot of time and effort if we just auto-spellchecked the manuscripts in the hopper, wouldn't it?"
> "You know, YouTube is constantly working on new tools and experimenting with stuff," Beato says. "They're a best-in-class company, I've got nothing but good things to say. YouTube changed my life."
My despondent brain auto-translated that to: "My livelihood depends on Youtube"
Maybe that statement was just an AI edit and he actually said "YouTube is an evil scourge upon the planet".
As a consumer they are the most hostile platform to consume a video the way I want. Not the way they want me to. I am also required to use an adblocker to disable all shorts.
As a creator, they are also the most hostile platform, randomly removing videos with no point of contact for help, or fully removing channels (with livelihoods behind them) because of "a system glitch", but again, no point of contact to get it fixed.
Pretty much every creator I follow has complained about something being removed without any clear explanation or the ability to contact anyone and ask questions.
Say what you want about Microsoft, but if I have a problem with something I've pretty much always ended up getting support for that problem. I think Google's lack of response adds to their "mystique".
But it also creates superstitions since creators don't really understand the firm rules to follow.
Regardless, it is one of the most dystopian things about modern society - the lack of accountability for their decisions.
Youtube needs a far greater amount of bureaucracy than it has, despite how scary that word is to tech people. Google's automated approach is clearly not capable of keeping up with the scale and nuance of the website.
It's worth stating, though, that the vast majority of YouTube's problems are the fault of copyright law and massive media publishers. Google couldn't care less if you wanted to upload full camrips of 2025's biggest blockbusters, but the powers that be demand Google take it down immediately. This is why 15 seconds of a song playing in the background gets your video demonetized.
YouTube is “riding a tiger,” and the moment creators realize they hold the real power, the game is up. I believe the platform purposely creates a fear of the unknown with intermittent reward–punishment cycles. Random rule enforcement, videos taken down, strikes, demonetization, throttling... The algorithm becomes a sort of deity that people try to appease, and they turn into cultish devotees performing endless rounds of strange rituals in hopes of "divine" monetization favor.
I don't mind the ads as much as all the mandatory meta-baiting. Not the MB itself, but the mechanisms behind it.
Even if you produce interesting videos, you still must MB to get the likes, to stay relevant to the algorithm, to capture a bigger share of the limited resource that is human attention.
The creators are fighting each other for land, our eyeballs are the crops, meanwhile the landlord takes most of the profits.
There is much data to support that asking for likes and subs actually increases likes and subs.
Right, that's the issue. I really doubt that creators love having to spam the same "Don't forget to like/subscribe/comment!" message in every single video they produce, but Youtube forces them to.
As a viewer I certainly hate that crap and wish Google didn't intentionally make it this way.
That is my entire point. The creators fight each other in a pit, for the Emperor's amusement.
I also need to use ReVanced on my Android phone so I can hide all Shorts-related things.
And the other day he posted about the abusive copyright claims he has to deal with that cost him a lot of money and could maybe have his channels closed.
Although xe lays the blame for those at the feet of Universal Music Group, not YouTube. Apparently, UMG simply refuses to learn from the experience of having thousands of copyright claims rejected on fair use grounds.
It's almost as if there's a mindless robot submitting the claims to YouTube. Perish the thought! (-:
Can you please stop doing that
Beato is a musician and a producer. He just finds making YouTube videos an easier way to earn a living. He's said many times how frustrating it is as a producer to work with musicians.
I push back on the idea there is anything despondent there. If YouTube was enabling my lifestyle I'd be pretty happy about the situation and certainly not about to start piling public pressure on them. These companies get enough hate from roving bands of angry internet denizens.
Touching up videos is bad but it is hardly material to break out the pitchforks compared to some of the political manoeuvres YouTube has been involved in.
Oh I have no idea about the brain state of Beato. Just my brain read it that way automatically.
I mean, it is Rick Beato; he tries really hard to have the most polarizing opinion every single time.
What's the biggest example outside of his thumbnails?
So much of the channel consisting of Hot Spicy Take content really turned me off from hearing anything else he has to say, which is unfortunate, because I liked his music theory videos when I was learning about that.
Lots of very hateful, negative content too. It didn’t take me long to find the video “why this new artist sucks.” Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.” Maybe, somehow, these two things are related.
> Lots of very hateful, negative content too. It didn’t take me long to find the video “why this new artist sucks.”
If you're referring to his video I'm Sorry...This New Artist Completely Sucks[1], then it's a video about a fully AI generated "artist" he made using various AI tools.
So it's not hateful against anyone. Though the title is a bit clickbait-y, I'll give you that.
[1]: https://www.youtube.com/watch?v=eKxNGFjyRv0
While I think he has his cranky old man moments and he isn’t for everyone, his titles are far more spicy and hateful than the actual content. He doesn’t just hate everything new because it is new. He also has plenty of videos loving on things old and new.
> “why this new artist sucks.”
That's about AI, not very polarizing at the level it's currently at.
> Another find, what I assume is an overblown small quibble turned into clickbait videos, was “this record label is trying to SILENCE me.”
That might be overblown, but it doesn't sound polarizing at all. OP was saying he always has the most polarizing opinions.
If that last one is the vid I'm thinking of, the same record company has sent him hundreds of copyright strikes and he has to have a lawyer constantly fighting them for fair use. He does some stuff verging on listen-along reaction videos, but the strikes he talks about there are when he is interviewing the artists who made the songs and they play short snippets of them for reference while talking about the history of making them, the thought process behind the songwriting, etc.
I think it's not just automated Content ID stuff claiming the monetization, but the same firm for that label going after him over and over, where 3 strikes removes his channel. The title or thumbnail might be overblown; probably the firm just earns a commission and he's dealing with a corporate machine that is scattershotting against big videos with lots of views that contain any of their sound, rather than targeting him to silence something they don't want to get out. But I don't think the video was very polarizing.
xkcd 2015 is closer than you think, due to the magical technology of money: https://xkcd.com/2015/
Wow, that xkcd really scares me. I Have No Mouth, and I Must Scream. It's definitely something that could realistically happen in the near future, maybe even mandated by the EU
Google People[1] vibes
[1]https://qntm.org/perso
What am I looking at? Is this a transcript of a real thread formatted in a flat manner? Or is this fiction?
What I want to say for fun is: "Oooohhhh who knooowsss?"
But serious discussion demands the truth: It is fiction, in the style of a twitter thread.
Like the part about the birthdate is supposed to be humor maybe? But Google already knows how old you are.
That's why I think it's funny that they claim they will now be "using AI" to determine if someone is an adult and able to watch certain youtube videos. Google already knows how old you are. It doesn't need a new technique to figure out that you're 11 years old or 39 years old. They're literally just pretending to not know this information.
Unfortunately the article doesn't have an example, or a comparison image. Other reports are similarly useless as well. The most that seemed to happen is that the wrinkles in someone's ear changed. In case anyone else wants to see it in action:
https://www.reddit.com/r/youtube/comments/1lllnse/youtube_sh...
I skimmed the videos as well, and there is much more talk about this thing than there are examples of it. As this is an experiment, I guess all this noise serves as feedback to YouTube.
If you click through to Rhett Schul's (sp?) video you can see examples comparing the original video (from non-Shorts videos) with the sharpened video (from Shorts).
Basically YouTube is applying a sharpening filter to "Shorts" videos.
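For anyone curious what a sharpening filter actually does mechanically, here is a toy unsharp-mask sketch in Python with NumPy. This is purely illustrative and assumes nothing about YouTube's actual pipeline; the function names, kernel size, and amounts are my own.

```python
import numpy as np

def box_blur(img, k=3):
    """Naive box blur: average each pixel over a k x k neighborhood
    (edges handled by replicating border pixels)."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def unsharp_mask(img, amount=1.0, k=3):
    """Sharpen by adding back the high-frequency residual (img - blurred),
    then clipping to the valid 8-bit range."""
    blurred = box_blur(img, k)
    sharpened = img + amount * (img - blurred)
    return np.clip(sharpened, 0, 255)

# A flat region has no high-frequency residual, so it is left untouched;
# only edges and texture get extra contrast.
flat = np.full((5, 5), 128.0)
assert np.allclose(unsharp_mask(flat), flat)
```

The point of the toy: sharpening amplifies whatever local contrast is already there, which is also why over-applying it makes skin pores, wrinkles, and compression artifacts pop in an uncanny way.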
This makes sense. Saying YT is applying AI to every single video uploaded would be a huge WTF kind of situation. Saying that YT has created a workflow utilizing AI to create a new video from the creator's original video to fit a specific type of video format that they want to promote even when most creators are NOT creating that format makes much more sense. Pretty much every short I've seen was a portrait crop from something that was obviously originally landscape orientation.
Do these videos that YT creates to backfill their lack of Shorts get credited back to the original creator as far as monetization from ads?
This really has the feel of the delivery apps making websites for restaurants that did not previously have one, without the restaurant knowing anything about it, setting higher prices on the menu items, and keeping that extra money instead of paying it to the restaurants.
I saw the sharpening, and listened to the claims of shirt wrinkles being weird and so on, but I didn't deem these to be on the level of the original claim, which is that "AI enhancements" are made to the video, as in, new details and features are invented in the video. In the ear example, the shape of the ear changed, which is significant because I'd never want that in any of my photos or videos. The rest of the effects were more "overdone" than "inventive".
Although, I probably wouldn't want any automatic filtering applied to my video either, AI modifications or not.
This is what I've been noticing this past week! There have been a handful of videos that looked quite uncanny but were from creators I knew, and a few from unknown sources I completely skipped over because they looked suspect.
Have to say, I am not a fan of the AI sharpening filter at all. Would much prefer the low res videos.
Flickr used to apply an auto-enhancement (sharpening, saturation, etc) effect to photos[0]. It would be really weird seeing a photo locally and then see the copy on Flickr that looked better somehow.
Aside: The mention of Technorati tags (and even Flickr) in the linked blog post hit me right in the Web 2.0 nostalgia feels.
[0] https://colorspretty.blogspot.com/2007/01/flickrs-dirty-litt...
YouTube Is Using AI to Alter Content (and not telling us) [video] - https://news.ycombinator.com/item?id=44912648 9 days ago (6 points, 0 comments)
‘Member journalism? When any effort was actually made for news articles?
IME this is a long-standing thing - failing to include visuals for inherently visual news stories. They're geared towards text news stories for whatever reason.
Rare to come by, that's for sure. Although, I'm not incentivizing them either.
YouTube has responded:
> We hear you, and want to clear things up! This is from an experiment to improve video quality with traditional machine learning – not GenAI. More info from @YouTubeInsider here:
> No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)
> YouTube is always working on ways to provide the best video quality and experience possible, and will continue to take creator and viewer feedback into consideration as we iterate and improve on these features
https://x.com/TeamYouTube/status/1958286550229541158
Love the "[company] is always working on ways to provide the best..." that's always in these explanations, like "you actually just caught us doing something good! You're welcome!"
All of which is pretty reasonable, especially for shorts, which are meant to be thrown directly in the trash after being used to collect some ad revenue anyway, right?
This outrage feels odd. TVs have "improved" movies for ages; YouTube doing it with machine learning is the same idea. Are we really upset because an ear looks a bit clearer?
No, people are upset because Youtube is editing their content without telling them. If they really thought this was a high value add they could have added an enhance button to let creators opt in, as has been done elsewhere. I wouldn't like it if HN started "optimizing" the wording my comments without telling me, even if it made them better along some metric.
You’re conflating editing with rendering. YouTube didn’t overwrite creators' uploads; it applied an ML filter in the streaming/transcode pipeline, the same layer that already resizes, compresses, and tone-maps. That's not "editing my content" any more than your TV's sharpness setting edits a film. An "Enhance" toggle/label would be good UX, but calling it silent edits misdescribes what's happening.
PS: this isn't "generative AI". It's basic ML enhancement (denoise/sharpen/tone-map).
"Editing" implies they are applying some kind of editorial change. From what I've seen it's a sharpening/upscaling filter to improve visual quality. If your issue is that Youtube is changing the quality of the video, well, they have been doing that since the very first video every uploaded to Youtube. All Youtube videos are compressed, they have always had that ugly softness to them.
I'm not seeing the outrage here.
> YouTube did not respond to the BBC's questions about whether users will be given a choice about AI tweaking their videos.
Says everything. Hey PM at YouTube: How about you think stuff through before even starting to waste time on stuff like this?
>How about you think stuff through before even starting to waste time on stuff like this?
What makes you think they don't think it through? This effect is an experiment that they are running. It seems to be useless, unwanted from our perspective, but what if they find that it increases engagement?
> What makes you think they don't think it through?
Basing it on a lot of stupid decisions youtube has made over the years, the last being the horrendous autotranslation of titles/descriptions/audio that can't be turned off. Can only be explained by having morons making decisions, who can't imagine that anyone could speak more than one language.
Someone at google needs this promotion to feel like a real man
I think it's more the case that in today's Google, you need that promotion to stay employed.
[dead]
They turned their brains off many years ago. Now it's all about AI, showing ads down our throats and keep children hooked to their iPads.
PM to everyone else: what are you going to do, publish on Odysee?
As long as YouTube continues to be the Jupiter sized gorilla in the room, they're not going to care very much about what the plebes think.
I'm starting to think monopolistic private companies aren't always benign actors.
[dead]
YouTube says this was done for select YouTube Shorts as a denoising process. However, the most popular channels on YouTube, which seem to be the pool selected for this experiment, typically already have well-lit and graded videos and shouldn't benefit much from extra denoising from a visual point of view.
It's true though that aggressive denoising gives things an artificially generated look since both processes use denoising heavily.
Perhaps this was done to optimize video encoding, since the less noise/surface detail there is the easier it is to compress.
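The compression angle is easy to demonstrate. Here is a small sketch (my own toy example, using Python's standard zlib, nothing to do with YouTube's actual encoder): the same smooth signal with and without per-sample noise, compressed with DEFLATE.

```python
import random
import zlib

random.seed(0)

# A smooth "signal": a slow ramp, like a clean gradient in a video frame.
signal = [min(255, i // 16) for i in range(4096)]

# The same signal with small random sensor-style noise on each sample.
noisy = [min(255, max(0, s + random.randint(-8, 8))) for s in signal]

clean_size = len(zlib.compress(bytes(signal)))
noisy_size = len(zlib.compress(bytes(noisy)))

# The noisy version compresses substantially worse, because the noise is
# incompressible entropy the codec must spend bits on.
print(clean_size, noisy_size)
```

Running it, the noisy buffer comes out considerably larger after compression, which is exactly why denoising before encoding saves bitrate.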
Can I just start a petition to remove Shorts entirely?
At this point, the stuff I'd want to remove:
- auto-dubbing
- auto-translation
- shorts (they're fine in a separate space, just not in the timeline)
- member only streams (if I'm not a member, which is 100% of them)
The only viable interface for that is the web and plenty of browser extensions.
> The only viable interface for that is the web and plenty of browser extensions.
there are ways to get this same experience with android. Use https://github.com/ReVanced/ and make your phone work for you instead of working for someone else.
ReVanced also has the additional benefit of blocking ads, allowing background play and auto-skipping sponsorships thanks to SponsorBlock.
Also, if you have an Android TV, I'd suggest SmartTube, it's way better than the original app and it has the same benefits of ReVanced: https://github.com/yuliskov/SmartTube
Doesn't help against autodubbing and autotranslation.
> - shorts (they're fine in a separate space, just not in the timeline)
No they're not. Nothing that mandates vertical video has ever been fine nor ever will be. Tiktok, Reels, Shorts, all bad and should be destroyed.
Unless the action is primarily vertical, which is rarely ever the case, it's always been and always will be wrong.
Yes I will die on this hill. Videos that are worse to watch on everything but a phone and have bad framing for most content are objectively bad.
There is nothing wrong with the concept of short videos of course, but this "built for phones, sucks for everything else" trash needs to go away.
Have you ever encountered portrait photos? They're orientated vertically because the human form, either head, bust or full body, fits better and excludes distractions.
Vertical videos, if they're focused on a human, work fine for the same reason.
I'd be interested in seeing an example of a well-composed 9:16 portrait photo. All the ones I have found look awkward.
Yeah, portrait photos aren't as narrow as that. I just measured some of mine, and they're 5x7, 8x10, and 11x16. By comparison, 9x16 feels claustrophobic.
I suspect that a still image is also different from video because, without motion, there's no feeling that the person might move a few inches to one side and go out of frame.
You might be dead on that hill then. That ship has sailed long ago. Short format is mostly consumed on phones in vertical. Long form is still standard widths.
“Everything but a phone” is a tiny tiny percentage of the devices used to consume content on YouTube. It’s not just mobile first, it’s basically only mobile…
SmartTube for Android TV can do that
Use the Unhook extension
Or https://github.com/gijsdev/ublock-hide-yt-shorts.
Unhook is great. Makes youtube on desktop bearable. Unfortunately does not work on phones :(
[dead]
Just don't watch them?
This is a very common response where users acquiesce to an internet of mediocrity rather than demanding the corporations do better
I mostly don't watch them. But they literally spam every single search. (While we're at it, Youtube also isn't very good at honoring keywords in searches either)
How does that remove them?
do you feel a need to stop other people doing things you personally don't like?
I want to remove them from my own feed. I want the button that says "hide" or "show fewer shorts" to actually work and ideally hide them forever. I have to play whack-a-mole on the different devices and browsers to try to hide shorts.
[flagged]
Well you aren’t wrong but the attitude isn’t helping.
It's "my feed" insofar as that phrase explains it's not about disabling something for others. It isn't my feed as far as who actually controls it is concerned.
Yes. That’s the basis of literally every law and regulation known to man.
thank goodness that in some countries we have the concept of a private life, where you don't have to like what we do and you can't stop it.
[flagged]
are they making you go on YouTube?
Yes?
Until content starts being published elsewhere, it's fair to say we are forced to go to YouTube to access it.
who's forcing you?
Everyone who puts mandatory stuff on YouTube and only there. Two examples I faced recently:
- Companies who put their product instruction manual exclusively on YouTube
- university curricula that require you to watch content that is on YouTube only.
Sure I'm free not to buy any manufactured products or not resume my studies, but it's like saying the Gulag was OK because people were free not to criticize Stalin.
the shorts are on the home page for doomscrolling. all the examples above will give you a playlist or will embed the videos in their pages. I don't see how shorts on the home page are a problem here? could you clarify please?
Easy on the website. Very click and swipe intensive on the phone in my opinion. Shorts are front and centre of the app and the search screens. I don't see any feed of suggested videos anymore.
Why would someone use the YouTube app? It's cancer.
It's cancer unless you use ReVanced, which makes it almost nice.
Just don't smoke/eat junk/do drugs etc. They put addictive shit in your face and force you to use their bloated interface to access the service.
The response to junk food and cigarettes and drugs is to avoid them, not make them illegal.
It's fair to make it illegal to sell or advertise those things to children at least.
Didn't say make them illegal, it's more about advertising and forcing people into situations that make it difficult to say no.
The worst thing for me is they don't show the channel names. So many of the channels pushing Star Wars shorts have quite obvious bot names, and it's hard to filter these from legitimate SW content creators who are, on top of that, all using the same damn AI voice.
If I hear an AI voice I click the little menu button with three dots, then click don't show this channel or whatever it says.
The Venn diagram of AI voice users and good content creators is pretty close to two separate circles. I don't really care about the minority in the intersection.
Except that now Youtube also "helpfully" auto-dubs legitimate videos in other languages (along with translating the titles) by default, so even the 'AI voice' isn't a good signal for gauging whether it's quality content or not.
As a french-speaking person, I now find myself seeing french youtubers seemingly posting videos with english titles and robotic voice, before realizing that it's Youtube being stupid again.
What's more infuriating is that it's legitimately at heart a cool feature, just executed in the most brain-dead way possible, by making it opt-out and without the ability to specify known languages.
That's gonna have to be the content creators' and YouTube's problem, I don't care.
If we take them at their word then it's just an extension of technology to optimize video... and it's called AI because buzzwords and hence controversy.
> it's just an extension of technology to optimize video... and it's called AI because buzzwords and hence controversy.
The controversy is that YouTube is making strange changes to the videos of users, that make the videos look fake.
YouTube creators put hours upon hours on writing, shooting and editing their videos. And those that do it full time often depend on YouTube and their audience for income.
If YouTube messes up the videos of creators and makes the videos look like they are fake, of course the creators are gonna be upset!
It might be for making them compress better and less likely to buffer when you swipe up, or to help preloading, like Tiktok serving videos unencrypted so they're more likely to be in a local ISP cache.
> Perhaps this was done to optimize video encoding, since the less noise/surface detail there is the easier it is to compress.
If so it's really just another kind of lossy compression. No different in principle from encoding a video to AV-1 format.
Given the denoising is said to be aggressive enough to be noticeable on already compressed video, I think criticism of it is fair. It should just be distinguished from something like Tiktok's 'beautifier' modifications, which headlines like the BBC's bring to mind.
If AI is as wonderful and world-changing as people claim, it's odd that it's being inserted into products exactly like every other solution in search of a problem.
Would you say the same thing about the internet?
People voluntarily visit the internet because it gives them things they want. This is an article about unwelcome changes being automatically made.
Those things are different.
[dead]
If it's being added to a toaster for no good reason, sure. But the internet as a whole, through a browser? That's not comparable, people explicitly seek it out when they want to.
Yeah I would, if "Internet" came with zero safeguards or regulations and corporations put the onus on the user to sift through mountains of spam or mitigate credit card leakage risks when buying something online.
In the 90s? Sure, Pets.com was a solution in search of a problem. Same with any Uber for dog sitters or whatever from the 10s.
It was purely luck of the context that I noticed, but I received an email notification that someone had messaged me on LinkedIn via my gmail account. When this happens, the email contains the message contents. However, in this case the message contents did not match between the version within LinkedIn and the version presented in the email. Only two words were different, but that slight change made it even more peculiar and unsettling.
Maybe the sender edited the LinkedIn message? The email shows the original version, before the user edits.
Last week I went to buy a Philip K Dick eBook while on vacation. It was only $2 and my immediate thought was, “what are the odds this is some weird pirated version that’s full of errors? What if it’s some American version that’s been self-censored by Amazon to be approved by the government? What if it’s been AI enhanced in some way?”
Just the consideration of these possibilities was enough to shake the authenticity of my reality.
Even more unsettling is when I contemplate what could be done about data authenticity. There are some fairly useful practical answers such as an author sharing the official checksum for a book. But, ultimately, authenticity is a fleeting quality and I can’t stop time.
You're obviously reading way too much Philip K Dick. You need to make it more about personal identity crisis though, not just reality in general.
Alas, in this economy I cannot afford drug-induced paranoia.
Authenticity can be proven by saying things that upset censors. For example, if I mention Tiananmen Square, you can be sure my comment wasn't edited by the CCP's LLMs.
From the linked tweet from YouTube's head of editorial:
"No GenAI, no upscaling. We're running an experiment on select YouTube Shorts that uses traditional machine learning technology to unblur, denoise, and improve clarity in videos during processing (similar to what a modern smartphone does when you record a video)"
https://x.com/youtubeinsider/status/1958199532363317467?s=46
Considering how aggressive YouTube is with video compression anyways (which smooths your face and makes it blocky), this doesn't seem like a big deal. Maybe it overprocesses in some cases, but it's also an "experiment" they're testing on only a fraction of videos.
I watched the comparisons from the first video and the only difference I see is in resolution -- he compares the guitar video uploaded to YT vs IG, and the YT one is sharper. But for all we know the IG one is lower resolution, that's all it looks like to me.
This is an absolutely huge deal. It doesn't matter how small the scope of the change is, they thought it was a good idea to apply mandatory AI post-processing to user content without consent or acknowledgement.
Secret experiments are never meant to be little one-offs, they're always carried out with the goal of executing a larger vision. If they cared about user input, they'd make this a configurable setting.
Again, this isn't GenAI.
The idea of it being "without consent" is absurd. Your phone doesn't ask you for consent to apply smoothing to the Bayer filter, or denoising to your zoom. Sites don't ask you for consent to recompress your video.
This is just computational image processing. Phones have been doing this stuff for many years now.
This isn't adding new elements to a video. It's not adding body parts or changing people's words or inventing backgrounds or anything.
And "experiments" are just A/B testing. If it increases engagement, they roll it out more broadly. If it doesn't, they get rid of it.
Yeah, the big video platforms are constantly working on better ways to store and deliver video. If this stuff is applying to some workflow that automatically generates Shorts from real videos... whatever. Very similar to experimenting with different compression schemes. Video compression can differ on a per-shot basis now!
If you want to make pristine originals available to the masses, seed a torrent.
> Again, this isn't GenAI.
Yet.
How about we stick to the facts of what is actually happening?
I mean, I'm also not Brad Pitt. "Yet."
Given Google's history and the fact that they rolled this out without notice or consent, I feel comfortable saying "yet". If YouTube can get away with making GenAI YouTubers (via some likeness sign-off buried in the T&Cs) without paying the originals, I'm sure they'd love to do so. All the ad impressions with none of the payout to creators.
Their AI answers box (and old quick answer box) has already affected traffic to outside sites with answers scraped from those sites. Why wouldn't they make fake YouTubers?
> I mean, I'm also not Brad Pitt. "Yet."
Not with that attitude!
This is similar to how AI-enhanced photos are a non-issue. If one zooms into a photo taken by a Google Pixel device, you clearly see that these are no longer normal JPEG artifacts. Everything has such odd swirls in it, down to the smallest block.
If you watch the youtube video[1] linked in the article, you get much better examples that clearly look like AI slop. Though I do understand that people's ability to discern AI slop varies wildly.
[1] https://www.youtube.com/watch?v=86nhP8tvbLY
> that clearly look like AI slop
That's not what AI slop means. There's no GenAI.
I watched the video. It's literally just some mild sharpening in the side-by-side comparison.
Whatever youtube is doing adds a painted over effect that makes the video look like AI slop. They took a perfectly normal looking video, and made it look fake. As a viewer, if you can't tell or don't care... That's fine. For you. But at the very least, the creator should have a say.
I don't think you know what "AI slop" means.
It's not making the videos look fake, any more than your iPhone does. Most of what's shown in the example video, it might very well be phones applying the effect, not YouTube.
At no point did I say the video IS AI slop. Or that generative AI was used to make it, or the effect youtube applied to it. We actually have no idea what youtube did. We only see the result; which can be subjective.
To you, that result looks like it was shot with a phone filter. To me it looks like it was generated with AI. Either way, it doesn't really matter. It's not what the creator intended. Many creators spend a lot of effort and money on high-end cameras, lenses, lighting, editing software, and grading systems to make their videos look a specific way. If they wanted their videos to look like whatever this is, they would have made it that way by choice.
Isn't this similar to what e.g. Instagram and co have done for ages? Even smartphones do it automatically for you, digital post-processing to compensate for the limitations of the cameras.
> Even smartphones do it automatically for you, digital post-processing to compensate for the limitations of the cameras.
The level of post-processing matters. There is a difference between color grading an image and removing wrinkles from a face.
The line is not clear-cut, but these companies are pushing the boundaries so we get used to fake imagery. That is not good.
>There is a difference between color grading an image and removing wrinkles from a face.
You're implying the latter doesn't happen normally but denoising (which basically every smartphone camera does) often has the effect of removing details like wrinkles. The effect is especially pronounced in low light settings, where noise is the highest.
I’ve been using Instagram since the beginning and don’t think it has ever applied any kind of filter or AI upscaling, unrequested.
Maybe you’re thinking of TikTok and samsung facial smoothing filters? Those are a lot more subtle and can be turned off.
there is a "subtle" difference: acknowledgement and consent
There is barely any acknowledgement and consent on the phone's part too, so the difference is not there in this regard.
I've seen some Game of Thrones clips recently in youtube shorts which looked like they'd been generated by AI. I couldn't understand why anyone would have done that to the originally good-looking material. The only thing I could think was that it was some kind of copyright evasion.
As a fan of the early seasons, I get lots of suggestions for GoT clips. I assume that's done by the uploader to get around copyright blocks. Quite often they also add music, which would make it easier to get around sound detection.
I haven't noticed it outside copyrighted material, so it's probably intentional.
[dead]
The most charitable interpretation is it’s a very aggressive form of video compression. Denoising to reduce data and speed up video loads.
So I feel like the article doesn't address the "why" of it all. Why auto AI-upscale?
Maybe it's to make it more difficult to train AI video models from YouTube. Think about it, they have the raw footage so could use it if they want, but competitors using scrapers will have slightly distorted video sources.
Here's how I imagine it went:-
1. See that AI upscaling works kinda well on certain illustrations.
2. Start a project to see if you can do the same with video.
3. Develop 15 different quality metrics, trying to capture what it means when "it looks a bit fake"
4. Project's results aren't very good, but it's embarrassing to admit failure.
5. Choose a metric which went up, declare victory, put it live in production.
Boost visual quality, which improves viewer retention. So, money. I've tried many times to get a short with retention > 90%, that is, 90% of viewers watching all the way to the end. That's the key to going super viral. Very hard to do. I've had many shorts get around 75% and about 1k views but then die. Maybe I need some AI!
Is the visual quality really boosted? It seems to give a very distinctive, almost uncanny-valley look to the video.
AI upscale does not improve quality imo, I'd much prefer to watch grainy vhs originals to AI upscaled ones that insert weird shapes in the image.
This is especially bad in animation, where the art gets visibly distorted.
There are people out there who can vote and can (sometimes) buy and drink alcohol and who never used VHS in any capacity.
And a new generation that is trained on constantly enabled face filters and 'AI'-upscaled slop is already here.
> Boost visual quality
So to make edible stuff from shit.
Perceived quality? They tried to pull an "everything 4k@60Hz" for their 360p@30Hz low poly Stadia content as well.
To make people more accustomed to the AI generated look so that when they release their next Veo integration to YouTube content creator tools, these videos will stand out less as unnatural.
Sadly, this is a real possibility. I would even conjecture they are testing a new pipeline in which the input is real videos and the output is AI-generated.
For now it's a kind of autoencoding, regenerating the same input video with minimal changes. They will refine the pipeline until the end video is indistinguishable from the original. Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular underpaid creators can record videos and change their appearance to those of famous ones, making each content creator a brand to be sold. Eventually humans will get out of the pipeline and everything will be autogenerated, of course.
> Then, once that is perfected, they will offer famous content creators the chance to sell their "image" to other creators, so less popular underpaid creators can record videos and change their appearance to those of famous ones, making each content creator a brand to be sold.
I'm frightened by how realistic this sounds.
To me, this is the only thing that makes sense. Why else would you spend so much money doing this?
There's also the on-by-default, can't-be-disabled auto-dubbing YouTube performs on every video that's not in the browser's language. The dubbing quality is poor for the same reason: to intentionally expose viewers to AI content.
It's 100% a push to remove human creators from the equation entirely.
That's exactly it. All social media platforms are experimenting with replacing humans with AI.
Probably to reduce storage costs
But the upscaling isn't applied live/on viewing, right? The video being upscaled is still stored on their server and then streamed. How does it reduce storage costs?
Do you know that for a fact?
Maybe Google has done the math and realized it's cheaper to upscale in realtime than store videos at high resolution forever. Wouldn't surprise me considering the number of shorts is probably growing exponentially.
The economics don't make sense: each video is stored ~once (plus replication etc., but let's say O(1)) but viewed n times, so server-side upscaling on the fly is way too costly, and it's currently not good enough client-side.
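A back-of-the-envelope sketch of that trade-off (every unit cost below is invented purely for illustration; the real numbers are unknown to anyone outside Google):

```python
# Hypothetical unit costs, made up for illustration only.
STORAGE_PER_GB_MONTH = 0.02   # $ to keep one extra GB on disk for a month
UPSCALE_PER_VIEW = 0.001      # $ of compute to upscale one view on the fly

def cost_store_high_res(extra_gb, months):
    # Pay monthly for the extra bytes, regardless of view count.
    return extra_gb * STORAGE_PER_GB_MONTH * months

def cost_upscale_per_view(views):
    # Pay per view, regardless of how long the video sits on disk.
    return UPSCALE_PER_VIEW * views

print(cost_store_high_res(0.5, 120))   # extra storage over a decade
print(cost_upscale_per_view(100))      # upscaling a rarely watched short
print(cost_upscale_per_view(100_000))  # upscaling a popular short
```

With these invented numbers, on-the-fly upscaling wins for the long-tail short (~$0.10 vs ~$1.20 of extra storage over a decade) but loses badly for the popular one (~$100), which is exactly the stored-once-viewed-n-times argument: the crossover depends entirely on the view count distribution.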
Are you considering that the video needs to be stored for potentially decades?
Also shorts seem to be increasing exponentially... but Youtube viewership is not. So compute wouldn't need to increase as fast as storage.
I obviously don't know the numbers. Just saying that it could be a good reason why Youtube is doing this AI upscaling. I really don't see why otherwise. There's no improvement in image quality, quite the contrary.
I can’t think of a more dislike-able company than YouTube. I used to love youtube and watch it everyday and it would make me a happier, smarter person. Now youtube’s impact on their users is entirely negative and really the company needs to be destroyed. But they won’t be because they are now evil, and evil is profitable.
This was the last drop in an almost-full bucket for me & I finally made the jump to "disable" YouTube on my phone. And, honestly, my mental health improved a bit. I rarely still pull up YT on my laptop, but that's a different use pattern than on the phone.
(I don't have any other YouTube-like on my phone, particularly no TikTok. Actually started reading more books instead.)
It's obvious and clear that Google's end game is to completely replace the creator, and auto-generate all their videos. How far off we are from this, no one knows, probably not that far. Google likely already has all the data they need, it's now just about how long it will take to develop the AI.
> And the closer I looked it almost seemed like I was wearing makeup
Those AI skin enhancement filters are always terrible. Especially on men. Crazy they'd try it automatically. This isn't like the vocal boosting audio EQing they do without asking.
Google must have some questionable product management teams these days if they are pushing out this stuff without configuration. They're probably trying to A/B it for internal data to justify it before facing the usual anti-AI backlash crowd when going public.
The recent sinking in quality of youtube as a platform has been awful to watch.
Just a couple days ago I got an ad with Ned Flanders singing about the causes of erectile dysfunction (!), a huge cocktail of copyright infringement, dangerous medical advice and AI-generated slop. Youtube answered the report telling me they've reviewed and found nothing wrong.
The constant low-quality, extremely intertwined ads are starting to remind me of those on shady forums and porn pages of the nineties. I'm expecting them to start advertising heroin now that they've decided short-term profits trump everything else.
> Youtube answered the report telling me they've reviewed and found nothing wrong.
In other words, their Google Ads account is fully paid up. Copyright infringement only matters if you're a lowly uploader.
[dead]
When you upload a video to YT, it's heavily compressed. Your pristine creation is converted 10 ways to Sunday, which then can be played back in a variety of formats, speeds, and platforms. Long before you even uploaded that video, for free, to give the world the chance to see your creative genius, you agreed to this process, by agreeing to Youtube's T&C's.
People may be upset, and I get that. But it's not like the videos were in their original format anyway. If you want to maintain perfect video fidelity, you wouldn't choose YouTube. You chose YouTube because it's the path of least resistance. You wanted massive reach and a dead simple monetization route.
People are upset by visible or audible compression just the same.
Compression doesn't put words in your mouth. Do you honestly think they're the same thing, or are you just being pedantic on purpose?
I think you need to re-read the article.
Exactly this. The only reason video works at all is because the machine is trying to make changes to it that it doesn't think humans will notice.
I've noticed this for a while, when I accidentally click on YouTube Shorts. (I want to avoid it, because it's brain rot, but YouTube keeps enabling it and pushes it hard in notifications).
It's most glaringly obvious in TV shows. Scenes from The Big Bang Theory look like someone clumsily tried to paint over the scenes with oil paint. It's as if the actors are wearing an inch-thick layer of poorly applied makeup.
It's far less glaring in Rick Beato's videos, but it's there if you pay attention. Jill Bearup wanted to see how bad it could get and reuploaded the "enhanced" videos a hundred times over until it became a horrifying mess of artifacts.
The question remains why YouTube would do this, and the only answers I can come up with are "because they can" and "they want to brainwash us into accepting uncanny valley AI slop as real".
> It's most glaringly obvious in TV shows. Scenes from The Big Bang Theory look like someone clumsily tries to paint over the scenes with oil paint. It's as if the actors are wearing an inch thick layer of poorly applied makeup.
This might be the uploaders doing to avoid copyright strikes.
It's 100% this
There are shorts blocker addons available.
This is getting into conspiracy territory, but my personal assumption is that they're trying to gaslight people into thinking that these weird AI artifacts are just how videos work, so that it's harder to distinguish between real videos and AI-generated ones.
We can relax; it's only about YouTube shorts
For now.
I'm going to say something controversial, but why is this even surprising? Google and YouTube have been framing themselves as the kind of company that will appropriate your work and make money out of you. "You are the product" is repeated endlessly even on social media, and this is their private platform after all.
At this point, getting involved with youtube is just the usual naive belief that somehow you are the exception and bad things won't happen to you.
Now on to that stupid robot auto-translation on non-English videos that I never asked for and cannot turn off.
Those translations are not only unwanted but also ridiculously bad (which is part of the reason why they're unwanted, I guess). I have to translate back to the original English, as far as that's even possible, to get an idea of what the video might be about.
Who in his right mind thought this was a good idea??
I have a Firefox extension which tries to suppress the translations, but it only works for the main view, not for videos in the sidebar. It's better than nothing.
The "can not turn off" part is the most jarring. Seriously, did none of Californian PMs hear about the concept of "being multilingual" and not needing to translate non-English content?
---
By the way, this reminds me also of another stupid Google thing related to languages:
Say your Chrome is set to English. When encountering a page in another language, Chrome will (since a decade ago or so) helpfully offer to auto-translate by default. When you click the button "Never translate <language>", it will add that language to the list which is sent out with every HTTP request the browser makes via the `Accept-Language` header (it's not obvious this happens unless you're the kind of person who lives in DevTools and inspects outgoing traffic).
Fast-forward N years, and the Chrome privacy team realizes this increases the fingerprinting surface, making every user more unique, so they propose this: "Reduce fingerprinting in Accept-Language header information" (https://chromestatus.com/feature/5188040623390720)
So basically they compensate for one "feature" with another, instead of not doing the first thing in the first place.
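To make the mechanism concrete, here's a minimal sketch of what that header looks like before and after such clicks. The two header strings are hypothetical examples (not captured from a real Chrome install), parsed with the standard `lang;q=quality` syntax:

```python
# Hypothetical values: a stock install vs. one where the user clicked
# "Never translate Polish" and "Never translate German".
DEFAULT_HEADER = "en-US,en;q=0.9"
MODIFIED_HEADER = "en-US,en;q=0.9,pl;q=0.8,de;q=0.7"

def parse_accept_language(header):
    """Split an Accept-Language header into (tag, quality) pairs.

    Entries without an explicit ;q= default to quality 1.0.
    """
    pairs = []
    for part in header.split(","):
        tag, _, q = part.strip().partition(";q=")
        pairs.append((tag, float(q) if q else 1.0))
    return pairs

print(parse_accept_language(MODIFIED_HEADER))
# [('en-US', 1.0), ('en', 0.9), ('pl', 0.8), ('de', 0.7)]
```

Every site receiving the second header learns two extra facts about the user that the first header doesn't reveal, which is exactly why it widens the fingerprinting surface.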
> Seriously, did none of Californian PMs hear about the concept of "being multilingual" and not needing to translate non-English content?
Sometimes it feels like Google keeps anyone with any kind of executive power hermetically sealed in some house borrowed from a reality TV show, where they're not allowed any contact with the outside world.
"Złomnik: Dodge Caravan Is a Large Wall Unit"
The car that changed the car industry forever. Something. Something. Fake American accent.
Nobody asked for it, you can't find discourse on the subject or even give it a name; no feature has ever made me feel more gaslit. And I pay for premium dammit.
> And I pay for premium dammit.
Sometimes it is better when some things are left out. /s
This one is the worst. I can't imagine the thought process behind it. How on earth was it seen as a wanted feature, and especially without a simple way to disable it? This feels like they used AI to code this AI thing.
> How on earth was it seen as a wanted feature, and especially without a simple way to disable it?
Were you asleep for the last 10 years? /s They have names for it: accessibility, User eXperience. Or as some other people put it: enshittification.
As with so many mistakes Google makes, this is letting the technical people front-run an interaction that should have been spearheaded by the social people.
From a technical standpoint, it's easy to think of AI-based cleanup as in the same category as "improving the compression algorithm" or "improving the throughput to the client": just a technically-mediated improvement. But people have a subjectively-different reaction between decreasing instances of bandwidth-related pixelation and making faces baby-smooth, and anyone on the community side of things could have told the team responsible (if they'd known about it).
Sometimes Google's tech-expert-driven-company approach has negative consequences.
Re: "without warning or permission". The YouTube Terms of Use require you to grant YouTube the (perpetual, worldwide, etc) right to prepare derivative works.
Reminder: there is no cloud, there are just other people's computers. And I for one support those other people's right to do what they want on their computers.
> YouTube made AI enhancements
This is a contradiction in terms.
What it boils down to is that "enhancement" is one more pylon to swerve around in the ongoing pursuit of reality. Given the overwhelming volume of media, it's easy enough to adjust one's aesthetic monitoring and scanning to simply ignore anything suspicious, or, if it gets too bad, back right off.
How about they turn off their recent asinine title translation feature? Now every creator has to opt out of it manually - and the users have no recourse short of browser extensions.
I suppose we should fire up those "AI" browsers and let them loose on YouTube in a while loop. They are just the right audience for "AI" enhanced content and YouTube's advertisers will be thrilled.
AI fearmongering probably produces a lot of clicks if upscaling gets labeled as "might bend reality". YT shouldn't be doing it without users' input, but the pearl-clutching is disproportionate.