piyuv 2 days ago

I’m a paying YouTube Premium subscriber. Last weekend, I wanted to download something so I could watch it on the train. The app got stuck at “waiting for download..” on my iPad. Same on my iPhone. Restarting did not help. I gave up after an hour (30 minutes of hands-on trying stuff, 30 minutes waiting for it to fix itself). I downloaded the video using yt-dlp, transferred it to my USB-C flash drive, and watched it from that.
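The yt-dlp fallback mentioned above is a single command; a minimal sketch, assuming yt-dlp is installed (the URL and output template are placeholders):

```shell
# Prefer an MP4 video stream plus M4A audio so the file plays natively
# on iOS devices; fall back to the best single MP4 otherwise.
yt-dlp -f "bv*[ext=mp4]+ba[ext=m4a]/b[ext=mp4]" \
       -o "%(title)s.%(ext)s" \
       "https://www.youtube.com/watch?v=VIDEO_ID"
```

From there it is an ordinary file you can copy to a USB-C drive and play in VLC.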

Awaiting their “Premium cannot be shared with people outside the household” policy so I can finally cancel. Family members make good use of ad-free.

  • beala 2 days ago

    I'm also a premium subscriber, and have struggled with the same issues on the iPad app. I try to keep some shows downloaded for my toddler, and the download feature never seems to work on the first try.

    I finally got so fed up that I bought a Samsung Galaxy Tab A7 off eBay for $50 and flashed it with LineageOS. I can now load whatever media I want onto the 1 TB SD card I've installed in it. The 5-year-old hardware plays videos just fine with the VLC app. And, as a bonus, I discovered that NewPipe, an alternative YouTube client I installed through the F-Droid store, is actually much more reliable at downloading videos than the official client. I was planning on using yt-dlp to load up the SD card, but now I don't even need to do that.

    • heavyset_go 2 days ago

      This is exactly why Google is clamping down on running your own choice of apps on Android, as well as pushing things like remote attestation on both phones and browsers.

      It's time to milk the entire userbase for every cent they can get out of them by any means necessary. The future is bleak.

      • DaiPlusPlus a day ago

        > This is exactly why Google is clamping down on running your own choice of apps on Android, as well as pushing things like remote attestation on both phones and browsers.

        Yes, Google is doing this; but I don't believe Google is doing it to squeeze an inconsequentially small boost in YT Premium subscriptions out of former account-sharers. I believe they're doing it because they want to demonstrate that YouTube is a "secure" platform, so that large, Hollywood-like production studios feel comfortable publishing first runs of new TV content directly to YouTube. Those production companies are famously paranoid, luddite, and comically ignorant of cryptography fundamentals: they believe DRM can simultaneously allow legal subscriber Alice to watch protected content and deny evil pirate Bob, when Alice and Bob are in reality the same person (it's you, me, us!).

        ..and if not Hollywood studios, then certainly the major sports leagues. [The NFL's lawyers seem like real fun at parties](https://publicknowledge.org/the-nfl-wants-you-to-think-these...).

        • heavyset_go 19 hours ago

          All they have to do is uptick their DRM scheme à la Netflix and Amazon, and YouTube would be indistinguishable from either platform in the eyes of rightsholders; studios have no problem releasing to either platform.

      • ivolimmen a day ago

        They will probably start requiring Secure Boot as well, so at some point running Linux will also pose a problem. It's not impossible, but the extra steps are a pain in the butt.

        • mindslight a day ago

          "Secure" boot is mostly a red herring, as there are lots of hardware options these days. Remote attestation takes away your ability to run libre Linux on any device if you want to interact with Google (or other surveillance company) network services. It completely repudiates the idea of the mutual-consent-based protocol.

          • heavyset_go 19 hours ago

            Secure Boot, or similar bootloader root-of-trust schemes, is the reason millions of phones and tablets will never run anything other than the manufacturer-provided OS, regardless of what the user wants to run.

            • mindslight 18 hours ago

              Yes, of course. I agree 100% that the designed-to-be-ewaste market is terrible. If I had my way, any manufacturer-privileging signing scheme would be illegal. So would the anticompetitive bundling of software with hardware devices, for that matter.

              My point was that the threat of prohibiting libre Linux isn't from all manufacturers deciding to lock out installing Linux on their devices, but rather from remote attestation making it so that Google (et al.) can force you to run a locked-down operating system as a technically enforced condition of interacting with their servers.

    • bigyabai 2 days ago

      NewPipe is incredible. If Google ever stops signing apps like that, I'll be switching to a Linux phone.

      • extraduder_ire 2 days ago

        What do you mean by signing? Application signing on android is done by the developer, with their own key. Or by fdroid, in the case of apps built by fdroid in the default repository.

        • heavyset_go 2 days ago

          Things have changed.

          Google is doing what Apple does and implementing Gatekeeper-like signature checks to ensure only apps by Google-approved developers can run on Android.

          Microsoft does something similar with Windows Defender: you need to buy a developer certificate that can be revoked at any time if you want to distribute your app and have users be able to run it.

          We're at a point where we need permission from trillion dollar companies to run the apps we want on the hardware we own.

          • 71bw 2 days ago

            >Microsoft does something similar with Windows Defender: you need to buy a developer certificate that can be revoked at any time if you want to distribute your app and have users be able to run it.

            Clarifying: you CAN run an unsigned app just fine on Windows. A lot of freeware/"indie" (for lack of a better term for small software) programs run just fine; the only thing that happens is that the user receives a warning they have to press "Yes" on (which 95% of people do, because That's The Windows UX[patent pending]).

          • extraduder_ire a day ago

            In that case they wouldn't be stopping anything, since they haven't started signing anything yet.

            I also haven't seen any specifics on how that system is supposed to work, but have seen a lot of speculation and (perhaps not unwarranted) fearmongering.

      • cranberryturkey 2 days ago

        As soon as they have a map app that works with CarPlay, I'm switching to a Linux phone.

        • Gabrys1 a day ago

          There are boxes you plug into your CarPlay-enabled car that run Android. Run Google Maps on that and you're golden. No need to carry/connect your phone to the car anymore.

          • non-nil a day ago

            I don't think you mean a wired-to-wireless dongle, which is all my searches turn up. Can you give me an example of such a device?

        • jraph 2 days ago

          Don't CarPlay and Android Auto rely on proprietary libraries? I doubt it will come to Linux phones unless they take off, or something like microG reimplements the proprietary parts.

        • bigyabai 2 days ago

          GNOME Maps is good enough for me. I don't know what Carplay is and at this point I'd rather not ask.

          • m4tthumphrey 2 days ago

            A framework for using your car's infotainment system as your screen/input device. Android has something similar called Android Auto.

      • 1vuio0pswjnm7 2 days ago

        "If Google ever stops signing apps like that, I'll be switching to a Linux phone."

        Is this another way of saying, "I will keep using it until it stops working"?

    • aeyes 2 days ago

      I use yt-dlp inside of a-shell on iOS, then play files using VLC.

      • fyrabanks 2 days ago

        i use this for things i repost on IG with commentary. i would rather not have a huge folder of downloads of random stuff i'm not even sure i want to revisit. (and i'm bad about clearing out space on my phone.)

      • stirlo 2 days ago

        Doesn’t solve VLC's suckiness on iOS, though. No PiP support, when it's been in iOS for years now…

        • averageRoyalty 2 days ago

          [flagged]

          • amatecha 2 days ago

            Yeah, because everyone who has user-experience feedback about a piece of software is magically a skilled programmer? The smug "PRs accepted" doesn't help anyone. Expressing hope for a feature at least shows potential implementers that the feature is wanted.

            • ethbr1 2 days ago

              ideas/assholes/everyone has one, etc

              • Dylan16807 a day ago

                There's a certain rudeness to imposing your own ideas, but that doesn't apply here. It's not their idea; it's a standard feature of video apps that's missing.

            • idiotsecant 2 days ago

              Expressing hope for a feature? Is that the tone you pulled from that post?

      • comprev 2 days ago

        Nice trick, I'll have to try this. Thanks!

      • busymom0 a day ago

        What shell app do you use?

    • nyarlathotep_ a day ago

      > I discovered that NewPipe, an alternative YouTube client I installed through the F-Droid store, is actually much more reliable at downloading videos than the official client.

      NewPipe is so good and so useful. It can even play 4K and watch livestreams now.

    • Gabrys1 a day ago

      Use Tubular, which is basically NewPipe with SponsorBlock. (It also has really nice Android Auto support, which I learned after a while.)

    • moralestapia 2 days ago

      Tangential.

      The TIDAL app is absolute trash; it has this same issue all the time. Not just that: if a download fails, it just hangs there and does not download the rest of the album/playlist.

      Also, why would you want to download things in the first place? To watch them offline, right? Well, guess what happens when you open the app without an internet connection ... it asks you to log in, so you cannot even access your music. $900k/year CTO genius work there.

      The only reason I haven't canceled is that I'm too lazy to reset my password in order to log in and cancel, lol. Might do it soon, though.

      • galaxy_gas 2 days ago

        When I tried it for a month, the worst part was that your entire download queue fails forever unless you manually remove hundreds of items one by one.

        There is no way to remove a stuck item if it's been pulled from the streaming library, or if you're in a country (while traveling, etc.) that doesn't have the rights to it. You simply cannot open the track to undownload it.

        • lawgimenez a day ago

          I also tried Tidal once with their trial; I tried playing some music videos and they were just straight-up blurry throughout. Not once did a music video play in HD.

      • xienze 2 days ago

        One thing I like about Tidal though: you can download everything, DRM-free, using tidal-ng.

      • OccamsMirror 2 days ago

        But with TIDAL I cut them some slack because they're not a multibillion dollar behemoth.

        I do wish they'd improve their CarPlay search results though. I hate asking for a well known song and getting some obscure EDM remix.

        • moralestapia a day ago

          Oh, but they are.

          It was founded by Jay-Z and then bought by the dopey Twitter guy.

    • martin82 a day ago

      Premium subscriber here.

      Download feature on iOS always works flawlessly whenever I need to hop on a long haul flight (several times a year).

  • femtozer 2 days ago

    I also pay for YouTube Premium, but I still use ReVanced on my smartphone just to disable auto-translation. It’s absolute madness that users can’t configure this in the official app.

    • the_af 2 days ago

      The auto-dub feature is madness. I first noticed it a couple of days ago. I'm crossing my fingers that few authors choose to enable it, and that YouTube makes it easy to disable as a default in settings (not currently possible; you have to do it as you watch, every time).

      I'm in a Spanish speaking country, but I want to watch English videos in English.

      Auto-generated subtitles for other languages are ok, but I want to listen to the original voices!

      • kevin_thibedeau 2 days ago

        It is enabled by default. One creator of English-language content had their video misclassified as Spanish, and people were getting a machine English dub on an English video. Getting support to fix it appears to be a nightmare.

        • the_af 2 days ago

          Wait, do you mean it's enabled by default but the author can disable it?

          If not, I wonder why I can still watch most videos in their original language (even though I'm in a Spanish-speaking country), and I only encountered this once so far.

          • extraduder_ire 2 days ago

            It's being gradually rolled out to channels. It will likely hit larger ones earlier, or follow some other metric YouTube is tracking.

          • kevin_thibedeau 2 days ago

            There is supposed to be a procedure to manually remove each language's dub, but it was broken (at least as of last year).

      • OJFord 2 days ago

        I don't want it dubbed whether I speak the language or not.

        • the_af 2 days ago

          Yes, this is what I mean. I NEVER want it dubbed.

          I'd rather use auto-generated subtitles (even if flawed), but I want to hear the original voices!

      • LtdJorge 2 days ago

        What about the auto translated titles? It also happens for chapters in the video...

        Same languages as you. It drives me nuts because the translations are almost always wrong.

        • Bootvis 2 days ago

          This “feature” amazes me. It is badly done and a bad idea. I have never watched a dubbed video, so why show me a translated title? It's also surprising; Google has plenty of ESL employees on staff.

          • bonoboTP 2 days ago

            There has to be some KPI tied to how often the AI model is used in production for providing translations on YouTube etc. Someone's promotion hangs on the translation feature being used as often as possible on YouTube.

      • pjc50 2 days ago

        Comments are quite good at pointing out when the creator has accidentally left it on (it is of course enabled by default and authors have to actively disable it).

      • zahlman 2 days ago

        > Auto-generated subtitles for other languages are ok, but I want to listen to the original voices!

        The first time I saw this feature, it was on a cover of some pop song in a foreign language. Why on Earth... ?

    • piyuv 2 days ago

      It’ll be fixed when some product manager can offer it as a promotion project

      • godelski 2 days ago

        I was talking to "my friend" about how I'm annoyed that my calendar duplicates holidays because it imports from multiple calendars, and he asked me "what value" would be provided if this were solved. Confused, I said it pushes things off the screen so I can't read my events. He clarified that he meant monetary value...

        We're both programmers, so we both know we're talking about a one-line regex...

        I know quite a number of people like this, and they're in high positions at big tech companies... It doesn't take a genius to figure out why everything has such shitty user experiences and why all the software is so poorly written. No one even seems to care about the actual product; they'll only correct you to tell you the product is the stock and the customer is the shareholder, not the user.

        We fucked up...

        • fragmede 2 days ago

          What did you tell him the monetary value was? Let's say there are like 5 holidays per year that result in days where some people have holidays but others do not, so business meetings scheduled that day get missed. Let's say you have 100 million people using this calendar software. Let's say 0.5 percent of those are in the executive class. Furthermore, let's say 10% of them miss a meeting due to this UI issue. That's 50,000 missed meetings. If we handwave that each of those meetings could have resulted in a $10 million deal for their company, this UI bug is costing customers half a trillion dollars!

          So, after estimating the number of ping pong balls that fit on a 747, the thing to do is to go write the regexp and put that on your promo packet. Half a trillion dollars!
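Tongue firmly in cheek, but the arithmetic above does check out; a quick sketch of the same made-up numbers:

```python
# Back-of-the-envelope math from the (deliberately absurd) estimate above.
users = 100_000_000         # calendar users
executives = users * 0.005  # 0.5% "executive class"
missed = executives * 0.10  # 10% of them miss a meeting over the UI bug
deal_value = 10_000_000     # hand-waved value of each missed meeting

total = missed * deal_value
print(f"{missed:,.0f} missed meetings, ${total:,.0f} lost")
# → 50,000 missed meetings, $500,000,000,000 lost
```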

          • bonoboTP 2 days ago

            Obviously they meant monetary value for the software company. How much more revenue will they make if they implement it?

          • godelski 2 days ago

            Sorry, let me clarify better (but it leads to similar issues)

            On my iPhone[0] calendar I have imported my Microsoft (work) and Google (personal) calendars, on top of the iPhone calendar itself. Taking last Labor Day as an example: if I don't disable the holiday calendars in Microsoft and Google, I have 3 entries for Labor Day. Holidays sit at the top of the day, so on my phone I basically won't see any other events. On my MacBook, with Calendar using 60% of my vertical space, I see "Labor Day +3 more". Full screen, I can see 4, maybe 5 entries....

            So I can save a large chunk of real estate by doing a simple fucking 1 line regex. At the same time I can effectively merge the calendars, so I get to see the holidays that are in one but not the others.
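In fairness to the "one-liner" framing, the narrow holiday case really is only a few lines; a minimal sketch with hypothetical event tuples (no real calendar API), keeping the first copy seen for each (title, date) pair:

```python
# Collapse duplicate all-day holiday entries imported from several
# calendars: keep the first event seen for each (title, date) key.
from datetime import date

events = [
    ("Labor Day", date(2024, 9, 2), "iPhone"),
    ("Labor Day", date(2024, 9, 2), "Google"),
    ("Labor Day", date(2024, 9, 2), "Microsoft"),
    ("Dentist",   date(2024, 9, 2), "Google"),
]

seen = set()
deduped = []
for title, day, source in events:
    if (title, day) not in seen:
        seen.add((title, day))
        deduped.append((title, day, source))

print([e[0] for e in deduped])  # → ['Labor Day', 'Dentist']
```

A display-layer dedupe like this never touches the underlying calendars, which is the point being made: it's a rendering fix, not a sync problem.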

            Effectively, I can ACTUALLY SEE WHAT I HAVE SCHEDULED FOR THE DAY[1]

            This, of course, also affects other things. Sometimes Google will add an event because I got an email later. Fuck, now I have dupes... The same thing happens with birthdays... Or you can hit that fun bug where, for some goddamn reason, you have duplicate contacts with the same name, phone number, and birthday; then you get triplicate calendar entries, and merging[2] results in quadruple entries!

            I have missed so many fucking things because I didn't see them on my calendar[3]. And someone has the audacity to ask how much money would be saved? We've spent longer discussing the problem than it would take to fix it! These aren't junior people I'm talking to (people who ask dumb things like "but I can't control or merge the other calendars", not recognizing it's a display issue), but the likes of an L6 at Amazon.[4]

              > So, after estimating the number of ping pong balls that fit on a 747, the thing to do is to go write the regexp and put that on your promo packet.
            
            I swear, the problem is no one realizes the point of leetcode questions was never to get the answers right, but to just have some problem for an interviewee to work on and see how they go about solving it. I'd rather an engineer get the wrong answer with a good thought process than get the right answer with shitty code that was obviously memorized. It's much harder to teach people how to think than it is to teach them some specific thing to remember.

            [0] I've almost immediately regretted this decision...

            [1] General frustration yelling, not yelling at you

            [2] No, the "find duplicate contacts" option does not in fact find duplicate contacts (what fucking data are they looking for? Because it sure as hell isn't identical names. Why isn't it even trying to do similar names?!)

            [3] I've also missed so many fucking things because that little scroll wheel wasn't completely finished with its animation and so saved the wrong day or switched AM to PM. I've missed so many things because I have so little control over notifications, and they disappear not when I dismiss them but when I merely unlock my goddamn phone. So it's not just that one-liner that needs to be done; it would do a lot, but these other one-liners would also greatly help.

            [4] Dude was complaining about candidates using GPT to do leetcode problems and how he had a hard time figuring out whether they were cheating. One of my many suggestions was "why not do in-person interviews?", which was answered with how expensive plane tickets are (his interviewees were local) and contradicted his earlier and later statements about how costly it is to hire/interview someone. I'm sorry, what percentage of six engineers' salaries for six one-hour interviews is a single round-trip ticket for a domestic flight? Or having someone... drive in...

        • indymike 2 days ago

          Hmm. If the annual subscription is $100 then the value of fixing this is $100.

          If it is free, then, what's the profile worth for a year... there's the value.

          User retention is a thing.

          • godelski 2 days ago

            I mean these numbers are just made up anyways, so why are engineers concerned with them? The idea of engineers needing to justify monetary value is just... ill conceived. They should be concerned with engineering problems. Let the engineering manager worry about the imaginary money numbers.

              > User retention is a thing.
            
            Problem is, no one needs to care about the product's quality if the product has the market cornered... It's even less of a concern if users don't know how to tell a good product from a bad one. Tech illiteracy leads directly to lemon markets.

            • TeMPOraL 2 days ago

              > I mean these numbers are just made up anyways, so why are engineers concerned with them?

              That's what they're directly or indirectly being graded on. Even if they don't have to show how their work impacted the company's bottom line, their managers or their managers' managers have to, and poop just rolls downhill.

              > The idea of engineers needing to justify monetary value is just... ill conceived. They should be concerned with engineering problems. Let the engineering manager worry about the imaginary money numbers.

              If only that were possible in this industry. If you're in a small company, you're wearing multiple hats anyway. If you're in a big corp, well, my wife hates that I see this in everything, but hidden inflation is a thing. As roles are eliminated (er, "streamlined"), everyone is forced to be responsible for things they're not really supposed to care about (my favorite example is filing expense reports).

              As you aptly put it upthread: we fucked up...

              • godelski 2 days ago

                  > That's what they're directly or indirectly being graded on.
                
                I think you'd agree that this should have never been the case. Engineering managers or project managers, sure. But engineers? That's just silly.

                We need firewalls. One group's primary concern needs to be on the product. Another group's primary concern needs to be on keeping the business alive and profitable.

                Too much of the former and you fail to prioritize the right work. Too much of the latter and you build vaporware. The downsides of biasing in one direction are certainly worse than in the other...

                  > my wife hates that I see this in everything, but - hidden inflation is a thing.
                
                Lol, your wife might have a field day with mine...

                I have a fundamental belief that there's far more complexity than we let on. That as we advance complexity only increases. What was once rounding errors end up becoming major roadblocks. It's the double edged nature of success: the more you improve the harder it is to improve. I truly will never understand how everyone (including niche experts) thinks things are so simple.

                But my partner is doing her PhD in economics, so she also thinks about opportunity costs quite a lot but I think she (and a lot of her friends) were quite unaware of how a lot of stuff operates in tech[0].

                Probably doesn't help that, as you know, I'm not great at brevity :/

                [0] My favorite thing to do at her department get-togethers (alcohol is always involved) is to introduce them to open source software. Quite a number of them find it difficult to understand how much of the world is based on this type of work and how little money it makes, not to mention the motivations behind it. The xz hack led to some interesting discussions...

                • elcritch 2 days ago

                  Perhaps it's because I've been in tech for so long, but I can't comprehend PhD candidates not knowing about open source software.

                  • godelski 2 days ago

                    PhD Economists, not PhD Computer Scientists.

                    Don't worry, people didn't go completely brain-dead, lol. And most of the economists know about it, but not the scale or how it fits into the larger ecosystem. They really just know it as "there are sometimes tools on GitHub".

          • kelnos a day ago

            Right, but fixing something is only worth $100 if they actually are losing paid users over that thing.

            I suspect they aren't losing users over duplicated holidays in the calendar.

          • reflexco 2 days ago

            User retention is not much of a thing anymore thanks to the stickiness afforded by integrated services and network effects.

            You can't just switch calendar/video streaming when everything else is integrated with it/everyone is exclusively posting on this network.

        • yacthing 2 days ago

          > We're both programmers, so we both know we're talking about a one-line regex...

          As a big tech programmer, it's almost never that simple...

          Small edge cases not covered by a one-line regex can mean big issues at scale, especially when we're talking about removing things from a calendar.

          • godelski 2 days ago

              > As a big tech programmer, it's almost never that simple...
            
            I'll be fair and admit that I'm being a bit facetious here. But let's also admit that if you are unable to dedupe entries in a calendar with identical names, then something is fundamentally broken.

            I did purposefully limit to holiday calendars as an example because this very narrow scope vastly simplifies the problem, yet is a real world example you yourself can verify.

            You're right that edge cases can add immense complexities but can you really think of a reason it should be difficult to dedupe an event with identical naming and identical time entries, especially with the strong hint that these are holidays? Let's even just limit ourselves to holidays that exclusively fall over full day periods (such as Labor Day).

            Do you really think we cannot write a quick solution that will cover these cases? The cases that dominate the problem? A solution whose failure mode results in the existing issue (having dupes)? Am I really missing edge cases which require significantly more complex solutions that would interfere with the handling of these exceptionally common cases? Because honestly, this appears like a standard table union problem. With the current result my choices are having triplicate entries, which has major consequences to usability, or the disabling of several calendars, which fails to generalize the problem and also results in missing some minor holidays. Honestly, the problem is so bad I'd be grateful even if I had to manually approve all such dedupes...

            If not, I'd really like to hear why, because it would mean I've greatly mischaracterized the problem and should not be using this example. Nor the example of a failure to FIND contacts with identical names, nicknames, phone numbers, and birthdays that differ only in an email address and a note entry. I have really been under the strong impression that the latter is a simple database query where we return any entries containing matches (the failure mode being presenting the user with too many matches rather than too few; we can sort by the number of duplicate fields and display matches in batches if necessary. A cumbersome solution is better than the current state of things...).
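The "simple database query" being described might look like this; a sketch over a hypothetical contacts table (a real contacts store is messier), grouping on the fields that should match:

```python
# Flag contacts that share a name, phone number, and birthday even when
# other fields (email, notes) differ -- the duplicate-contact case above.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute(
    "CREATE TABLE contacts (id INTEGER PRIMARY KEY, name TEXT, "
    "phone TEXT, birthday TEXT, email TEXT)"
)
db.executemany(
    "INSERT INTO contacts (name, phone, birthday, email) VALUES (?, ?, ?, ?)",
    [
        ("Alice", "555-0100", "1990-01-01", "alice@work.example"),
        ("Alice", "555-0100", "1990-01-01", "alice@home.example"),
        ("Bob",   "555-0199", "1985-06-15", "bob@example.com"),
    ],
)

dupes = db.execute(
    """SELECT name, phone, birthday, COUNT(*)
       FROM contacts
       GROUP BY name, phone, birthday
       HAVING COUNT(*) > 1"""
).fetchall()
print(dupes)  # → [('Alice', '555-0100', '1990-01-01', 2)]
```

The failure mode of grouping on too few fields is showing too many candidate pairs, which, as argued above, is still better than finding none.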

            I'm serious in my request but if I have made a gross mischaracterization then I think you'd understand how silly this all looks. I really do want to know because this is just baffling to me.

            If I truly am being an idiot, please, I encourage you to treat me like one. But don't make me take it on your word.

            • yacthing a day ago

              That's a lot of words, but I think it boils down to this: you're assuming that two calendar events with identical names and identical times should always be deduped.

              - Maybe you want to separately invite people to the same thing with different descriptions; now you're increasing the number of fields you have to compare.

              - Maybe a user creates one event that is simply a title and a time, and then wants to create a second one for another purpose. However, it keeps getting deduped and they don't know why. Now you have a user-education problem to solve.

              - Now you might think: well, just make it a toggle in the settings! Okay, but now you have to add a new setting, which expands the scope of the project. Do you make it opt-in or opt-out? If it's opt-in, what if no one uses it? Do you maintain the feature through a migration? If it's opt-out, you still have the above problems.

              I could go on. And this is mostly an exercise of not underestimating a "simple" change. Calendars (and anything involving time) in particular can get very complicated.

              • godelski a day ago

                  > will always have a desired behavior of being deduped.
                
                Okay, let's say people like repetition. Optional flag. Great, solved.

                  > Maybe you want to separately invite people to the same thing
                
                To a... holiday? Sorry, I already cannot invite people to a holiday in my existing calendar. I have no ability to edit the event. This capacity does not exist in my Apple Calendar nor Google Calendar and I'm not going to check that Outlook Calendar because the answer doesn't matter.

                  > Maybe a user creates one event that is simply a title and a time,
                
                Again, no need to auto-dedupe. Besides, handling collisions by requiring unique names is not that uncommon a thing.

                  > And this is mostly an exercise of not underestimating a "simple" change
                
                Except that to introduce your complexity you also had to increase the scope of the problem. Yeah, I'm all for recognizing complexity, but come on, man: we're talking about Apple, which makes you do it their way, by visiting 12 different menus, or the highway. We're talking about the same company that lacks the capacity to merge two contacts, and whose only option, "find duplicate contacts", is unable to find duplicates despite multiple matching fields.

                So what's your answer? Keep the bullshit and provide no option to merge or dedupe? Literally every problem you've brought up can be resolved by prompting the user with a merge request, OR by just giving them the ability to merge. Do you really think triplicate entries are a better outcome than letting a user select three entries, right-click, "merge entries"? Come on...

                • yacthing a day ago

                  > So what's your answer?

                  My answer is simply: It's not a 5 minute regex change.

                  I'm not even saying it shouldn't be prioritized or isn't worth the effort. Just that you should give the problem a bit more respect.

                  • godelski a day ago

                      > you should give the problem a bit more respect.
                    
                    The more generalized problem? Absolutely!

                    The very idealized trivial cases we're discussing and I've stressed we're discussing? I'm unconvinced.

      • ChocolateGod 2 days ago

        and removed when that person who is promoted doesn't work on it again

    • gradstudent 2 days ago

      I tried installing ReVanced recently. The configuration of the system (install a downloader/updater which then installs the app) was a huge turn-off. Why is it so complicated? Moreover, why not NewPipe or LibreTube?

      • SchemaLoad 2 days ago

        I haven't used it myself, but my understanding was that ReVanced patches the official YouTube app, while the other two are from-scratch reimplementations. You wouldn't be allowed to distribute the full patched version of ReVanced; you can only distribute the patch.

      • Yokolos 2 days ago

        Because no matter how much the YouTube app may suck in various ways, it's still vastly better than NewPipe and LibreTube in UX and much more enjoyable to use. So I'd rather use a patched version with the bad parts removed than something like NewPipe, which is just nowhere near as polished.

    • rodrigodlu 2 days ago

      Thanks for the recommendation.

      I was using the browser feature that disables the mobile mode on smartphones.

      The autodub feature should be disabled asap. Or at least there should be a way to disable it globally across all my devices.

    • ekianjo 2 days ago

      I wonder who got the idea at Youtube that forced auto-dub was a good idea. This shows how dysfunctional the management is. It's one thing to have assholes in your team, it's a different thing to not look at what they are doing.

  • masklinn 2 days ago

    Even more hilariously, if you upload to YouTube then try to download from your creator dashboard thing (e.g. because you were live-streaming and didn’t think to save a local copy or it impacts your machine too much) you get some shitty 720p render while ytdlp will get you the best quality available to clients.

    • hmstx 2 days ago

      Oh, that reminds me of a similar experience with Facebook video. Did a live DJ stream a few years ago but only recorded the audio locally at max quality. Back then, I think I already had to use the browser debugger to inspect the url for the 720p version of the video.

      When they recently insisted by email I download any videos before they sunset the feature, their option only gave me the SD version (and it took a while to perform the data export).

  • beerandt 2 days ago

    Canceled mine after ad-free stopped working on YouTube Kids of all things (on ShieldTV). Was probably a bug, but with practically no customer service options, no real solutions besides cancel.

    I was also a holdover from a paying Play Music subscriber, and this was shortly after the pita music switchover to youtube, so it was a last straw.

    • underlipton 2 days ago

      Halfway ready to fist-fight whichever exec drove the death of Play Music. It was a very, very good application, which could have continued to function as such when the platform ended, but they wouldn't even let us have that. I still have it installed and refuse to uninstall it.

  • meindnoch 2 days ago

    >Awaiting their “premium cannot be shared with people outside household” policy so I can finally cancel.

    Then I have good news for you! https://lifehacker.com/tech/youtube-family-premium-crackdown

    In fact, I've got an email from them about this already. My YT is still ad-free though, so not sure when it's going to kick in for real.

    • pixl97 2 days ago

      Ya I got this message when I was on vacation for a week. Seems a little messy on their part.

  • shantara 2 days ago

    I’m another Premium user in the same position. I use uBlock Origin and Sponsorblock on desktop and SmartTube on my TV. I pay for Premium to be able to share ad-free experience with my less technical family members, and to use their native iOS apps. If they really tighten the rules on Premium family sharing, I’ll drop the subscription in an instant.

    • al_borland 2 days ago

      I’m a Premium user and primarily watch on AppleTV. A little while ago they added a feature where if I press the button to skip ahead on the remote when a sponsor section starts, it skips over the whole thing. It skips over “commonly skipped” sections.

      While it doesn’t totally remove it, it lets me choose if I want to watch or not, and gets me past it in a single button press. All using the native app. I was surprised the first time this happened. I assume the creators hate it.

  • observationist 2 days ago

    ReVanced and other alternatives exist.

    So long as they are broadcasting media to the public without an explicit login system, so as to take advantage of public access for exposure, it will remain perfectly legitimate and ethical to access the content through whatever browser or software you want.

    After they blitzed me with ads and started arbitrarily changing features and degrading the experience, I stopped paying them and went for the free and adblocking clients and experience.

    I may get rid of phones from my life entirely if they follow through with blocking third party apps and locking things down.

    • mschuster91 2 days ago

      the problem is, you cannot be sure what Google does if they catch you violating their ToS. They have killed off entire google accounts for YT copyright strikes with no recourse.

      • realusername 2 days ago

        That's why I'm not using Google accounts for anything important, I left gmail in 2014 and I really advise everybody to do the same.

        You never know when the hammer can drop.

        • bornfreddy 2 days ago

          This. I simply don't understand why some people rely on Google given the risk level, impact and their no-recourse-except-maybe-public-shaming policy.

        • fragmede 2 days ago

          Google doesn't capriciously deprecate things in a short amount of time. When they sunset features, there's plenty of warning. They'll tell you that there's a hammer, that it's going to drop on you in 6 months, which is plenty of time for you to get out from under it. Which, I mean, I'd rather there not be a hammer, but it's not like they're gonna announce on a Friday that they're shutting down Google Keep on Monday and I need to wreck my whole weekend in order to save all my notes.

          • catgirlinspace 2 days ago

            The hammer isn't shutting down a service, it refers to your Google account getting banned for a violation or whatever reason they feel like.

          • realusername 2 days ago

            I'm not afraid of them deprecating Gmail, I'm afraid that I'll wake up one day and the account is banned without recourse.

        • Akronymus 2 days ago

          Yeah, same. I still have a gmail account that just forwards emails, and I update the email on services as they come up. Being on your own domain for email is just better. Though I do use a service provider to handle the mail server itself.

    • poulpy123 a day ago

      > ReVanced and other alternatives exist.

      until next year, when google will require real name and address for dev of side loaded apps

  • paxys 2 days ago

    I'm constantly baffled by how bad the implementation of YouTube Premium downloads is. Videos will buffer to 100% in a matter of seconds but get endlessly stuck when I hit the download button. Why? All the bytes are literally on my device already.

    • jerf 2 days ago

      The whole YouTube app is weird. Sometimes it lets you do 1.0x-2.0x. Sometimes it lets you range from .25x-4x. Sometimes it pops up a text selection box with every .05x option from .1 to 4.0. Sometimes it has a nicer UI with shortcut selections for common choices and a sliding bar for speed.

      It recently picked up a bug where if you're listening to a downloaded video, but turn the screen off and on again, the video playback seems to crash. A few months ago it became very, very slow at casting; all manipulations could take 30 seconds to propagate to the cast video (pause, changing videos, etc.), but they didn't usually get lost. (It would be less weird if they did just get lost sometimes.) You aggressively can't cast a short to a TV, in a way that clearly shows this is policy for some incomprehensible reason, but if you use the YouTube app directly on your set top box it'll happily play a short on your TV.

      Despite its claims in small text that downloads are good for a month without being rechecked, periodically it just loses track of all the downloads and has to redownload them. It also sometimes clearly tries to reauthorize downloads I made just 30 minutes ago when I'm in a no-Internet zone, defeating the entire purpose. When downloads are about 1/4th done it displays the text "ready to watch" on the download screen, but if you try to watch it it'll fail with "not yet fully downloaded".

      Feels like the app has passed the complexity threshold of what the team responsible for it can handle. Or possibly, too much AI code and not enough review and testing. And those don't have to be exclusive possibilities.

      • knome 2 days ago

        the control changes sound like you might have gotten caught in some kind of a-b testing

        • jerf 2 days ago

          They flop back and forth at a high frequency though. I can hit all three cases in five minutes and it's been like that for months.

          Also there is never a sensible reason to offer video speeds as a combo-box popup of all options from .05x to 4.00x. It's like three times the vertical size of my screen.

          • Barbing 2 days ago

            All that testing and they've never thought to offer a one-tap way to get back into speed control once I've adjusted the speed one or more times on the same video.

            Don’t get me started on the “highest quality” account setting absolutely never selecting 4K options when available. They simply have to try to save the bandwidth money by nesting quality options a couple taps away. (A userscript fixes this on desktop and even in Safari iOS/iPadOS, but I don’t deserve the quality I’m paying for if I use their native app.) [Privileged rant over!]

            • TeMPOraL 2 days ago

              This is becoming a common pattern everywhere now.

              Case in point (and sorry for bringing up this topic), LLM providers seem to be doubling down on automatic model selection, and marketing it as a feature that improves experience and response quality for the users, even though it's a blatant attempt to cut serving costs down by tricking users (or taking the choice away) into querying a cheaper, weaker model. It's obviously not what users want - in this space, even more than in video streaming, in 90%+ of end-user cases, what the user wants is the best SOTA model available.

              At least with YouTube, I recall them being up front about this in the past, early in the COVID-19 pandemic - IIRC the app itself explained in the UI that the default quality is being lowered to conserve bandwidth that suddenly got much more scarce.

    • lukan 2 days ago

      Because they want to control the bytes on your devices.

      Giving you the bytes would be easy, the hard part is preventing the free flow of information. And those bugs are the side effects.

  • some-guy 2 days ago

    Also a paying YT Premium subscriber. I live in a rural part of CA where there isn't much 5G reception. For extremely long drives in my minivan, I allow my toddler to watch Ms. Rachel on the screen via an HDMI port input from my iPhone. YouTube Premium videos have DRM that disallows downloads from playing over HDMI, so I had to do what you did and add them as files locally to VLC and play them from there.

  • stronglikedan 2 days ago

    > Awaiting their “premium cannot be shared with people outside household” policy

    I recently got paused for "watching on another device" when I wasn't. I don't think that policy you mention is too far off.

  • hysan 2 days ago

    I also have YouTube premium and watch mostly on my iPad and TV. YouTube constantly logs me out at least once per day. I notice because I’ll randomly start seeing ads again (I open videos from my rss reader, never their site). This never happened when I wasn’t on premium. I don’t get what they’re doing, but my impression after almost a year is that it’s only slightly less annoying than getting ads. At this point, I might as well not renew and just use ad block.

  • ac29 2 days ago

    > Awaiting their “premium cannot be shared with people outside household” policy so I can finally cancel

    That's been a policy for a while, the sign up page prominently says "Plan members must be in the same household".

    No idea if it's enforced, though.

    • phkahler 2 days ago

      I have 2 homes. Every time I "go up north" I have to switch my Netflix household and then back again when I return. This sounds like that won't even be possible.

      • ewoodrich a day ago

        If it works like Youtube TV you are given the option to switch household locations when you get the nag screen.

  • fragmede 2 days ago

    I'll admit to using yt-dlp to get copies of videos I wish to have a local copy of, which can't be taken away from me by somebody else, but I pay for premium because that pays for content I watch. If you don't pay for content, where's it going to come from? Patreon only works for super dedicated stars with a huge following.

    • Marsymars 2 days ago

      I can’t speak for everyone, but I don’t watch content that needs to be “paid for” in that way. e.g. The last several videos I downloaded for archiving were install instructions for car parts that were uploaded by the part manufacturer. (And that aren’t readily available through any other channel.)

  • dostick 2 days ago

    YouTube’s “Download” is not really a download, it’s actually “cache offline” within YouTube app.

    • johnisgood 2 days ago

      Lmao, people really should stop giving them money.

  • icelancer 2 days ago

    YouTube premium "download" is also just completely fake. Downloaded where? What file can I copy?

    • TeMPOraL 2 days ago

      Files? What era do you hail from? Prehistory?

      There are no files anymore. I mean, there technically are, but copyright industry doesn't want you to look at them without authorization, security people don't want you to look at them at all, and UX experts think it's a bad idea for you to even know such thing as "files" exists.

      Share and enjoy. Like and subscribe. The world is just apps all the way down.

      • SchemaLoad 2 days ago

        Ironically a very large chunk of youtube creators themselves need the ability to download real files so they can use segments in their own videos.

        TikTok is very strange in that it actually does let you download real files.

  • hamandcheese 2 days ago

    I have the opposite problem... frequently streaming a video gets stuck buffering even on my gigabit fiber connection, but I can download a full max quality version in a matter of seconds.

  • N0isRESFe8GXmqR 2 days ago

    I run into that download issue all the time. I need to pause downloading each video. Force close the youtube app. Then unpause the downloads to get them downloading again. It has been happening for years and is still unfixed.

  • loganlinn 2 days ago

    I had a similar experience on YouTube Music. I discovered the message was misleading and I just had to enable downloads when not on WiFi

  • yolo_420 2 days ago

    I am a premium subscriber so I can download via yt-dlp in peace without any errors or warnings.

    We are not the same.

    • a96 12 hours ago

      For now.

  • maplethorpe 2 days ago

    What video did you watch?

    • piyuv 2 days ago

      Nintendo Direct. Download issue persisted with all videos though

  • cactusplant7374 2 days ago

    Why not use Brave browser and their playlist feature for offline downloads?

    • godelski 2 days ago

        > Why not use Brave browser
      
      Why not use a non-chromium browser and help prevent Google from having larger control over the Internet?

      We still need competition in the browser space or Google gets to have a disproportionate say in how the Internet is structured. I promise you, Firefox and Safari aren't that bad. Maybe Firefox is a little different but I doubt it's meaningfully different for most people [0]. So at least get your non techie family and friends onto them and install an ad blocker while you're at it.

      [0] the fact that you're an individual may mean you're not like most people. You being different doesn't invalidate the claim.

      • cactusplant7374 2 days ago

        Firefox is in decline and Brave will soon overtake it. Brave blocks ads natively. There is a lot of advantage in that but we also may eventually have a new model that funds the internet. And I don't see Firefox or Safari disrupting advertising.

        https://data.firefox.com/dashboard/user-activity

        https://brave.com/transparency/

        • jazzyjackson 2 days ago

          I'll just throw out there that zen-browser.app is a gentle fork of Firefox to make it look like the (abandoned, chromium) arc browser, it's great.

        • bl4kers 2 days ago

          That Brave link shows growth has flatlined

        • godelski 2 days ago

          I think you've missed the point entirely.

          The point is that if everyone is using a single browser (not just Chrome/Chromium) then that actor gets disproportionate control over the internet. That's not good for anyone.

          The specific gripe with Chromium is that _Google_ gets that say, and I think they are less trustworthy than other actors. I'm not asking anyone to trust Mozilla, but anyone suggesting Mozilla is less trustworthy than Google probably has a bridge to sell you. Remember that being Chromium-based still means that Brave is reliant upon Google. That leads to things like this[0,1]. Remember, the Chromium source code is quite large, which is why things like [0] aren't so easily found. I also want to quote a line from [0.1]:

            This is interesting because it is a clear violation of the idea that browser vendors should not give preference to their websites over anyone elses.
          
          That wouldn't be the first time people have found Google giving preference to their own browser, and it is pretty well known this happens with YouTube. Do we really want ANY company having such control over the internet? Do we really want Google to?

            > https://data.firefox.com/dashboard/user-activity
            > https://brave.com/transparency/
          
          I'm not sure what you're trying to tell me here. That Brave has 64% of the number of users as Firefox? That Brave users really like Gemini, Coinbase, and Uphold? That Brave users are linking their Brave account to sites like Twitter, YouTube, Reddit, GitHub, Vimeo, and Twitch? That Brave Ads is tracking via the state level? Honestly I have more questions looking at the Brave "transparency" report, as it seems to have more information about users than Firefox...

          If you're extra concerned about privacy and that's your reason for Brave, then may I suggest the Mullvad browser[2]? It is a fork of Firefox and they work with Tor to minimize tracking and fingerprinting. You get your security, privacy, and out from under the boot of Google.

          [0] https://github.com/brave/brave-browser/issues/39660

          [0.1] https://simonwillison.net/2024/Jul/9/hangout_servicesthunkjs...

          [1] https://www.bleepingcomputer.com/news/google/google-to-kill-...

          [2] https://mullvad.net/en/browser

          • Awesomedonut 2 days ago

            I'm a loyal Brave user and I feel my loyalties being swayed right now...

            • godelski 2 days ago

              I really do have a lot of respect for Brave and what they're trying to do, don't get me wrong. I think they are trying to address a meaningful problem and I do not think their solution is ill-conceived. I want to be clear about that.

              But I do think it is a far bigger problem that we let a single actor have so much control over the fundamental structure of the internet. The problem isn't Brave so much as it is Chromium. But criticizing Brave (and Opera, Edge, etc) is a consequence of this.

              You must ask yourself which is the bigger concern?

                - If you believe the major concern is an ad-based ecosystem on the internet, then choose Brave. Especially if you believe data-leakage "features" implemented by Google are unlikely to be carried along by downstream projects.

                - If you believe the major concern is the number one ad-based company, whose entire business is built on the erosion of data privacy, then choose *literally anything* that is not Chromium based.
              
              I think the latter is far more damning and honestly is an upstream issue to the concern Brave is trying to address. That's why I say I would encourage Brave to move away from Chromium. I actually would encourage them to develop their own engine since I think 3 choices is far from sufficient, but I'll take a Gecko or WebKit version as a major victory.

              But this is my opinion. There is no right answer here. It has to come down to you.

              If you agree with me then I'd encourage you to look at Firefox. It is good by default, and with a few easy-to-find options you can have strong privacy; installing uBlock is a trivial task. If you are more privacy conscious, I encourage you to look at the Mullvad Browser, which is a Firefox fork with strong privacy defaults (maintained by the Tor and Mullvad teams). If you want a WebKit browser, check out Orion. I use this on both my iPhone and iPad (my Macbook and linux desktop are still Firefox), as Orion allows add-ons, so you can get ad blocking on your phone (when I was on Android I just used Firefox mobile, which supports extensions). If you really want to encourage a 4th player, I believe Ladybird is the popular kid on the block, but I honestly don't know too much, and last I knew it was not quite to a stable state.

              You don't have to agree with me, but I just want to make people aware that they do have a say in the future. There's no solution that doesn't have drawbacks, but I think on a techie forum we should be able to have a more complex discussion and recognize that there are consequences to our choices. I think it is also important to recognize that our choices multiply, as we tend to be the ones who inform our non-techie peers. If you've ever installed software for a friend or family member, then realize how our choices multiply.

              I'd also encourage you to promote more conversations among techie groups so we can hear the diverse set of opinions and concerns. It's a complex world and it is easy to miss something critical.

              • Awesomedonut 2 days ago

                Thanks for the detailed response! I got lots of food for thought

          • cactusplant7374 2 days ago

            > I'm not sure what you're trying to tell me here.

            I'm telling you that Firefox is going to be out of business soon because users favor ad blocking and blocking trackers. That is the trend. Firefox isn't growing anymore.

            > Honestly I have more questions looking at the Brave "transparency" report, as it seems to have more information about users than Firefox...

            Metrics can be transmitted without revealing the user. This is well known.

            You can't suggest anything. I am done with this conversation.

            • godelski 2 days ago

                > I'm telling you that Firefox is going to be out of business soon
              
              Do you not think everyone saying Firefox is going to be out of business soon plays a role in this?

              Regardless, I think you've ignored the root of my argument. I'm not trying to be a Firefox fanboy here but it's not like there's many options. The playing field is Chrome, Firefox, Safari. So only one of these is not "big tech".

                > Metrics can be transmitted without revealing the user. This is well known.
              
              This is not well known and I think you've kinda "told on yourself" here. It is fairly well known in the privacy community that it is difficult to transmit user data without accidentally revealing other information. Here's a rather famous example[0,1]. I'd encourage you to read it and think carefully about how deanonymization might be possible after just reading a description of the datasets they deanonymize.

                > You can't suggest anything. I am done with this conversation.
              
              If you wish to disengage then that is your choice. I am really trying to engage with you in good faith here. I'm not even really attacking Brave, as my critique is of the Chromium ecosystem. I think if you look at my points again you can see how they would dramatically shift if Brave were based on Gecko or WebKit. Honestly, I would be encouraging Brave usage were it under one of those umbrellas. Or even better, if it had its own engine! Because my point is about monopolization.

              [0] https://courses.csail.mit.edu/6.857/2018/project/Archie-Gers...

              [1] https://arxiv.org/abs/cs/0610105

    • piyuv 2 days ago

      I’m not using brave browser so did not know it could download videos

      • QuantumNomad_ 2 days ago

        I’m using Brave, but didn’t know either :p

    • a96 12 hours ago

      Brave is a serial scam company.

    • pcdoodle 2 days ago

      Nice, didn't know Brave could do that.

  • gjsman-1000 2 days ago

    For anyone here who runs a startup, I propose two lifestyle benefits you should add:

    1. Unlimited YouTube Premium

    2. Unlimited drink reimbursement (coffee, tea, smoothies, whatever)

    The psychological sense of loss from those two things would be larger than any 5% raise.

    • edoceo 2 days ago

      I don't like that math; I'd rather have the 5% than $8k in perks.

      • gjsman-1000 2 days ago

        The pitch is for the employer: This would likely be both cheaper and simultaneously stickier.

    • whatshisface 2 days ago

      I personally wouldn't want to hire a startup employee who couldn't figure out how to install a browser extension. ;-)

      • gjsman-1000 2 days ago

        You're assuming startups are all tech. At my job, tech is not even 1/3 of employees.

        • dotancohen 2 days ago

          Browser extensions are not meant for the technical crowd, they're meant to be installed by all users of the browser. If someone is not bright enough to figure out how to install a browser extension, or change a lightbulb, or refill the ice tray, tech worker or not I don't need them in my startup.

      • posterguy 2 days ago

        ah yes, let me just install a browser extension on the kids ipad

        • whatshisface 2 days ago

          FYI for next time you're buying: you can install Firefox on Android, although this is perhaps threatened by Google's planned changes to users' ability to install software.

        • godelski 2 days ago

          Here, this will help with that

          https://kagi.com/orion/

          • gonzalohm a day ago

            Only works in iOS

            • godelski a day ago

              I think you need to read the comment I was responding to a bit more carefully. (Hint, they would not have made that comment had their kid had an Android tablet)

            • socksy a day ago

              But you don't need it for Android... Can happily install uBlock Origin on bog standard Firefox there.

est 2 days ago

I really appreciate the engineering effort that went into this "JavaScript interpreter"

https://github.com/yt-dlp/yt-dlp/blob/2025.09.23/yt_dlp/jsin...

  • sirbranedamuj 2 days ago

    This is the buried lede in this announcement for me - I had no idea they were already going to such lengths. It's really impressive!

  • Aurornis 2 days ago

    This is perfect for the problem they were solving. Really cool that they took it this far to avoid adding further overhead.

  • XnoiVeX 2 days ago

    It's a subset of Javascript. HN discussion here https://news.ycombinator.com/item?id=32794081

    • extraduder_ire 2 days ago

      This description reminds me of a programming-language-wars excerpt I saw somewhere years ago about how C was obviously superior to Pascal, because with enough preprocessor macros you can compile any Pascal program in C. Followed by some hideous and humorous examples.

    • ASalazarMX 2 days ago

      This is amazing, an old school but effective approach in this modern age. I was afraid they were going to embed a browser.

  • codedokode 2 days ago

    I decided just to look at the code for a moment and discovered ChainMap in Python.
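
    For anyone unfamiliar: ChainMap layers several dicts into a single view, searching them in order on lookup while sending writes to the first map, which is exactly the shape of nested scopes in an interpreter. A minimal sketch (my own illustration, not yt-dlp's actual code):

```python
from collections import ChainMap

# An outer "global" scope with an empty "local" scope layered on top.
globals_ = {'x': 1, 'y': 2}
local_scope = ChainMap({}, globals_)

print(local_scope['y'])  # lookup falls through to the global map: 2

local_scope['x'] = 10    # writes land only in the first (local) map
print(local_scope['x'])  # 10, shadowing the global
print(globals_['x'])     # 1, the outer scope is untouched
```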

    • ddtaylor 2 days ago

      This is excellent for some of my use cases. I want to have my AI agents "fork" their context in some ways; this could be useful for that instead of juggling a tree of dictionaries.

    • bjackman a day ago

      Ha, that's cool. I have implemented a crappy and probably broken version of this type before. Next time I won't have to!

  • LordShredda 2 days ago

    I'm on mobile, this seems like an actual js interpreter that only does objects and arithmetic. Impressive that it went that far

  • supriyo-biswas 2 days ago

    Heh, now I wonder how much JavaScript it actually interprets and, given that it's < 1000 lines, whether it could be used in an introductory course on compilers.

    • kccqzy 2 days ago

      Obviously not. An introductory course would introduce concepts like lexers, parsers, AST, etc, instead of working on strings.

      Here are lines 431 through 433:

          if expr.startswith('new '):
              obj = expr[4:]
              if obj.startswith('Date('):

    • Too 2 days ago

      There’s a famous presentation by David Beazley where he implements a WASM interpreter in Python in under an hour. Highly recommended.

      • bangaladore 2 days ago

        Bytecode interpreters are quite simple compared to the actual lexer / parser.

  • jokoon 2 days ago

    Wait I thought they were running an entire browser engine

    • ddtaylor 2 days ago

      Over time they probably will require that. I believe YT still allows most of these things because of "legacy" apps, which they have been killing off bit by bit. I'm not sure if anyone is cataloging the oldest supported app, but most things like using YT from a slightly older game console don't work anymore.

      Basically, any publicly known method that can sip video content with the least work and authentication will be a common point of attack for this.

  • stevage 2 days ago

    heh, that's pretty cool.

  • jollyllama 2 days ago

    I wonder how long until it gets split off into its own project. For the time being, it could do with a lot more documentation. At least they've got some tests for it!

    • CaptainOfCoit 2 days ago

      > I wonder how long until it gets split off into its own project

      The submission is literally about them moving away from it in favor of Deno, so I think "never" probably gets pretty close.

      • jollyllama 2 days ago

        Thanks for explaining - I didn't understand that this is what was being replaced.

    • zahlman 2 days ago

      Aside from the fact that the point of the announcement is that they're dropping it entirely, this "interpreter" is a hack that definitely is nowhere near capable of interpreting arbitrary JS. For example, the only use of `new` it handles is for Date objects, which it does by balancing parens to deduce the arguments for the call, then treating the entire group of arguments as a string and applying regexes to that.
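
      The paren-balancing part is simple enough to sketch. Something like the following (a hypothetical illustration of the technique, not yt-dlp's actual code) splits an argument string on top-level commas by tracking nesting depth:

```python
def split_args(s):
    """Split an argument string on top-level commas by tracking nesting depth."""
    depth, start, args = 0, 0, []
    for i, ch in enumerate(s):
        if ch in '([{':
            depth += 1
        elif ch in ')]}':
            depth -= 1
        elif ch == ',' and depth == 0:
            args.append(s[start:i].strip())
            start = i + 1
    args.append(s[start:].strip())
    return args

print(split_args("2024, Math.min(1, 2), [3, 4]"))
# ['2024', 'Math.min(1, 2)', '[3, 4]']
```

      The hairy part is everything this ignores: commas inside string and regex literals, escapes, comments, and so on, which is exactly why a real course reaches for a lexer instead.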

ddtaylor 2 days ago

When I first got together with my wife I seemed a bit crazier than I am, because I have been a media hoarder for 30+ years. I don't have any VHS, DVDs, etc. lying around because I only keep digital copies, but I have pretty decent archives. Nothing important really, just normal stuff and some rare or obscure stuff that disappears over time.

My wife was interested in the idea that I was running "Netflix from home" and enjoyed the lack of ads or BS when we watched any content. I never really thought I would be an "example" or anything like that - I fully expected everyone else to embrace streaming for the rest of time because I didn't think those companies would make so many mistakes. I've been telling people for the last decade, "That's awesome, I watch using my own thing - what shows are your favorites? I want to make sure I have them."

In the last 2 years more family members and friends have requested access to my Jellyfin and asked me to set up a similar system with less storage underneath their TV in the living room or in a closet.

Recently-ish we have expanded our Jellyfin to have some YouTube content on it. Each channel just gets a directory and gets this command run:

    yt-dlp "$CHANNEL_URL" \
      --download-archive "downloaded.txt" \
      --playlist-end 10 \
      --match-filters "live_status = 'not_live' & webpage_url!*='/shorts/' & original_url!*='/shorts/'" \
      -f "bv*[height<=720]+ba/b[height<=720]" \
      --merge-output-format mp4 \
      -o "%(upload_date>%Y-%m-%d)s - %(title)s.%(ext)s"
It actually fails to do what I want here and download h264 content so I have it re-encoded, since I keep my media library in h264 until the majority of my devices support h265, etc. None of that really matters because these YouTube videos come in AV1, and none of my smart TVs support that yet AFAIK.

  • int_19h 2 days ago

    I have set up a Plex server and started ripping shows from various streaming providers (using StreamFab mostly) specifically because my wife got frustrated with 1) ads starting to appear even on paid plans and 2) never-ending game of musical chairs where shows move from provider to provider, requiring you to maintain several subscriptions to continue watching. She's not a techie at all, she's just pissed off, and I know she's not the only one.

    Let's make sure that when all those people come looking for solutions, they'll find ones that are easy to set up and mostly "just work", at least to the extent this can be done given that content providers are always going to be hostile.

  • entropie 2 days ago

    First I ran a simple script; now I use ytdltt [1] to let my mother download YT videos via a Telegram bot (in her case it's more like audiobooks) and sort them into directories so she can access/download them via Jellyfin. She's at around 1.2TB of audiobooks in like 3 years.

    1: https://github.com/entropie/ytdltt

  • axiolite 2 days ago

    > It actually fails to do what I want here and download h264 content so I have it re-encoded

    I struggled with that myself (yt-dlp documentation could use some work). What's currently working for me is:

        yt-dlp -f "bestvideo[width<800][vcodec~='^(avc|h264)']+bestaudio[acodec~='^((mp|aa))']"
  • ticoombs 2 days ago
    • meeb 2 days ago

      Thanks for the mention :)

      • toomuchtodo 2 days ago

        Do you have any plans for the target to be S3 compatible in addition to a posix file system? If I wanted to sync a YouTube channel to a Backblaze B2 bucket or Minio, for example.

        • meeb a day ago

          No plans for that currently, you're welcome to open an issue on GitHub though and I'll investigate if it's sensible to implement.

  • werid 2 days ago

    use the new preset feature to get h264: -t mp4

    you can also skip the match filters by running the /videos URL instead of the main channel url.

    if you want 720p, use -S res:720

  • herzzolf 2 days ago

    I recently discovered Pinchflat [1], which seems like an *arr-inspired web alternative, and works great for me - I just need to add the videos I want downloaded to a playlist and it picks them up. Also uses yt-dlp under the hood.

    1. https://github.com/kieraneglin/pinchflat

  • binaryturtle 2 days ago

    Tried this: "yt-dlp -f 'bestvideo*[ext=mp4]+bestaudio[ext=m4a]/best[ext=mp4]/best' -S vcodec:h264 -other_options …" ? I'm still getting proper h264 with that (my Raspberry Pi 3 only wants a proper codec too… none of that mushy new-era codec stuff. ;) )

  • rasz 2 days ago

    >fails to do what I want here and download h264 content

    you are missing [vcodec^=avc1] ?

  • getcrunk 2 days ago

    Do you have to put in cookies to avoid the sign in/bot prompt? Do you use a vpn to download yt videos?

wraptile 2 days ago

Days of just getting data off the web are coming to an end as everything requires a full browser running thousands of lines of obfuscated js code now. So instead of a website giving me that 1kb json that could be cached now I start a full browser stack and transmit 10 megabytes through 100 requests, messing up your analytics and security profile and everyone's a loser. Yay.

  • nananana9 2 days ago

    On the bright side, that opens an opportunity for 10,000 companies whose only activity is scraping 10MB worth of garbage and providing a sane API for it.

    Luckily all that is becoming a non-issue, as most content on these websites isn't worth scraping anymore.

    • judge2020 2 days ago

      *and whose only customers are using it for AI training

      • TeMPOraL 2 days ago

        They can afford it because the market rightfully bets on such trained models being more useful than upstream sources.

        In fact, at this point in time (it won't last), one of the most useful applications of LLMs is to have them deal with all the user-hostile crap that's bulk of the web today, so you don't have to suffer through it yourself. It's also the easiest way to get any kind of software interoperability at the moment (this will definitely not last long).

  • daemin 2 days ago

    This 1kb of json still sounds like a modern thing, where you need to download many MB of JavaScript code to execute and display the 1kb of json data.

    What you want is to just download the 10-20kb html file, maybe a corresponding css file, and any images referenced by the html. Then if you want the video you just get the video file direct.

    Simple and effective, unless you have something to sell.

    • pjc50 2 days ago

      The main reason for doing video through JS in the first place, other than obfuscation, is variable bitrate support. Oddly enough, some TVs support variable bitrate HLS directly, as do Apple devices I believe, but regular browsers do not. See https://github.com/video-dev/hls.js/
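
      To illustrate the variable-bitrate part: an HLS master playlist just advertises the available renditions, and the player (hls.js in browsers) switches between them per segment. A minimal hypothetical example (the paths and codec strings here are invented):

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,RESOLUTION=640x360,CODECS="avc1.4d401e,mp4a.40.2"
360p/index.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
720p/index.m3u8
```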

      > unless you have something to sell

      Video hosting and its moderation is not cheap, sadly. Which is why we don't see many competitors.

      • a96 12 hours ago

        P2P services proved long ago that hosting is not a problem. Politics is a problem.

        What we don't see is more web video services and services that successfully trick varied content creators to upload regularly to their platform.

        https://en.wikipedia.org/wiki/PeerTube also must be mentioned here.

      • Zopieux 2 days ago

        And by "not many" you really mean zero competitors.

        (before you ask: Vimeo is getting sold to an enshitification company)

        • axiolite 2 days ago

          Those "zero" include: Rumble, Odysee, Dailymotion, Twitch, Facebook watch... etc.

          And a decent list here: https://ideaexplainers.com/video-sites-like-youtube/

          • pjc50 2 days ago

            Twitch does live streaming but recently severely limited the extent of free hosting for archived content.

            Not actually heard of the first two, what's their USP?

          • treyd 2 days ago

            Rumble and Odysee are populated with crazy ragebaiting rightwingers, conspiracy theorists, and pseudo-libertarians.

            Twitch has the issues the other commenter described, and both Twitch and Facebook are owned by billionaires who are actively collaborating with the current authoritarian regime. Facebook in particular is a risk space for actually exercising free speech and giving coherent critiques of authority.

            Dailymotion is... maybe okay? As a company it seems like it's on life support. There's a "missing middle" between the corporate highly-produced content that's distributed across all platforms and being a long-tail dumping ground. I did find things like university lectures there, but there aren't creators actually trying to produce content for Dailymotion like there are on YouTube.

            • axiolite a day ago

              > Rumble and Odysee are populated with crazy ragebaiting rightwingers, conspiracy theorists, and pseudo-libertarians.

              So, just like Youtube, then?

              • treyd 5 hours ago

                Proportionally speaking, there's a much higher concentration.

  • xnx 2 days ago

    It's an arms race. Websites have become stupidly/unnecessarily/hostilely complicated, but AI/LLMs have made it possible (though more expensive) to get whatever useful information exists out of them.

    Soon, LLMs will be able to complete any Captcha a human can within reasonable time. When that happens, the "analog hole" may be open permanently. If you can point a camera and a microphone at it, the AI will be able to make better sense of it than a person.

    • Gigachad 2 days ago

      The future will just be every web session gets tied to a real ID and if the service detects you as a bot you just get blocked by ID.

      • wraptile 2 days ago

        > The future will just be every web session gets tied to a real ID

        This seems like an awful future. We already had this in the form of limited IPv4 addresses, where each IP is basically an identity. People started buying up IP addresses and selling them as proxies. So any other form of ID would suffer the same fate unless enforced at the government level.

        Worst case scenario, we have 10,000 people sitting in front of screens clicking page links, because hiring someone to use their "government id" to mindlessly browse the web is the only way to get data off the public web. That's not the future we should want.

      • xnx 2 days ago

        I definitely agree logins will be required for many more sites, but how would the site be able to distinguish humans from bots controlling the browser? Captcha is almost obsolete. ARC AGI is too cumbersome for verifying every time.

        • Gigachad 2 days ago

          Small scale usage at the same level as a normal person would probably go under the radar, but if you try scraping, running multiple accounts or posting any more than you would a normal user it’ll be picked up once they can link all actions to a real person.

          If you are just asking Siri to load a page for you, that probably gets tolerated. Maybe very sensitive sites will go verified-mobile-platform-only and Apple/Google will provide some kind of AI-free compute environment, like how they can block screen recording or custom ROMs today.

          Yes it is 100% the death of the free and open computing environment. But captchas are no longer going to be sufficient. It seems realistic to block bots if you are willing to fully lock down everything.

          • xnx 2 days ago

            The next frontier is entire fake personas to login and scrape sites ... which is why government/real-world verification will be required soon.

    • goku12 2 days ago

      Please remember that an LLM accessing any website isn't the problem here. It's the scraping bots that saturate the server bandwidth (a DoS attack of sorts) to collect data to train the LLMs with. An LLM solving a captcha or an Anubis style proof of work problem isn't a big concern here, because the worst they're going to do with the collected data is to cache them for later analysis and reporting. Unlike the crawlers, LLMs don't have any incentives in sucking up huge amounts of data like a giant vacuum cleaner.
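
      For context, an Anubis-style proof of work is essentially hashcash: the client must find a nonce such that hashing (challenge + nonce) clears a difficulty target - cheap for one human page view, expensive at crawler scale. A minimal sketch (the function names and parameters below are mine, not Anubis's actual protocol):

```python
import hashlib

def solve_pow(challenge: str, difficulty_bits: int) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce), read as a
    256-bit integer, falls below the difficulty target."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify_pow(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    # Verification is a single hash, so the server's cost is negligible.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_pow("example-challenge", 12)  # ~4096 hashes on average
print(verify_pow("example-challenge", nonce, 12))  # True
```

      The asymmetry (thousands of hashes to solve, one hash to verify) is why this deters bulk crawlers without a login wall.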

      • TeMPOraL 2 days ago

        Scraping was a thing before LLMs; there's a whole separate arms race around this for regular competition and "industrial espionage" reasons. I'm not really sure why model training would become a noticeable fraction of scraping activity - there are only a few players on the planet that can afford to train decent LLMs in the first place, and they're not going to re-scrape the content they already have ad infinitum.

        • int_19h 2 days ago

          > they're not going to re-scrape the content they already have

          That's true for static content, but much of it is forums and other places like that where the main value is that new content is constantly generated - but needs to be re-scraped.

          • a96 12 hours ago

            If only sites agreed on putting a machine readable URL somewhere that lists all items by date. Like a site summary or a syndication stream. And maybe like a "map" of a static site. It would be so easy to share their updates with other interested systems.

            • int_19h 12 minutes ago

              Why should they agree to make life even easier for people doing something they don't want?

  • dpedu 2 days ago

    And it's all to sell more ads.

  • mrsilencedogood 2 days ago

    fortunately it is now easier than ever to do small-scale scraping, the kind yt-dlp does.

    I can literally just go write a script that uses headless firefox + mitmproxy in about an hour or two of fiddling, and as long as I then don't go try to run it from 100 VPS's and scrape their entire website in a huge blast, I can typically archive whatever content I actually care about. Basically no matter what protection mechanisms they have in place. Cloudflare won't detect a headless firefox at low (and by "low" I mean basically anything you could do off your laptop from your home IP) rates, modern browser scripting is extremely easy, so you can often scrape things with mild single-person effort even if the site is an SPA with tons of dynamic JS. And obviously at low scale you can just solve captchas yourself.

    I recently wrote a scraper script that just sent me a discord ping whenever it ran into a captcha, and i'd just go look at my laptop and fix it, and then let it keep scraping. I was archiving a comic I paid for but was in a walled-garden app that obviously didn't want you to even THINK of controlling the data you paid for.

    • wraptile 2 days ago

      > fortunately it is now easier than ever to do small-scale scraping, the kind yt-dlp does.

      this is absolutely not the case. I've been web scraping since 00s and you could just curl any html or selenium the browser for simple automation but now it's incredibly complex and expensive even with modern tools like playwright and all of the monthly "undetectable" flavors of it. Headless browsers are laughably easy to detect because they leak the fact they are being automated and that they are headless. Not to even mention all of the fingerprinting.

      • sharpshadow a day ago

        > modern browser scripting is extremely easy, so you can often scrape things with mild single-person effort even if the site is an SPA with tons of dynamic JS.

        I think he means the JS part is now easy to run and scrape, compared to the transition period from basic download scraping to JS-execution/headless-browser scraping. It is more complex, but a couple of years ago the tools weren't as evolved as they are now.

      • johnisgood 2 days ago

        +1

        I made a web scraper in Perl a few years ago. It no longer works because I need a headless browser now or whatever it is called these days.

        Web scraping is MUCH WORSE TODAY[1].

        [1] I am not yelling, just emphasizing. :)

      • immibis a day ago

        mozilla-unified/dom/base/Navigator.cpp - find Navigator::Webdriver and make it always return false, then recompile.

  • einpoklum 2 days ago

    Those days are not coming to an end:

    * PeerTube and similar platforms for video streaming of freely-distributable content;

    * BitTorrent-based mechanisms for sharing large files (or similar protocols).

    Will this be inconvenient? At first, somewhat. But I am led to believe that in the second category one can already achieve a decent experience.

    • dotancohen 2 days ago

      To how many content creators have you written to request them share their content on PeerTube or BitTorrent? How did they respond? How will they monetize?

      • einpoklum 2 days ago

        1. Zero

        2. N/A, but enough content creators on YT are very much aware of the kind of prison it is, especially in the years after the Adpocalypse.

        3. Obviously, nobody should be able to monetize the copying of content. If it is released, it is publicly released. But they can use LibrePay/Patreon/Buy me a coffee, they can sell merch or signed copies of things, they can do live appearances, etc.

        • a96 12 hours ago

          3. they already do, since YT just doesn't really pay and regularly flips out in weird ways

  • pjc50 2 days ago

    I think this is just another indication of how the web is a fragile equilibrium in a very adversarial ecosystem. And to some extent, things like yt-dlp and adblocking only work if they're "underground". Once they become popular - or there's a commercial incentive, like AI training - there ends up being a response.

  • elric a day ago

    Not only that, but soon it will require age verification and device attestation. Just in case you're trying to watch something you're not supposed to.

  • bjourne 2 days ago

    For now, yes, but soon CloudFlare and ever more annoying captchas may make that option practically impossible.

    • nutjob2 2 days ago

      You should be thankful for the annoying captchas, I hear they're moving to rectal scans soon.

  • pmdr 2 days ago

    > Days of just getting data off the web are coming to an end

    All thanks to great ideas like downloading the whole internet and feeding it into slop-producing machines fueling global warming in an attempt to make said internet obsolete and prop up an industry bubble.

    The future of the internet is, at best, bleak. Forget about openness. Paywalls, authwalls, captchas and verification scans are here to stay.

    • TeMPOraL 2 days ago

      The Internet was turned into a slop warehouse well before LLMs became a thing - in fact, a big part of why ChatGPT et al. have seen such extreme adoption worldwide is that they let people accomplish many tasks without having to inflict the shitfest that is the modern web on themselves.

      Personally, when it became available, the o3 model in ChatGPT cut my use of web search by more than half, and it wasn't because Google became bad at search (I use Kagi anyway) - it's because even the best results are all shit, or embedded in shit websites, and the less I need to browse through that, the better for me.

      • pmdr a day ago

        > The Internet was turned into a slop warehouse well before LLMs became a thing

        I suppose that's thanks to Google and their search algos favoring ad-ridden SEO spam. LLMs are indeed more appealing and convenient. But I fear that legitimate websites (ad-supported or otherwise) that actually provide useful information will be on the decline. Let's just hope then that updated information will find its way into LLMs when such websites are gone.

        • TeMPOraL a day ago

          In terms of utility as training data, the Internet is a poisoned well now, and the poison is becoming more potent over time. Part of it is the SEO spam and content marketing slop, both of which kept growing and accumulating. Part of it is even more slop produced by LLMs, especially by cheap (= weak) models, but also by LLMs in general (any LLM used to produce content is doing worse job at it than a model from subsequent generation, so it's kinda always suboptimal for training purposes). And now part of it are people mass-producing bullshit out of spite, just to screw with AI companies. SNR on the web is dropping like a brick falling into a black hole.

          It's a bit of a gamble at this point - will the larger models, or new architectures, or training protocols, be able to reject all that noise and extract the signal? If yes, then training on the Internet is still safe. If not, it's probably better for them to freeze the datasets blindly scraped from the Internet now, and focus on mining less poisoned sources (like books, academic papers, and other publications not yet ravaged by the marketing communications cancer[0], also ideally published before the last 2 years).

          I don't know which is more likely - but I'm not dismissing the possibility that the models will be able to process increasingly poisoned data sets just fine, if the data sets are large enough, because of a very basic and powerful idea: self-consistency. True information is always self-consistent, because it reflects the underlying reality. Falsehoods may be consistent in the small, but at scale they're not.

  • SV_BubbleTime 2 days ago

    Do you know what Accelerate means?

    I want them to go overboard. I want BigTech to go nuts on this stuff. I want broken systems and nonsense.

    Because that’s the only way we’re going to get anything better.

    • jdiff 2 days ago

      Accelerationism is a dead-end theory with major holes in its core. Or I should say, "their" core, because there are a million distinct and mutually-incompatible varieties. Everyone likes to say "gosh, things are awful, it MUST end in collapse, and after the collapse everyone will see things MY way." They can't all be right. And yet, all of them with their varied ideas still think it's a good idea to actively push to make things worse in order to bring on the collapse more quickly.

      It doesn't work. There aren't any collapses like that to be had. Big change happens incrementally, a bit of refactoring and a few band-aids at a time, and pushing to make things worse doesn't help.

      • exe34 2 days ago

        I'm not waiting for the collapse to fix things - I'm waiting for it so that I won't have any more distractions and I can go back to my books.

        • jdiff 2 days ago

          As I said, there aren't any collapses like that to be had. Heaven and Earth will be moved to make the smallest change necessary to keep things flowing as they were. Banks aren't allowed to fail. Companies, despite lengthy strings of missteps and billions burned on dead ends, still remain on top.

          You can step away from the world (right now, no waiting required). But the world can remain irrational longer than you can wait for it to step away from you, and pushing for more irrationality won't make a dent in that.

          • exe34 2 days ago

            Oh I think the world will push me away at the next Android update. If I can't root/firewall/adblock/syncthing/koreader, the mobile phone will simply become a phone again.

            • TeMPOraL 2 days ago

              Ain't that right, eh? It's not the end of the world. Just the end of a whole lot of nice and fun possibilities we've grown to enjoy.

          • immibis a day ago

            Everything that can't go on forever will eventually stop. On the other hand, the market can remain irrational longer than you can remain solvent.

            The basic governing principles of the economy were completely rewritten in 1971, were completely rewritten again in 2008, were completely rewritten again in 2020 - probably other times too - and there are only so many more things they can try. The USA is basically running as a pseudo-command economy at the top level now - how long do those typically last? - with big businesses being supported by the central bank.

            The economy should have collapsed in 1971, 2008 and 2020 (and probably other times) as well, but they kept finding new interventions that would have seemed completely ludicrous 20 years earlier. I mean, the Federal Reserve just buying financial assets? With newly printed money? (it still has a massive reserve of them, this program did not end, that money is still in circulation and it's propping a lot of economic numbers up)

            All predictions about when the musical chairs will end are probably wrong. The prediction that it'll end in the next N years is just as likely to be wrong as the prediction that it won't. Some would argue it already has ended, decades ago, and we are currently living in the financial collapse - how many years of income does it take to get a house now? The collapse of Rome took several centuries. At no point did the people think they were living in a collapsing empire. Each person just thought that how it was in their time was how it had always been.

      • hnfong 2 days ago

        Look at history, things improve and then things get worse, in cycles.

        During the "things get worse" phase, why not make it shorter?

        • jancsika 2 days ago

          Let's give it a shot.

          The year is 2003. Svn and cvs are proving to be way too clunky and slow for booming open source development.

          As an ethical accelerationist, you gain commit access to the repos for svn and cvs and make them slower and less reliable to accelerate progress toward better version control.

          Lo and behold, you still have to wait until 2005 for git to be released. Because git wasn't written to replace svn or cvs - it was written as the result of internal kernel politics wrt access to a closed-source source management program, BitKeeper. And since svn and cvs were already bad enough that the kernel devs didn't choose them, you making them worse wouldn't have affected their choice.

          Also, keep in mind that popularity of git was spurred by tools that converted from svn to git. So by making svn worse, you'd have made adoption of git harder by making it harder on open source devs to write reliable conversion tools.

          To me, this philosophy looks worse than simply doing nothing at all. And this is in a specific domain where you could at least make a plausible, constrained argument for accelerationism. Your comment instead seems to apply to accelerationism applied to software in general-- there, the odds of you being right are so infinitesimal as to be fatuous.

          In short, you'd do better playing the lottery because at least nothing bad happens to anyone else when you lose.

        • TeMPOraL 2 days ago

          > During the "things get worse" phase, why not make it shorter?

          Because it never gets better for the people actually living through it.

          I imagine those in favor of the idea of accelerating collapse aren't all so purely selfless that they're willing to see themselves and their children suffer and die, all so someone elses' descendants can live in a better world.

          Nah, they just aren't thinking it through.

        • a96 12 hours ago

          There's no cycle. It's just a long slide with illusionary changes in between.

        • hobs 2 days ago

          It doesn't foreshorten the cycle, it prolongs it and makes it worse.

    • nananana9 2 days ago

      If you showed me the current state of YouTube 8 years ago - multiple unskippable ads before each video, 5 midrolls for a 10 minute video, comments overran with bots, video dislikes hidden, the shorts hell, the dysfunctional algorithm, .... - I would've definitely told you "Yep, that will be enough to kill it!"

      At this point I don't know - I still have the feeling that "they just need to make it 50% worse again and we'll get a competitor," but I've seen too many of these platforms get 50% worse too many times, and the network effect wins out every time.

      • encom 2 days ago

        It's classic frog boiling. I want them (for whatever definition of "them") to just nuke the frog from orbit.

random29ah 2 days ago

It's almost funny, not to mention sad, that their player/page has been changed to be filled with tons of JS that makes less powerful machines lag.

For a while now, I've been forced to change "watch?v=" to "/embed/" to watch something in 480p on an i3 Gen 4, where the same video, when downloaded, uses ~3% of the CPU.
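
The substitution itself is mechanical; a quick sketch of the URL rewrite (plain stdlib string handling, nothing YouTube-specific):

```python
from urllib.parse import urlparse, parse_qs

def watch_to_embed(url: str) -> str:
    """Rewrite a youtube.com/watch?v=... URL to the /embed/ player URL."""
    parsed = urlparse(url)
    video_id = parse_qs(parsed.query)["v"][0]
    return f"https://www.youtube.com/embed/{video_id}"

print(watch_to_embed("https://www.youtube.com/watch?v=xvFZjo5PgG0"))
# -> https://www.youtube.com/embed/xvFZjo5PgG0
```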

However, unfortunately, it doesn't always work anymore.

https://www.youtube.com/watch?v=xvFZjo5PgG0 https://www.youtube.com/embed/xvFZjo5PgG0

While they worsen the user experience, other sites optimize their players and don't seem to care about downloaders (pr0n sites, for example).

  • Too 2 days ago

    Many performance problems on YouTube are because they now force everyone to use the latest heavy codecs, even when your hardware does not have acceleration for them. I have a laptop that is plenty powerful for everything else and plays 4K h264 no problem. 720p on YouTube, on the other hand, turns it into a hot plate after a minute and grinds everything to a halt.

    There are browser extensions like h264ify that block newer codecs, but WHY??? Does nobody at YouTube care about the user experience? It's easier and more reliable to just download the videos.

    • kasabali a day ago

      Nah, their page and player are ridiculously heavy and slow regardless of the video codec.

  • skydhash 2 days ago

    Put that next to GitHub. The app is nearly unusable on an 8th-gen i5; often I just download a snapshot to browse locally.

  • oybng 2 days ago

    You are not alone. In Q1 2025 I was forced to adopt the embed player. In Q3 2025, Google intentionally broke the embed player. Now the only YouTube access I have is via yt-dlp. Long live yt-dlp and its developers.

  • bArray 2 days ago

    Personally I am looking to get away from Youtube and looking towards some form of PeerTube/peer-based platform.

Andrews54757 2 days ago

Nsig/sig - Special tokens which must be passed to API calls, generated by code in base.js (the player code). This is what has broken for yt-dlp and other third party clients. Instead of extracting the code that generates those tokens (e.g. using regular expressions) as we used to, we now need to run the whole base.js player code to get these tokens, because the logic is spread out all over the player code.
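
For illustration, the old regex-extraction strategy can be sketched like this; the player snippet, function name, and "interpreter" below are all invented toys (the real base.js is vastly more obfuscated, which is exactly why this approach stopped working):

```python
import re

# Invented stand-in for a tiny fragment of base.js; the real file is
# megabytes of obfuscated code.
PLAYER_JS = "var A={x:3};function qZ(a){a=a.split('');a.reverse();return a.join('')};"

def extract_nsig_source(player_js: str, func_name: str):
    """Pull the argument name and body of the named transform function out
    of the player JS with a regex -- the strategy that breaks once the
    token logic is spread across the whole file."""
    match = re.search(
        r'function %s\((\w+)\)\{(.*?)\};' % re.escape(func_name),
        player_js,
    )
    if not match:
        raise ValueError('transform function not found')
    return match.group(1), match.group(2)

def run_transform(arg_name: str, body: str, value: str) -> str:
    # Toy stand-in for evaluating the extracted JS: this sample function
    # just reverses its input. A real client needs an actual JS runtime
    # here, which is the point of yt-dlp adopting Deno.
    return ''.join(reversed(value))

arg, body = extract_nsig_source(PLAYER_JS, 'qZ')
print(run_transform(arg, body, 'abc123'))  # -> 321cba
```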

PoToken - Proof of origin token which Google has lately been enforcing for all clients, without which video requests fail with a 403. On Android it uses DroidGuard; for iOS, it uses built-in app integrity APIs. For the web it requires that you run a snippet of JavaScript code (the challenge) in the browser to prove that you are not a bot. Previously, you needed an external tool to generate these PoTokens, but with the Deno change yt-dlp should be capable of producing these tokens by itself in the near future.

SABR - Server side adaptive bitrate streaming, used alongside Google's UMP protocol to allow the server to have more control over buffering, given data from the client about the current playback position, buffered ranges, and more. This technology is also used to do server-side ad injection. Work is still being done to make 3rd party clients work with this technology (sometimes works, sometimes doesn't).

Nsig/sig extraction example:

- https://github.com/yt-dlp/yt-dlp/blob/4429fd0450a3fbd5e89573...

- https://github.com/yt-dlp/yt-dlp/blob/4429fd0450a3fbd5e89573...

PoToken generation:

- https://github.com/yt-dlp/yt-dlp/wiki/PO-Token-Guide

- https://github.com/LuanRT/BgUtils

SABR:

- https://github.com/LuanRT/googlevideo

EDIT2: Added more links to specific code examples/guides

  • ACCount37 2 days ago

    If you ever wondered why the likes of Google and Cloudflare want to restrict the web to a few signed, integrity-checked browser implementations?

    Now you know.

    • jasode 2 days ago

      >If you ever wondered why the likes of Google and Cloudflare want to restrict the web

      I disagree with the framing of "us vs them".

      It's actually "us vs us". It's not just us plebians vs FAANG giants. The small-time independent publishers and creators also want to restrict the web because they don't want their content "stolen". They want to interact with real humans instead of bots. The following are manifestations of the same fear:

      - small-time websites adding Anubis proof-of-work

      - owners of popular Discord channels turning on the setting for phone # verification as a requirement for joining

      - web blogs wanting to put a "toll gate" (maybe utilize Cloudflare or other service) to somehow make OpenAI and others pay for the content

      We're long past the days of colleagues and peers of ARPANET and NFSNET sharing info for free on university computers. Now everybody on the globe wants to try to make a dollar, and likewise, they feel dollars are being stolen from them.

      • btown 2 days ago

        But this, too, skips over some nuance. There are a few types of actors here:

        - small content creators who want to make their content accessible to individuals

        - companies that want to gobble up public data and resell it in a way that destroys revenue streams for content creators

        - gatekeepers like Cloudflare who want to ostensibly stop this but will also become rent-extractors in the process

        - users who should have the right to use personal tools like yt-dlp to customize their viewing experience, and do not wish to profit at the expense of the creators

        We should be cautious both that the gatekeepers stand to profit from their gatekeeping, and that their work inhibits users as well.

        If creators feel this type of user (often a dedicated fan and would-be promoter) is a necessary sacrifice to defend against predatory data extractors… then that’s absolutely the creator’s choice, but you can’t say there’s a unified “us” here.

        • TeMPOraL 2 days ago

          But then it's not (small creators + users) vs. the other parties you listed. Small creators, like small businesses, often exhibit the worst kinds of greed and exploitative behavior.

          Also, there's a lot of misalignment between users and providers at the cultural level - society has yet to fully process the implications of the "digital revolution" (and the copyright industry meddling with everything isn't helping). A big chunk of that boils down to the same thing that started "the war on general-purpose computing": producers have opinions on how their products should be used, and want to force consumers to use them only as prescribed.

          Whether it's because they want to exploit consumers through a side channel (e.g. ads), or to "protect intellectual property", or because they see artistic value in the integrity of their creation, or because they think they know better than customers - the reasons are many, but underneath them all is the core question society hasn't yet worked out: whether, and to what degree, producers are even morally entitled to that kind of control.

          My personal answer is: they're not (nor are they entitled to their old business models). But then it's producers, not consumers, who have all the money and control here.

      • skydhash 2 days ago

        > small-time websites adding Anubis proof-of-work

        Those were already public. The issue is AI bots DDoS-ing the server. Not everyone has infinite bandwidth.

        > owners of popular Discord channels turning on the setting for phone # verification as a requirement for joining

        I still think Discord is a weird channel for community stuff. There are a lot of different formats for communication, but people are defaulting to chat.

        > web blogs wanting to put a "toll gate" (maybe utilize Cloudflare or other service) to somehow make OpenAI and others pay for the content

        Paid content is good (Coursera, O'Reilly, Udemy, ...). But a lot of these services want a free tier powered by ads (for audience?).

        ---

        The fact is, we have two main bad actors: AI companies hammering servers and companies that want to centralize content (that they do not create) by adding gatekeeping extension to standard protocols.

      • bayindirh 2 days ago

        > Now everybody on the globe wants to try to make a dollar, and likewise, they feel dollars are being stolen from them.

        I'm not in it for the dollar. I just want the licenses I put on my content/code to be respected, that's all. IOW, I don't want what I put out there as free forever (as in speech and beer) to be twisted and monetized by the people who are in this for the dollar.

      • pryelluw 2 days ago

        I don’t feel like dollars are stolen from me. It’s more of companies abusing my goodwill to publish information online. From higher bills as a result of aggressive crawling, to copying my work and removing all copyright/licensing from the code. Sure, fair use and all, but when they return the same exact code it just makes me wonder.

        Nowadays, producing anything feels like being the cow's udder.

      • jrochkind1 2 days ago

        I want my content borrowed/shared, and I still need to be engaged in this stuff because the poorly behaved distributed bots that have arisen in the past year are trying to take boundless resources from my site(s), resources that I cannot afford.

      • jrm4 2 days ago

        Then some of those small people are wrong too.

        I wish we could all just stop fighting the truth of the tech -- it costs ZERO to make copies of things, and adjust accordingly.

        Patreon (and keep it real, OnlyFans) are roughly the only viable long term models.

      • einpoklum 2 days ago

        > The small-time independent publishers and creators also want to restrict the web because they don't want their content "stolen".

        I'm sure some music creators may have, years ago, been against CD recorders, or platforms like Napster or even IRC-based file transfer for sharing music. Hell, maybe they were even against VCRs back in the day. But they were misguided at best.

        People who want to prevent computer users from freely copying data are, in this context at least, part of "them" rather than "us".

      • bitwize 2 days ago

        Duh. I've known this for decades. The biggest advocates for DRM I've known are small-time content creators: authors, video producers, musicians. They've been saying the same thing since the 90s: without things like DRM, their stuff would be pirated, and they'd like to earn a living doing what they love instead of grinding at a day job to support themselves while everybody benefits from their creative output. In addition, major publishers and record labels won't touch stuff that's been online because of the piracy risk. They don't want to make an investment in smaller creators without a return in the form of sales of copies. That last bit is less true of music now than it used to be because of streaming and stuff, but the principle still applies.

        This is why the DMCA will never be repealed, DRM will never go away, and there is no future for general-purpose computing. People want access to digital content, but the creators of that content wouldn't release it at all if they knew it could be copied endlessly by whoever receives it.

        • goku12 a day ago

          That isn't entirely true. Perhaps it's because small content creators aren't a monolithic group. There are a few who try the alternative approaches and succeed. For example, whenever buying ebooks, I first check if the author sells it directly or through small publishers. It's always a better deal if they do. Cheaper than what you pay on amzn, DRM-free and occasionally lifetime free updates (eg: The Kubernetes book by Nigel Poulton). Despite the lower price, the author gets most, if not all of what you pay. They're sometimes liberal with the sharing policy too. They ask you to not share it around in large numbers, while conceding that just a copy or two is expected. I find this to be a reasonable demand. Therefore I encourage people to buy a copy for themselves if they like the book.

          I have heard of someone trying this approach with music albums and succeeding at it. The album is more likely to go viral due to the ease of sharing, while you'll always find consumers who volunteer to pay you. While the returns per copy are low, the large number of copies means that your profits may be higher than if it were DRM-encumbered. Musicians may also like the fact that there are no powerful middlemen to contend with. In fact, this is what YouTube creators already do when they choose alternative monetization paths like Patreon.

          What's really needed is for people to support and encourage this model and such creators. We used to blame consumers, saying that people choose convenience and short-term savings over long-term market health. But that's no longer applicable. People are so fed up with being exploited under consumerism that they've started boycotting these big players to regain their independence and self-sufficiency. The real issue preventing open digital markets is just the lack of awareness that they exist. That message has to be spread somehow.

          • a96 12 hours ago

            https://en.wikipedia.org/wiki/Useful_idiot is the type of person who will speak against their own and the common good because someone told them it's bad.

            Just look at the hordes of people advocating Brave, which is a serial scam company's project.

      • mschuster91 2 days ago

        > The small-time independent publishers and creators also want to restrict the web because they don't want their content "stolen"

        ... or just keep their site on the Internet. There hasn't been any major progress on sanctioning bad actors - be it people running vulnerable IoT crap that ends up being taken over by a botnet, cybercriminals and bulletproof hosters, or nation state actors. As long as you don't attack targets from your own geopolitical class (i.e. Russians don't attack Russians, a lot of malware will just quit if it spots Russian locale), you can do whatever the fuck you want.

        And that is how we end up with darknet services where you can trivially order a DDoS taking down a website you don't like or, if you manage to get your opponent's IP leaked during an online game, their residential IP address. Pay with whatever shitcoin you have, and no one is any wiser who the perpetrator is.

      • mrguyorama 2 days ago

        >The small-time independent publishers and creators also want to restrict the web

        Oh really? Does Linus's Floatplane go to this extent to prevent users from downloading stuff? Does Nebula? Does whatever video site that gun YouTuber runs do this?

        Does Patreon?

      • johnebgd 2 days ago

        It’s like we are living in an affordability crisis and people are tired of 400 wealthy billionaires profiting from people's largesse in the form of free data/tooling.

      • krageon a day ago

        It's us vs them. What big corps want is fundamentally adversarial due to its motivation. I like to think that humans can, conceptually, not be your enemy.

      • greenavocado 2 days ago

        When Nixon slammed the gold window shut so Congress could keep writing blank checks for Vietnam and the Great Society, it wasn't just some monetary technicality. It was the moment America broke its word to the world and broke something fundamental in us too. Suddenly money wasn't something you earned through sweat or innovation anymore. It became something politicians and bankers could conjure from thin air whenever they wanted another war, another corporate bailout, another vote-buying scheme.

        Fast forward fifty years and smell the rot. That same fiscal recklessness (Congress spending like drunken sailors while pretending deficits don't matter) has bled into every pore of society. Why wouldn't it? When BlackRock scoops up entire neighborhoods with Fed-printed cash while your kid can't afford a studio apartment, people notice. When Tyson jacks up chicken prices to record profits while diners can't afford bacon, people feel it. And when some indie blogger slaps a paywall on their life's work because OpenAI vacuumed their words to train ChatGPT? That's the same disease wearing digital clothes.

        We're all living in Nixon's hangover. The "us vs us" chaos you see (Discord servers demanding your phone number, small sites gatekeeping against bots, everyone scrambling to monetize scraps) is what happens when trust evaporates. Just like the dollar became Monopoly money after '71, everything feels devalued now. Your labor? Worth less each year. Your creativity? Someone's AI training fuel. Your neighborhood? A BlackRock asset on a spreadsheet.

        And Washington's still at it! Printing trillions to "save the economy" while inflation eats your paycheck alive. Passing trillion-dollar "infrastructure bills" that somehow leave bridges crumbling but defense contractors swimming in cash. It's the same old shell game: socialize the losses, privatize the gains. The factory worker paying $8 for eggs understands this. The nurse getting lectured about "wage spirals" while hospital CEOs pocket millions understands this. The teenager locking down their Discord because bots keep spamming scams? They understand this.

        Weimar happened when money became meaningless. 1971 happened when promises became meaningless. What you're seeing now (the suspicion, the barriers, the every-man-for-himself hustle) is what bubbles up when people realize the whole system's running on fumes. The diner owner charging $18 for a burger isn't greedy. The blogger blocking AI scrapers isn't a Luddite. They're just building levees against a flood Washington started with a printing press half a century ago.

        The tragedy is that we're all knee-deep in the same muddy water, throwing sandbags at each other while the real architects of this mess (the political grifters, the Fed bankers, the extraction-engine capitalists) watch dry-eyed from their high ground. Until we stop accepting their counterfeit money and their counterfeit promises, we'll keep drowning in this rigged game. The gold window didn't just close in '71. The whole damn social contract rusted shut.

        • zahlman 2 days ago

          What does any of this have to do with yt-dlp?

          • dotancohen 2 days ago

            Ostensibly the same forces that drove Nixon to move the dollar off of gold, are driving Google to destroy third party YouTube clients.

        • chrisweekly 2 days ago

          Wow. That was eloquent, and coherent, and depressing. I'd be grateful for someone to counter with something less dismal. Good things are still happening in the world. A positive future remains possible -- but we have to be able to imagine it to bring it into being.

          • Dylan16807 a day ago

            Semi-coherent. The greed and corruption are a real theme, but would still be 100% possible while on the gold standard.

            • immibis a day ago

              They'd have to physically steal gold from people, and people would notice that. Or they could mine more gold, but that's hard. Or they could publicly and officially change the exchange rate (of dollars to gold), and people would notice that politicians make it go down, the same way that people notice when politicians make taxes go up (they notice way more than when prices other than taxes go up).

              With the current system, they (the central bank) can just increase some people's numbers in some spreadsheets, and the effects are extremely indirect. Nominally this is in exchange for assets of equal value, so the situation returns to normal after some time, but that hasn't been happening - the amount of money created this way has not been decreasing at any meaningful rate.

              • Dylan16807 a day ago

                Just selling bonds would have raised more than enough money to give out corruptly.

                And corporate bailouts are downright cheap compared to the federal budget.

                • immibis 13 hours ago

                  And it would be impossible to bail out those bonds when they defaulted, nor to reuse the bonds to back money.

                  • Dylan16807 12 hours ago

                    > And it would be impossible to bail out those bonds when they defaulted

                    Well the US hasn't defaulted so changing how a default works wouldn't really affect the trajectory we took. And a default would be pretty catastrophic either way.

                    > nor to reuse the bonds to back money.

                    I don't know what you mean here.

              • habinero a day ago

                Considering the number of panics, depressions, and general economic insanity that happened on the gold standard in the 1800s, none of this is true.

          • sillyfluke 2 days ago

            Well on the bright side blood avocados are still green. Which the poster also seems to appreciate.

            • greenavocado 2 days ago

              Lately I've had to resort to buying avocados from Costco in those little plastic cups because whole avocados in many supermarkets in my region have started to spoil too quickly. Sad.

          • attila-lendvai 2 days ago

            Until people learn money, the concept, nothing will change. And that in turn will hardly happen while the bad guys own childhood (compulsory schooling).

        • habinero a day ago

          Sir, this is a Wendy's.

          The gold standard is objectively terrible economic policy and "society was better when I was young" has been a meme for thousands of years.

          It feels nice to attribute everything bad to this one weird trick, but it's fake.

    • mtrovo 2 days ago

      I don't know, it's really hard to blame them. In a way, the next couple of years are going to be a battle to balance easy access to info with compensation for content creators.

      The web as we knew it before ChatGPT was built around the idea that humans have to scavenge for information, and while they're doing that, you can show them ads. In that world, content didn't need to be too protected because you were making up for it in eyeballs anyway.

      With AI, that model is breaking down. We're seeing a shift towards bot traffic rather than human traffic, and information can be accessed far more effectively and, most importantly, without ad impressions. So, it makes total sense for them to be more protective about who has access to their content and to make sure people are actually paying for it, be it with ad views or some other form of agreement.

      • SV_BubbleTime 2 days ago

        Don’t worry!

        Ads are coming to AI. The big AI push next will be context, your context all the time. Your phone will “help” and get all your data to OpenAI…

        “It looks like you went for a run today? Good job, you deserve a treat! Studies show a little ice cream after a long run is effectively free calories! It just so happens the nearest Dairy Queen is running a promotion just for the next 30 minutes. I’m getting you directions now.”

        • codedokode 2 days ago

          It would not be that much of a problem if ads promoted healthy and tasty food, but they will probably promote ice cream made from powder and chemicals emulating the taste of berries rather than from milk and fresh-picked berries.

          • therein 2 days ago

            It still would be. Loss of agency. Ads are text and images you see. Native advertising in a chatbot conversation is a third party bidding their way into your conversation. Machine showing you an ad versus injecting intention into your context are very different things.

        • bitwize 2 days ago

          This is why, contra Louis Rossman, Clippy was not a good thing for humanity.

        • nebula8804 2 days ago

          If open source AI becomes good enough would this model hold? I guess they will try to shut down the open models as they come close?

        • Noumenon72 2 days ago

          "I'm calling the user analysis tool... it seems this user is health conscious. I'll suggest a trail app for their next run instead of ice cream."

      • chrisweekly 2 days ago

        I think your point is valid, but FTR the "shift" happened long before ChatGPT; bot traffic has exceeded that of humans for over a decade.

    • th0ma5 2 days ago

      Weird, people talking about small-time creators wanting DRM? I've never seen that... usually they'd be hounding for any attention. I don't know why multiple accounts are seemingly independently bringing this up, but maybe it's trying to muddy the waters? This concept?

    • supriyo-biswas 2 days ago

      At least for YouTube, viewbotting is very much a thing, which undermines trust in the platform. Even if we were to remove Google ads from the equation, there’s nothing preventing someone from crafting a channel with millions of bot-generated views and comments in order to get paid sponsor placements, etc.

      The reasons are similar for Cloudflare, but their stances are a bit too DRMish for my tastes. I guess someone could draw the lines differently.

      • ACCount37 2 days ago

        If any of this was done to combat viewbotting, then any disruption to token calculation would prevent views from being registered - not videos from being downloaded.

        • supriyo-biswas 2 days ago

          From my perspective both problems are effectively the same. I want to count unique users by checking for asset downloads and correlating unique session IDs. People can request the static assets directly, leading to viewbotting and wasted egress bandwidth.

          The solution: have clients prove they are a legitimate client by running some computationally intensive JS that interacts with DOM APIs, etc. (which is not in any way unique to big tech, see Anubis/CreepJS etc.)
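          A hashcash-style proof-of-work in the spirit of Anubis can be sketched in a few lines (simplified; the real thing runs as JavaScript in the browser and tunes difficulty dynamically):

```python
import hashlib
from itertools import count

def solve(challenge: str, difficulty: int) -> int:
    """Find a nonce whose SHA-256 over challenge+nonce starts with
    `difficulty` hex zeros: cheap for one human, costly at crawler scale."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce

def verify(challenge: str, nonce: int, difficulty: int) -> bool:
    # Verification is a single hash, so the server's cost stays tiny.
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

nonce = solve("session-abc123", difficulty=4)
assert verify("session-abc123", nonce, 4)
```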

          The impact on the hobbyist use case is, to them, just collateral damage.

          • ACCount37 2 days ago

            No, the difference is: if I'm fighting viewbots, I want zero cues to be emitted to the client. The client should NEVER know whether its view is being counted or not, or why.

            Having no reliable feedback makes it so much harder for a viewbotter to find a workaround.

            If there's a visible block on video downloads? They're not fighting viewbots with that.

            • supriyo-biswas 2 days ago

              For general spam deterrence I agree, but how do you avoid paying for the bandwidth in this case?

      • wzdd 2 days ago

        Youtube has already accounted for this by using a separate endpoint to count watch stats. See the recent articles about view counts being down attributed to people using adblockers.

        Even if they hadn't done that, you could craft millions of bot-generated views using a legitimate browser and some automation, and the current update doesn't change that.

        So I'd say Occam's razor applies and Youtube simply wants to be in control of how people view their videos so they can serve ads, show additional content nearby to keep them on the platform longer, track what parts of the video are most watched, and so on.

      • rwmj 2 days ago

        I'm sure that's a problem for Youtube. What does it have to do with me rendering Youtube videos on my own computer in the way I want?

        • pwg 2 days ago

          > What does it have to do with me rendering Youtube videos on my own computer in the way I want?

          It doesn't. That interferes with google's ad revenue stream, which is why YT continues to try to make it harder and harder to do so.

        • bitwize 2 days ago

          You don't have that right. When you view copyrighted content, you do so at the pleasure of the licensor.

          • rwmj 2 days ago

            How you watch copyrighted content has never been something that copyright has controlled.

            • bitwize a day ago

              If the content needs to be copied or downloaded in order to be watched, you may do so exclusively under terms set by the licensor, period. You may not even get fair use rights, as to get the content in the first place you might have to agree to terms of service waiving them, and being found to use the content in an unapproved way would be grounds for cutting off your access.

      • imiric 2 days ago

        Like another comment mentioned: that's a problem for YouTube to solve.

        They pay a lot of money to many smart people who can implement sophisticated bot detection systems, without impacting most legitimate human users. But when their business model depends on extracting value from their users' data, tracking their behavior and profiling them across their services so that they can better serve them ads, it goes against their bottom line for anyone to access their service via any other interface than their official ones.

        This is what these changes are primarily about. Preventing abuse is just a side benefit they can use as an excuse.

      • sporkxrocket 2 days ago

        As a viewer, this is not even remotely my problem.

      • ForHackernews 2 days ago

        > which undermines trust in the platform

        What? What does this even mean? Who "trusts" youtube? It's filled with disinformation, AI slop and nonsense.

        • supriyo-biswas 2 days ago

          An example is given right after that sentence. Trustworthiness of the content is an entirely separate thing.

        • attila-lendvai 2 days ago

          you forgot the excessive censorship, of course to "fight disinformation"...

          it even became an interesting signal which "disinformation" they deem censorship-worthy.

    • eek2121 2 days ago

      The fact you shoved Cloudflare in there shows your ignorance of the actual problems and solutions offered.

    • codedokode 2 days ago

      There could be valid reasons for fighting downloaders, for example:

      - AI companies scraping YT without paying YT, let alone creators, for training data. Imagine how much data YT has.

      - YT competitors in other countries scraping YT to copy videos, especially in countries where YT is blocked. Some such companies offer a "move all my videos from YT" feature to encourage creators to migrate.

      • transcriptase 2 days ago

        >AI companies

        Like Google?

        >scraping YT without paying YT let alone creators for training data

        Like Google has been doing to the entire internet, including people’s movement, conversations, and habits… for decades?

        • codedokode 2 days ago

          > Like Google?

          Like Google competitors obviously.

          > Like Google has been doing to the entire internet, including people’s movement, conversations, and habits… for decades?

          Yes, but if you allowed Google to index your site (companies even spent money to make sites more indexable), Google used to bring customers in return; AI companies bring back nothing. They are just freeloaders.

      • toomuchtodo 2 days ago

        - Enforce views of ads

        (not debating the validity of this reason, but this is the entire reason Youtube exists, to sell and push ads)

      • baxuz 2 days ago

        Then they should allow a download API for paying customers.

        • balder1991 2 days ago

          But even if you’re a paying customer, the creator is only paid if you watch it on the platform.

        • codedokode 2 days ago

          Music labels publish music on YT in exchange for ad revenue; they won't be happy if someone downloads their music for free. And making music is expensive - google how much just a single drum mic costs, and you need a lot of them.

          • baxuz 2 days ago

            > for paying customers

            • codedokode 2 days ago

              YT shares income from subscriptions with music labels? I hadn't heard about this, and even if they did, a download would have to be paid at a much higher rate than a view, because after downloading a person could potentially listen to a track a hundred times in a row.

              • diet_mtn_dew 2 days ago

                Youtube premium includes Youtube Music, which is Alphabet's streaming service, and I assume that they are paying the same fees as everyone else.

                • codedokode 2 days ago

                  > as everyone else

                  "Everyone else" does not allow downloading music in an unencrypted format, so it makes sense that YT doesn't allow it either.

        • dylan604 2 days ago

          It's not YT's content though.

      • Chris2048 2 days ago

        Who says these are valid?

      • supriyo-biswas 2 days ago

        Why is this being downvoted? Are people really going to shoot the messenger and fail to see why a company may want to protect its competitive position?

    • gjsman-1000 2 days ago

      Everything trends towards centralization on a long enough period.

      I laugh at people who think ActivityPub or Mastodon or BlueSky will save us. We already had that, it was called e-mail, look what happened once everyone started using it.

      If we couldn't stop the centralization effects that occurred on e-mail, any attempt to stop centralization in general is honestly a utopian fool's errand. Regulation is easier.

      • toomuchtodo 2 days ago

        I am a big supporter of AT Protocol, and I contribute some money to a fund to build on it. Why laugh at running experiments? Nothing will "save us," it is a constant effort as long as humans desire to use these systems to connect. Email exists today, and is very usable still as a platform that cannot be captured. The consolidation occurred because people do not want to run their own servers, so we should build for that! Bluesky and AT Protocol are experiments in building something different, with different use cases and capabilities, that also cannot be captured. Just like email. You can run your own PDS. You can run your own stack from PDS to users "end to end" if you so choose. You can pay to do both of these tasks. No one can buy this or take it away from you, if it is built on protocols instead of a platform someone can own and control.

        Regulation would be great. The EU does it well. It is lacking in the US, and will be for some time. And so we have to downgrade to technical mitigations against centralization until regulation can meet the burden.

      • numpad0 2 days ago

        e-mail can't handle the 24/7, 1k posts/sec traffic that Twitter was about. A more appropriate analogue is IRC.

  • dylan604 2 days ago

    > For the web it requires that you run a snippet of javascript code (the challenge) in the browser to prove that you are not a bot.

    How does this prove you are not a bot? How does this code not work in headless Chromium if it's just client-side JS?

    • Andrews54757 2 days ago

      Good question! Indeed you can run the challenge code using headless Chromium and it will function [1]. They are constantly updating the challenge however, and may add additional checks in the future. I suppose Google wants to make it more expensive overall to scrape Youtube to deter the most egregious bots.

      [1] https://github.com/LuanRT/BgUtils

      • toomuchtodo 2 days ago

        LLMs solve challenges. Can we not solve these challenges with sufficiently advanced LLMs? Gemini even, if you're feeling lulz-y.

        • balder1991 2 days ago

          Yes, by spending money.

          • toomuchtodo 2 days ago

            I agree, in some cases and depending on LLM endpoint, some money may need to be spent to enable ripping. But is it cheaper than paying Youtube/Google? That is the question.

            • dylan604 2 days ago

              sometimes, it's not about the cost. it's about who/where the money is being spent.

    • Beretta_Vexee 2 days ago

      Once JavaScript is running, it can perform complex fingerprinting operations that are difficult to circumvent effectively.

      I have a little experience with Selenium headless on Facebook. Facebook tests fonts, SVG rendering, CSS support, screen resolution, clock and geographical settings, and hundreds of other things that give it a very good idea of whether it's a normal client or Selenium headless. Since it picks a certain number of checks more or less at random and they can modify the JS each time it loads, it is very, very complicated to simulate.

      Facebook and Instagram know this and allow it below a certain limit because it is more about bot protection than content protection.

      This is the case when you have a real web browser running in the background. Here we are talking about standalone software written in Python.

      • dylan604 2 days ago

        why can a bot dev not just get all of these values from the laptop's settings and hardwire the headless version to have the same values?

        • Beretta_Vexee 2 days ago

          Because the expected values are not fixed, it is possible to measure response times and errors to check whether something is in the cache or not, etc.

          There are a whole host of tricks relating to rendering and positioning at the edge of the display window and canvas rather than the window, which allow you to detect execution without rendering.

          To simulate all this correctly, you end up with a standard browser, standard execution times, full rendering in the background, etc. No one wants to download their YouTube video at 1x speed and wait for the adverts to finish.

  • Aperocky 2 days ago

    And barely a few days after google did it the fix is in.

    Amazing how they simply couldn't win - you deliver content to client, the content goes to the client. Could be the largest corporation of the world and we still have yt-dlp.

    That's why all of them wanted proprietary walled gardens where they would be able to control the client too - so you get to watch the ads or pay up.

nikcub 2 days ago

Just the other day there was a story posted on hn[0][1] that said YouTube secretly wants downloaders to work.

It's always been very apparent that YouTube is doing _just enough_ to stop downloads while also supporting a global audience of 3 billion users.

If the whole world had modern iPhones or Android devices, you can bet they'd straight up DRM all content.

[0] https://windowsread.me/p/best-youtube-downloaders

[1] https://news.ycombinator.com/item?id=45300810

  • trenchpilgrim 2 days ago

    More specifically, yt-dlp uses legacy API features supported for older smart TVs which don't receive software updates. Eventually once that traffic drops to near zero those features will go away.

    • knowitnone3 2 days ago

      So more people using yt-dlp will increase the likelihood of Youtube keeping legacy APIs?

      • ddtaylor a day ago

        They probably have metrics they track, such as purchases or customer activity.

  • Aurornis 2 days ago

    That conspiracy theory never even made sense to me. Why would anyone think that a payment and ad-supported content platform secretly wants their content to be leaked through ad and payment free means?

    • judge2020 2 days ago

      Mainly the theory that, if you can’t use downloaders to download videos, then people will no longer see YT as the go-to platform for any video hosting and will consider alternatives.

      And I call that a theory for a reason. Creators can still download their videos from YT Studio, and I'm not sure how much importance there is on being able to download any video ever (worst case scenario, people could screen record videos).

      • Liquix 2 days ago

        i'd argue that 95%+ of users (creators and viewers) couldn't care less about downloading videos. creators use youtube because it's where the viewers and money are, viewers use youtube because it's where all the content is. none of them are going to jump ship if yt-dlp dies.

        also, one could assume that the remaining 5% are either watching with vlc/mpv/etc or running an adblocker. so it's not like google is going to lose ad revenue by breaking downloaders like yt-dlp. grandparent comment (legacy smart TV support) is the much more likely explanation

        • dredmorbius 2 days ago

          It's not the 95% you're concerned about, it's the 1%, or 0.0001%, who are top content creators, who both archive their own production and use YT as a research tool themselves (whether simple "reply videos" or something more substantive). Ultimately Google will kill the goose and lose the eggs.

          Those creators are what drive the bulk of viewers to the platform.

          Though, come to think of it, as YT's become increasingly obnoxious to use (the native Web client is utterly intolerable, front-ends such as Invidious are increasingly fragile/broken, and yt-dlp is, as TFA notes, becoming bogged down in greater dependencies), I simply find myself watching (or, as is generally my preference, listening to) far less from the platform.

          I may be well ahead of the pack, but others may reach similar conclusions in 5--10 years. Or when a less-annoying alternative clearly presents itself.

    • DecentShoes a day ago

      I agree. All I can think of is that surely a lot of commentary YouTubers rely on YouTube downloaders to use fair-use snippets of other people's videos in their commentary videos?

    • attila-lendvai 2 days ago

      being a de facto monopoly has a lot of value that is hard to quantify...

      e.g. censorship, metadata, real time society-wide trends, etc...

      google is way-way more than just a company.

bArray 2 days ago

Ronsor [1] and reply by seproDev:

> Why can't we embed a lightweight interpreter such as QuickJS?

> @Ronsor #14404 (comment)

The linked comment [2]:

> @dirkf This solution was tested with QuickJS which yielded execution times of >20 minutes per video

How on earth can it be that terrible compared to Deno?

[1] https://github.com/yt-dlp/yt-dlp/issues/14404#issuecomment-3...

[2] https://github.com/yt-dlp/yt-dlp/issues/14404#issuecomment-3...

  • jlokier 2 days ago

    > How on earth can it be that terrible [>20 minutes] compared to Deno?

    QuickJS uses a bytecode interpreter (like Python, famously slow) and is optimised for simplicity and correctness, whereas Deno uses a JIT compiler (like Java, .NET and WASM). Specifically, Deno uses V8, the same JIT compiler as Chrome, one of the most heavily-optimised in the world.

    That doesn't normally lead to such a large factor in time difference, but it explains most of it, and depending on the type of code being run, it could explain all of it in this case.

    QuickJIT (a fork of QuickJS that uses TCC for JIT) might yield better results, but still slower than Deno.

    • ynx 2 days ago

      JIT is still banned by policy on a LOT of mobile devices, meaning that previous usage of yt-dlp on mobile is now effectively unsupportable.

      • SpaghettiCthulu 2 days ago

        I haven't tested this, but in theory running deno with `--v8-flags='--jitless'`[^1][^2] will disable the JIT compiler.

        [^1]: https://v8.dev/blog/jitless

        [^2]: https://docs.deno.com/runtime/getting_started/command_line_i...
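
        For what it's worth, a thin wrapper around that invocation might look like the sketch below. This is an untested assumption: it presumes a `deno` binary on PATH, and the `challenge.js` filename (and the wrapper itself) are purely illustrative, not anything yt-dlp actually ships.

```python
# Sketch: running a script under Deno with V8's JIT disabled, using the
# --v8-flags=--jitless flag cited above. Requires `deno` on PATH; the
# "challenge.js" filename is illustrative.
import subprocess

def jitless_cmd(script: str = "challenge.js") -> list[str]:
    """Build the Deno invocation with the JIT turned off."""
    return ["deno", "run", "--v8-flags=--jitless", script]

def run_jitless(script: str = "challenge.js") -> str:
    """Run the script and return its stdout."""
    result = subprocess.run(
        jitless_cmd(script), capture_output=True, text=True, check=True
    )
    return result.stdout
```

        Whether the resulting interpreter-only performance is still usable is exactly the open question raised below.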

        • Klonoar 2 days ago

          If the performance drops due to lack of JIT, then GP's comment about yt-dlp being effectively unsupportable on mobile might still hold weight.

          • int_19h 2 days ago

            It's probably a lot more than that. A well-optimized bytecode interpreter is not 100x slower than native code. But also e.g. QuickJS uses refcounting with the occasional tracing to remove cycles, and while it's a simple and reliable approach, it's not fast.

      • 1317 2 days ago

        well yt-dlp would also be banned surely? so it's not an issue anyway

    • bArray 2 days ago

      My concern is either that QuickJS is something like 100x slower, or that even when using Deno, the download experience will be insanely slow.

      In my mind, an acceptable time for users might be 30 seconds (somewhat similar to watching an ad). If QuickJS is taking >20 minutes, then it is some 40x slower? Seems very high?

      > QuickJIT (a fork of QuickJS that uses TCC for JIT) might yield better results, but still slower than Deno.

      Interesting, not come across it before. Running C code seems like an insane workaround from a security perspective.
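
      The 40x figure above can be sanity-checked with trivial arithmetic, taking both numbers (the >20-minute QuickJS report and the proposed 30-second "ad-length" budget) straight from the thread:

```python
# Back-of-envelope check of the numbers quoted above.
quickjs_seconds = 20 * 60   # reported lower bound for QuickJS
budget_seconds = 30         # acceptable time proposed in the comment

slowdown = quickjs_seconds / budget_seconds
print(slowdown)  # → 40.0, matching the "some 40x slower" estimate
```

      Note this is 40x the proposed budget; the gap versus Deno itself would be even larger, since Deno presumably finishes well under 30 seconds.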

  • ronsor 2 days ago

    It's horrifying and Google must've worked very hard to kill the performance in other interpreters.

    • lucianbr 2 days ago

      The brightest minds (or something close) working hard to make computation slower and more difficult, so that someone can profit more.

  • darknavi 2 days ago

    That is interesting. We use QuickJS in Minecraft (Bedrock, for modding) and while it's much slower than V8 it's not _that_ much slower.

apetresc 2 days ago

The writing is on the wall for easy ripping. If there's any YT content you expect you'll want to preserve for a long time, I suggest spinning up https://www.tubearchivist.com/ or something similar and archiving it now while you still can.

  • wintermutestwin 2 days ago

    I agree and feel that the time is now to archive all of the truly valuable cultural and educational content that YT acquired through monopolistic means.

    This solution looks interesting, but I am technical enough to know that this looks like a PITA to set up and maintain. It also seems to be focused on downloading everything from a subbed channel.

    As it is now, with a folder of downloaded videos, I just need a local web server that can interpret the video names and create an organized page with links. Is there anything like this that is very lightweight, with a next-next-finish install?
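
    For the "folder of videos → organized page with links" case, a stdlib-only script may be enough; a minimal sketch (the extensions, port, and layout here are arbitrary choices, not an existing tool):

```python
# Minimal local video index: scan a folder, write index.html, serve it.
# Python stdlib only. Titles come straight from filenames.
import html
import sys
from functools import partial
from http.server import SimpleHTTPRequestHandler, ThreadingHTTPServer
from pathlib import Path

VIDEO_EXTS = {".mp4", ".mkv", ".webm"}

def build_index(names):
    """Turn a list of video filenames into a bare-bones HTML listing."""
    items = "\n".join(
        f'<li><a href="{html.escape(n)}">{html.escape(Path(n).stem)}</a></li>'
        for n in sorted(names)
    )
    return f"<html><body><h1>Videos</h1><ul>\n{items}\n</ul></body></html>"

def serve(video_dir):
    """Write index.html into video_dir and serve that folder on port 8000."""
    root = Path(video_dir)
    names = [p.name for p in root.iterdir() if p.suffix.lower() in VIDEO_EXTS]
    (root / "index.html").write_text(build_index(names))
    handler = partial(SimpleHTTPRequestHandler, directory=str(root))
    ThreadingHTTPServer(("127.0.0.1", 8000), handler).serve_forever()

if __name__ == "__main__" and len(sys.argv) > 1:
    serve(sys.argv[1])  # usage: python video_index.py ~/Videos
```

    Run it against the download folder and http://127.0.0.1:8000/index.html lists everything; regenerating the page on each start keeps it in sync with the folder.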

  • lyu07282 2 days ago

    They already had the proper-DRM tech for youtube movies for years, why didn't they already turn that on for all content?

    • trenchpilgrim 2 days ago

      It would break many millions of old consumer devices that no longer receive updates, like old smart TVs. They are waiting for that old device traffic to drop low enough before they can force more robust measures.

      You already need such things for certain formats.

    • occz 2 days ago

      It's not really a matter of just turning it on when it comes to the kind of scale that YouTube has on their catalogue. It's practically impossible to retranscode the whole catalogue, so you're more or less stuck with only doing it for newly ingested content, and even there the tradeoffs are quite large when it comes to actually having DRM.

      I think we can safely assume that the only content under DRM at YouTube today is the content where it's absolutely legally necessary.

      • immibis a day ago

        DRM is added at the last hop before the content is sent to the client. It has to be, because the key is different per client.

    • Mindwipe 2 days ago

      YouTube's delivery scale is enormous, and adding additional complexity if they don't have to is probably considered a no-no.

      But if they decide they have to, they can do it fairly trivially.

    • advisedwang 2 days ago

      YT probably HAD to put the DRM on in order to get the license deal with the studios. Nobody is twisting their arm for the rest of the catalogue, so other interests (wider audience, fewer server-side resources, not getting around to it) can prevail.

    • immibis a day ago

      They have to pay a technology license fee per month per protected title.

adzm 2 days ago

I was surprised they went with Deno instead of Node, but since Deno has a readily available single-exe distribution that removes a lot of potential pain. This was pretty much just a matter of time, though; the original interpreter in Python was a brilliant hack but limited in capability. It was discussed a few years ago for the YouTube-dl project here https://news.ycombinator.com/item?id=32793061

  • nicce 2 days ago

    Node does not have the concept of security and isolation that Deno has. There is a maintainer comment in the same thread.

    • doctorpangloss 2 days ago

      What evidence is there that Deno's "security and isolation" works?

      It's their application, yt-dlp can use whatever it wants. But they made their choices for stylistic/aesthetic reasons.

      • nicce 2 days ago

        What evidence says otherwise?

        Scripts run in V8 isolates, identical to Chrome. As for the rest, we can only trust it or review it ourselves, but it is certainly better than nothing in this context.

        • arbll 2 days ago

          Identical to Chrome except the part where Chrome uses os-level sandboxing on top. V8 exploits are common, Deno sandboxing by itself is not a good idea if you are executing arbitrary code.

          • nicce 2 days ago

            We are comparing to a situation where the alternative is nothing. Maybe we should just remove the locks from our doors because someone has picked a lock somewhere.

            • arbll 2 days ago

              I never said it was a poor choice in this specific context, but propagating the idea that Deno's sandboxing is safe and "basically the same security as Chrome" is wrong and can easily do damage the next time someone who has read this thread needs a way to execute untrusted JS.

              • nicce 2 days ago

                Someone who understands what V8 isolation means knows that it means per-isolate heaps and garbage collectors within a single process. I didn't claim that it includes Chrome's OS sandbox features too.

                But the use of V8 means that Deno must explicitly grant access (for V8) to networking and the filesystem - the foundations for sandboxing are there.

  • arbll 2 days ago

    The sandboxing features of Deno also seem to have played a role in that choice. I wouldn't overly trust that as a security layer but it's better than nothing.

    • hyperrail 2 days ago

      This is the first time I've heard of Deno so I'm only going by their Security & Permissions doc page [1], but it looks like the doc page at the very end recommends using system-level sandboxing as a defense in depth. This suggests that Deno doesn't use system sandboxing itself.

      To me this is a bit alarming as IIRC most app runtime libraries that also have this in-runtime-only sandboxing approach are moving away from that idea precisely because it is not resistant to attackers exploiting vulnerabilities in the runtime itself, pushing platform developers instead toward process-level system kernel-enforced sandboxing (Docker containers or other Linux cgroups, Windows AppContainer, macOS sandboxing, etc.).

      So for example, .NET dropped its Code Access Security and AppDomain features in recent versions, and Java has now done the same with its SecurityManager. Perl still has taint mode but I wonder if it too will eventually go away.

      [1] https://docs.deno.com/runtime/fundamentals/security/

      • arbll 2 days ago

        Deno is a V8 wrapper, the same JS engine as Chrome. Vulnerabilities are very common there, not necessarily because it's poorly designed but more because there are massive financial incentives in finding them.

        This plus what you mentioned is why I would never trust it to run arbitrary code.

        Now in the context of yt-dlp it might be fine, Google isn't going to target them with exploits. I would still prefer if they didn't continue to propagate "DeNo iS SaFe BeCauSe It HaS sAnDbOxInG" because I've seen projects that were actually executing arbitrary JS rely on it, thinking it was safe.

    • CuriouslyC 2 days ago

      Deno sandboxing is paper thin, last time I looked they had very simple rules. It's a checkbox feature. If you want isolation use WASM.

      • ndjddirbrbrbfi 2 days ago

        It doesn’t have granularity in terms of what parts of the code have what permission - everything in the same process has the same permission, but aside from that I’m not sure what you mean about it being paper thin. Certainly WASM is a great option, and I think it can facilitate a more nuanced capabilities model, but for cases like this AFAIK Deno should be secure (to the extent that V8 is secure, which Chrome’s security depends on).

        It being a checkbox feature is a weird way to frame it too, because that typically implies you’re just adding a feature to match your competitors, but their main competitors don’t have that feature.

        In what ways does it fall short? If there are major gaps, I’d like to know because I’ve been relying on it (for personal projects only myself, but I’ve recommended it to others for commercial projects).

        • arbll 2 days ago

          Chrome does not rely exclusively on V8's security or else it would routinely get exploited (See v8 CVEs if you don't believe me). The hard part of browser exploitation today is escaping from the os-level sandbox put on the processes that run each of your tabs.

          Trusting Deno's sandboxing by itself isn't a great idea. An attacker only has to wait for the next V8 exploit to drop, probably a question of a few months at worst.

          Now like I mentioned above it's probably ok in the yt-dlp context, Google isn't going to target it with an exploit. It's still important that folks reading this don't take away "deno sandbox safe" and use it the next time they need to run user-supplied JS.

        • CuriouslyC 2 days ago

          Last I looked it was just very basic pattern matching allow/deny with no real isolation, and there have been multiple real escapes already. It's better than nothing, and probably good enough for bush league security, but I wouldn't pitch it to my milspec customers.

          • yborg 2 days ago

            Why are your milspec customers downloading from YouTube? This is the Deno use case being discussed.

            • CuriouslyC 2 days ago

              Don't be reductive, people reading this aren't going to fence their opinion of Deno to the "use in YTDLP" case.

      • silverwind 2 days ago

        WASM cannot run JavaScript, unfortunately.

        • em-bee 2 days ago

          WASM can run a javascript interpreter or compiler. if isolation is the goal, that may even make sense.

  • Analemma_ 2 days ago

    Keep in mind that yt-dlp doesn't just support YouTube, which-- notwithstanding the claims of "all DRM is malware" etc.-- probably won't download actively harmful code to your computer: it also supports a huge number of video streaming sites, including some fairly obscure and sketchy ones. Sandboxing in the interpreter that's at least as good as what you'd get in a browser is a must, because by design this is doing untrusted code execution.

m_ke 2 days ago

I used to work on video generation models and was shocked at how hard it was to find any videos online that were not hosted on YouTube, and YouTube has made it impossibly hard to download more than a few videos at a time.

  • raincole 2 days ago

    > YouTube has made it impossibly hard to download more than a few videos at a time

    I wonder why. Perhaps because people use bots to mass-crawl content from YouTube to train their AI. And YouTube prioritizes normal users, who only watch a few videos at a time, over those crawling bots.

    Who knows?

    • m_ke 2 days ago

      I wonder how Google built their empire. Who knows? I’m sure they didn’t scrape every page and piece of media on the internet and train models on it.

      My point was that the large players have a monopoly hold on large swaths of the internet and are using it to further advantage themselves over the competition. See Veo 3 as an example: YouTube creators didn't upload their work to help Google train a model to compete with them, but Google did it anyways, and creators didn't have a choice because all eyeballs are on YouTube.

      • raincole 2 days ago

        > how Google built their empire. Who knows

        By scraping every page and directing the traffic back to the site owners. That was how Google built their empire.

        Are they abusing the empire's power now? In multiple ways, such as the AI overview stuff. But don't pretend that crawling Youtube and training video generation models is the same as what Google (once) brought to the internet. And it's ridiculous to expect Youtube to make it easy for crawlers.

  • fibers 2 days ago

    you have to feed it multiple arguments with rate limiting and long wait times. i am not sure if there have been recent updates other than the js interpreter but i've had to spin up a docker instance of a browser to feed it session cookies as well.
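
    A hedged sketch of what those arguments look like through yt-dlp's Python embedding API. The option names (`ratelimit`, `sleep_interval`, `max_sleep_interval`, `cookiefile`, `proxy`) are documented `YoutubeDL` params, but the specific values below are guesses, not recommendations:

```python
# Sketch: polite throttling options for yt-dlp's embedding API.
# Option names come from yt-dlp's documented YoutubeDL params; the
# numeric values are arbitrary examples.
def polite_opts(cookie_file=None, proxy=None):
    opts = {
        "ratelimit": 500 * 1024,   # cap download speed at ~500 KiB/s
        "sleep_interval": 5,       # sleep 5-30 s between downloads
        "max_sleep_interval": 30,
        "retries": 3,
    }
    if cookie_file:
        opts["cookiefile"] = cookie_file   # browser session cookies, as above
    if proxy:
        opts["proxy"] = proxy              # rotate per batch externally
    return opts

def download(urls, **kwargs):
    import yt_dlp  # pip install yt-dlp; imported lazily to keep sketch standalone
    with yt_dlp.YoutubeDL(polite_opts(**kwargs)) as ydl:
        ydl.download(urls)
```

    Proxy rotation itself has to happen outside yt-dlp, e.g. by calling `download` per batch with a different `proxy` value each time.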

    • m_ke 2 days ago

      Yeah we had to roll through a bunch of proxy servers on top of all the other tricks you mentioned to reliably download at a decent pace

      • trenchpilgrim 2 days ago

        What are your thoughts on the load scrapers are putting on website operators?

        • immibis a day ago

          What are your thoughts on the load website operators are putting on themselves to block scrapers?

  • aihell 2 days ago

    [flagged]

    • CaptainOfCoit 2 days ago

      Unusually well-argued post, hard to disagree with...

      What exactly is the problem? That they worked on video generation models? That they only used YouTube? That they downloaded videos from YouTube? That they downloaded multiple videos from YouTube?

AndyKelley 2 days ago

This is very related to a talk I did last year [1]. "Part 2: youtube-dl" starts at 18:21. It dips toes into an analysis about software that fundamentally depends on ongoing human labor to maintain (as compared to, e.g. zlib, which is effectively "done" for all intents and purposes).

More concretely, the additional Deno dependency is quite problematic for my music player, especially after I did all that work to get a static, embeddable CPython built [2].

Ideally for me, yt-dlp would be packaged into something trivially embeddable and sandboxable, such as WebAssembly, calling into external APIs for things like networking[3]. This would reduce the value delivered by the yt-dlp project into pure DRM-defeating computation, leaving concerns such as CLI/GUI to a separate group of maintainers. A different project could choose to fulfill those dependencies with Deno, or Rust, or as in my case, built directly into a music player in Zig.

Of course I don't expect the yt-dlp maintainers to do that. They're doing something for fun, for free, for pride, for self-respect... in any case their goals aren't exactly perfectly aligned with mine, so if I want to benefit from their much appreciated labor, I have to provide the computational environment that they depend on (CPython[4] and Deno).

But yeah, that's now going to be a huge pain in the ass because I either have to drop support for yt-dlp in my music player, or additionally embed Deno as well as introduce Rust as a build dependency... neither of which I find acceptable. And don't even get me started on Docker.

[1]: https://www.youtube.com/watch?v=SCLrNqc9jdE

[2]: https://github.com/allyourcodebase/cpython

[3]: https://ziglang.org/news/goodbye-cpp/

[4]: https://github.com/yt-dlp/yt-dlp/issues/9674

MangoToupe 2 days ago

At some point we’re going to need a better place to put videos than YouTube. The lack of any democratization of bulk storage is beginning to be a real problem on the internet.

Yes, we have archive.org. We need more than that, though.

I’m sure there’s some distributed solution like IPFS but I haven’t seen any serious attempt to make this accessible to everyday people.

  • coldpie 2 days ago

    > The lack of any democratization of bulk storage is beginning to be a real problem on the internet.

    There are many thousands of paid hosting services, feel free to pick one. It turns out hosting TB of data for free is a pretty tricky business model to nail down.

    • superkuh 2 days ago

      There have been plenty of free distributed hosting services for the web that worked perfectly (popcorn time, etc, etc). It's just that every time they become popular they are attacked legally and shut down. The problem is not technical, or even resource based, the problem is legal. Only a mega-corp can withstand the legal attacks.

      And even if the legal attacks could be mitigated most people would still use youtube because they're there for the money (or for people who are there for the money). They are not there for a video host. Youtube enables distribution of money and there's no way that any government would let any free system distribute money without even more intense legal, and indeed physically violent, attacks.

  • zenmac 2 days ago

    There are: PeerTube, Odysee, Minds, Rumble, BitChute (WebTorrent)...

    It's the same reason people just can't get off IG: network effect, and in YT's case a lot of disk space and bandwidth.

    • MangoToupe 2 days ago

      I don’t think network effect matters much if you’re not trying to advertise the content. Organizations can just link to it from their site.

      I admit I haven’t looked into peertube, and I didn’t think that rumble was any better than YouTube. I don’t recognize the others. Thank you; I’ll resurvey.

      • zenmac 2 days ago

        yeah there are alternatives for sure, but it takes time to discover them. But many search engines offer video search, so it may just be a good idea to start building a public index of all the videos.

        And it is 2025; the HN crowd here can usually just deploy their video to a CDN. Many businesses also just host their own videos.

        BTW, forgot to mention that Odysee's underlying protocol is https://lbry.com

        And seems like there are past article about it on HN: https://news.ycombinator.com/item?id=24594663

        • MangoToupe a day ago

          > And it is 2025; the HN crowd here can usually just deploy their video to a CDN. Many businesses also just host their own videos.

          This is a very bad standard and not indicative of the capabilities of the average small-to-medium business.

          Nonetheless, I agree it's more complicated than I made it seem—YouTube is not an insurmountable problem for a determined actor.

  • bob1029 2 days ago

    If you want to compete with YT you need to basically build AWS S3 in your own data centers. You'd have to find a way to make your service run cheaper than google can if you wanted to survive. You'd have to get very scrappy and risky. I'd start with questions like: how many 9s of durability do we actually need here? Could we risk it until the model is proven? What are the consequences for losing cat videos and any% speed runs of mario64? That first robotic tape library would be a big stepwise capex event. You'd want to make sure the whole thing makes sense before you call IBM or whoever for a quote.

    • ndriscoll 2 days ago

      Games Done Quick has raised 10s of millions for charity. I suspect they could raise a few thousand for a few dozen TB of nvme storage if they wanted to host a speedrun archive.

      • warkdarrior 2 days ago

        YouTube gets 700,000 hours of video uploaded every day. That's 4.3 PB added per day. You may need more than a few dozen TB... https://www.reddit.com/r/AskProgramming/comments/vueyb9/how_...
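
        Taking both quoted figures as given, a quick back-of-envelope check shows they are at least mutually consistent:

```python
# Sanity-check the quoted figures: 700,000 hours/day vs. 4.3 PB/day.
hours_per_day = 700_000
bytes_per_day = 4.3e15          # 4.3 PB (decimal petabytes)

seconds_of_video = hours_per_day * 3600
avg_bitrate_mbps = bytes_per_day * 8 / seconds_of_video / 1e6
print(round(avg_bitrate_mbps, 1))  # → 13.7 Mbit/s implied average
```

        ~13.7 Mbit/s is in the right ballpark for consumer 1080p uploads, so the two numbers hang together.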

        • ndriscoll 2 days ago

          They don't get 700,000 hours of any particular niche though, so it's easy enough for small groups to compete with youtube for their needs.

        • archargelod 2 days ago

          That's because YouTube allows almost anything SFW to be hosted on their platform, without any limits.

          I can imagine that if they added rate-limiting, e.g. 30GB per IP per week, it would have reduced the amount of crap, literal white noise, and spam/scam videos uploaded to YouTube by several orders of magnitude. Another strategy: if a video doesn't get 1000 views after a week, it's deleted.

    • jsheard 2 days ago

      > If you want to compete with YT you need to basically build AWS S3 in your own data centers. You'd have to find a way to make your service run cheaper than google can if you wanted to survive.

      YouTube's economy of scale goes way beyond having their own datacenters, they have edge caches installed inside most ISP networks which soak up YT traffic before it even reaches a Google DC. It would take a staggering amount of investment to compete with them on cost.

  • mschuster91 2 days ago

    The problem with bulk storage is that it will be abused at large scale.

    CSAM peddlers, intellectual property violators, nonconsensual sexual material ("revenge porn"), malware authors looking for places to exfiltrate stolen data, propagandists and terrorists — the list of abusers is as long as it is dire.

    And for some of these abuser classes, the risk for any storage service is high. Various jurisdictions require extremely fast and thorough responses for a service provider to not be held liable, sometimes with turnaround times of 24 hours or less (EU anti terrorism legislation), sometimes with extremely steep fines including prison time for responsible persons. Hell, TOR exit node providers have had their homes raided and themselves held in police arrest or, worse, facing criminal prosecution and prison time particularly for CSAM charges - and these are transit providers, not persistent storage.

    And all of that's before looking at the infrastructure provider side. Some will just cut you off when you're facing a DDoS attack, some will bring in extortionate fees (looking at you, AWS/GCE/Azure) for traffic that may leave you in personal bankruptcy. And if you are willing to take that risk, you'll still face the challenge of paying for the hardware itself - storage isn't cheap. 20TB will be around 200€, and you want some redundancy and backups, so the actual cost will rather be 60-100€/TB, plus the ongoing cost of electricity and connectivity.

    That's why you're not seeing much in terms of democratization.

    • MangoToupe 2 days ago

      Maybe that’s true, but YouTube is just absolutely miserable to use in every way. There’s got to be better options.

      • mschuster91 a day ago

        Well, you're welcome to create one if you have the money for it and the appetite for getting your home raided by the FBI because some moron used your service to promote terrorism or CSAM.

        Youtube can get away with its shit service and utter lack of sensible moderation simply by being under Google's roof, and because of the effort required to start up a competitor.

  • pmdr 2 days ago

    > I’m sure there’s some distributed solution like IPFS

    Almost 25 years on the internet and I have not been able to download anything from IPFS. Does one need a PhD to do so?

    • johnisgood a day ago

      Same. Are we missing information, or is it really stagnating and not being utilized for whatever reasons?

  • a96 11 hours ago

    Also, archive.org is in magaland, so that is a very endangered service.

  • reaperducer 2 days ago

    I keep seeing ads on TV for Photobucket (which I thought was dead) for 1TB of storage for either free or $5, depending on the ad.

    Maybe there is an opportunity for that company to expand.

progbits 2 days ago

Can anyone explain specifically what the YT code does that the existing python interpreter is unusable and apparently quickjs takes 20 minutes to run it?

Is it just a lot of CPU-bound code and the modern JIT runtimes are simply that much faster, or is it doing some trickery that deno optimizes well?

  • progbits 2 days ago

    From https://github.com/ytdl-org/youtube-dl/issues/33186

    > Currently, a new style of player JS is beginning to be sent where the challenge code is no longer modular but is hooked into other code throughout the player JS.

    So it's no longer a standalone script that can be interpreted but it depends on all the other code on the site? Which could still be interpreted maybe but is a lot more complex and might need DOM etc?

    Just guessing here, if anyone knows the details would love to hear more.

    • zenmac 2 days ago

      Yeah, my guess is that Google is using spaghetti code to protect their YT moat.

    • Chris2048 2 days ago

      Could something like tree-shaking be used to reduce the player code to just the token generating bit? Or does the whole player js change for each video?

    • zelphirkalt 2 days ago

      Sounds like a really silly way to engineer things, but then again Google has the workforce to do lots of silly things and the cash to burn, so they can afford it.

      • Klonoar 2 days ago

        It's silly from an engineering perspective, but unfortunately clever from YT's perspective of "how do we complicate this as much as possible".

  • ACCount37 2 days ago

    YouTube is mining cry-

    I mean, running some unknown highly obfuscated CPU-demanding JS code on your machine - and using its results to decide whether to permit or deny video downloads.

    The enshittification will continue until user morale improves.

    • johnisgood a day ago

      And at the same time we are against websites mining crypto. At this point they could do that, too...

zahlman 2 days ago

Noteworthy to me: deno is MIT licensed, but PyPI distributions (at least the ones I checked) include neither license nor source code. It's normal for pre-built distributions ("wheels") to contain only the Python code (which for a project like this is just a small bootstrap used to find the compiled executable — it doesn't appear to be providing any Python API), but they should normally still have a LICENSE file.

It's also common to have the non-Python (here, Rust) source in source distributions ("sdists"), but this project's sdist is only a few kilobytes and basically functions as a meta-package (and also includes no license info). It "builds" Deno by detecting the platform, downloading a corresponding zip from the GitHub releases page, extracting the standalone Rust executable, and then letting Hatchling (a popular build tool in the Python ecosystem) repackage that in a wheel.
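A rough sketch of what such a meta-package's build step amounts to (illustrative Python only; the function name `release_asset` and the asset map are my assumptions, not denop's actual code, though the zip names follow Deno's release naming scheme):

```python
# Illustrative sketch (not the actual denop build script): at build time,
# detect the platform, pick the matching GitHub release asset, and unpack
# the standalone executable so the build backend (e.g. Hatchling) can
# repackage it into a platform-specific wheel.
# In the real flow, platform.system()/platform.machine() would supply the key.

ASSETS = {
    ("Linux", "x86_64"): "deno-x86_64-unknown-linux-gnu.zip",
    ("Darwin", "arm64"): "deno-aarch64-apple-darwin.zip",
    ("Windows", "AMD64"): "deno-x86_64-pc-windows-msvc.zip",
}

def release_asset(system: str, machine: str) -> str:
    """Return the release zip name for a platform, or raise if unsupported."""
    try:
        return ASSETS[(system, machine)]
    except KeyError:
        raise RuntimeError(f"no prebuilt deno for {(system, machine)}")

# The real script then downloads this zip from the releases page and
# extracts the `deno` binary; that part is omitted to keep the sketch offline.
print(release_asset("Linux", "x86_64"))
```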

Update: It turns out that the Python package is published by a third party, so I submitted an issue (https://github.com/manzt/denop/issues/1) to ask about the licensing.

  • zahlman 2 days ago

    (Update 2: the distribution has been updated. Thanks to Trevor Manz for an unexpectedly prompt response!)

sedatk 2 days ago

2045:

"yt-dlp needs a copy of your digitized prefrontal cortex in order to bypass Youtube's HumanizeWeb brain scanner"

  • grishka 2 days ago

    This assumes that YouTube will still exist in 2045.

    • int_19h a day ago

      I think it's safe to say that DRM and ads will still exist either way, so...

sphars 2 days ago

It will be interesting to see how this affects the numerous Android apps on F-Droid that are essentially wrappers around yt-dlp to create a YouTube Music clone.

ivanjermakov 2 days ago

Can we remove the heart-dropping mystery from the title? My first thought was that Google had made it more difficult to download from YouTube.

"yt-dlp moves to Deno runtime"

  • Fabricio20 2 days ago

    Google is making it harder to download from Youtube. Your first thought is correct! Every other website that yt-dlp supports doesn't require this change. Additionally, yt-dlp is still written in python, it has not moved to deno. They are only adding a deno dependency for the javascript challenges added by youtube.

    • ivanjermakov 2 days ago

      I get that, but the title is still too "loud".

  • latexr 2 days ago

    > "yt-dlp moves to Deno runtime"

    That makes it seem like yt-dlp itself was rewritten from Python to JavaScript (for those who even know it’s Python) or that it used to use Node and now uses Deno.

nromiun 2 days ago

TIL that you can run frontend JavaScript with a package like Deno. I thought you needed a proper headless browser for it.

  • bob1029 2 days ago

    I was thinking the same walking into this thread. I figured DOM/CSS/HTML would be part of the black box magic, but I suppose from the perspective of JS all of that can be faked appropriately.

  • skydhash 2 days ago

    I think you only need something like `jsdom` to have the core API available. The DOM itself is just a tree structure with special nodes. Most APIs are optional, and you can provide stubs if you're targeting a specific website. It's not POSIX-level.

    • johnisgood a day ago

      I would like to know more about this. I had some web scrapers in Perl but they no longer work. :(

      • immibis a day ago

        The state of the art is to remote-control a real browser now. Defeats all not-a-real-browser checks. You can even click on the cloudflare captchas.

Alifatisk 2 days ago

The lengths Youtube has gone to to make it impossible to download videos. At the same time, Tiktok allows anyone to download a video with just a right click

  • latexr 2 days ago

    On the other hand, I can navigate, search, and watch any video on YouTube without an account. With TikTok, I can’t even scroll the page without tricks.

  • DecentShoes a day ago

    Nah, the uploader can turn that off if they want, which they tend to do on any popular video. I've resorted to screen recording, but even that is blockable with the (bullshit, shouldn't exist) "can't screenshot this due to security" API that exists on mobile operating systems now

    • immibis a day ago

      You can point a camera at the screen, at least.

      Once upon a time (around 2000) they tried to fix this by making cameras illegal except for licensed photographers.

  • doublerabbit 2 days ago

    With the recent forced sale of TikTok to Rupert, Larry and co, I doubt that's going to be a thing for much longer; they will want to make money somehow.

sharperguy 2 days ago

Why can youtube not just give a micropayments backed API? Just charge a few cents per video download and be done with it.

  • eitau_1 2 days ago

    meanwhile Youtubers: a penny per view would be 10x what Youtube pays us

    https://www.youtube.com/watch?v=3nloigkUJ-U&t=4851s

    • kccqzy 2 days ago

      The YouTube RPM (revenue per mille) strongly depends on the location of the audience and the topic of the video. It could be anywhere from $0.5 to $20. That 10x figure could very well be true for that YouTuber, but it's also true that other YouTubers already earn more than a penny per view.

  • trenchpilgrim 2 days ago

    They do. It's called YouTube Premium.

    • yreg 2 days ago

      It's not though. You can't download an mp4 to use however you wish with YouTube Premium. And definitely not via an API.

      • trenchpilgrim 2 days ago

        None of that was mentioned in the comment?

        • exe34 2 days ago

          did you miss the word "API"? it was there.

    • pmdr 2 days ago

      AFAIK Premium allows you to download to persistent browser storage. But is it DRM-free/open or usable format?

sombragris 2 days ago

Many Linux distros have Firefox's JavaScript (SpiderMonkey?) runtime independently packaged and available. Can it be used for this?

  • silverwind 2 days ago

    Yes, Spidermonkey can be run standalone, and it would probably be much more secure than Deno because it does not have all the server-related APIs.

AbuAssar 2 days ago

on why they chose Deno instead of node:

"Other JS runtimes (node/bun) could potentially be supported in the future, the issue is that they do not provide the same security features and sandboxing that deno has. You would be running untrusted code on your machine with full system access. At this point, support for other JS runtimes is still TBD, but we are looking in to it."

  • codedokode 2 days ago

    While deno has sandboxing, it also has potential access to hundreds of dangerous functions; it might be better to write a tiny wrapper around a JS engine that adds only a function to write to stdout.

    • AbuAssar 2 days ago

      Deno blocks access to the network, storage, and environment variables by default

      • codedokode 2 days ago

        A bare JS interpreter doesn't have that access at all: it is provided by native functions added by the wrapper (like deno), so there is nothing to block.
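To illustrate that point with a toy Python analogy (NOT a real sandbox; Python's `exec` is trivially escapable, this only shows the capability idea): guest code can call only what the host explicitly hands it.

```python
# Toy analogy (not a real sandbox): an embedded interpreter only "has"
# the capabilities its host registers for it. Here the guest namespace
# gets a single print-like hook and nothing else, the way a bare JS
# engine only gets the native functions its wrapper chooses to expose.
output = []

def guest_print(value):
    output.append(str(value))

# Empty __builtins__: names like open aren't "blocked", they don't exist.
guest_env = {"__builtins__": {}, "print": guest_print}

exec("print(6 * 7)", guest_env)             # allowed: the one exposed hook
assert output == ["42"]

try:
    exec("open('/etc/passwd')", guest_env)  # fails: no such capability
except NameError as err:
    print("blocked:", err)
```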

feverzsj 2 days ago

That's why youtube is so buggy and slow.

rcarmo 2 days ago

So, instead of using something lightweight and embeddable like QuickJS, they opted for Deno? Nothing specifically against it, just seems... overkill

  • Waraqa 2 days ago
    • rcarmo 2 days ago

      There are other embeddable JS engines out there.

      • ohdeargodno 2 days ago

        And you're welcome to implement them and submit them as a PR.

        The yt-dlp contributors are not in the business of writing or finding JS runtimes, they're trying to download videos.

Havoc 2 days ago

Really feels like the somewhat open nature of yt is running on borrowed time

  • hackingonempty 2 days ago

    First you spend money to create something people really want and build a big user base.

    Then you open it up to third party businesses and get them tied to your platform, making money off your users.

    Once locked in you turn the screws on the businesses to extract as much money from them as possible.

    Finally you turn the screws on the users to extract every last bit of value from the platform before it withers and fades into irrelevance.

    • akudha 2 days ago

      What you say is true for most companies/software, but YouTube can play a nasty game for a very long time before it withers into irrelevance (if at all). They have an enormous moat; one would need enormous resources to take on YouTube, and I don't think anyone has that kind of patience or resources to even attempt it. Like it or not, we are stuck with YT for a while.

      I have learned so much from YouTube - I wish it was more open and friendly to its creators and users :(

      In the meantime, all we can do is support smaller alternatives like https://nebula.tv/

hiccuphippo 2 days ago

I wonder if the youtube phone app also needs a JS runtime, or if it's able to bypass the JS requirements somehow.

NewPipe will probably need to add a JS runtime too.

  • gamer191 2 days ago

    yt-dlp dev here

    The Android app uses an API which does not require a JS runtime, but it does require a Play Integrity token. The iOS app uses an API which is assumed to require an App Attest token.

    Also, neither API supports browser cookies, which is a necessity for many users.

  • simlevesque 2 days ago

    Any Android app can run code inside a WebView without adding anything.

novoreorx 2 days ago

Surprisingly, Deno was chosen as the first JavaScript runtime due to its security features. I thought it was almost dead, as Bun is growing very quickly among developers.

zb3 2 days ago

Fortunately the community is not alone in this fight, because many AI companies need to be able to download YT videos. But they should sponsor yt-dlp more directly..

icyfox 2 days ago

The folks at deno have really done a fantastic job at pushing a JS runtime forward in a way that's more easily plug and play for the community. I've used `denoland/deno_core` and `denoland/rusty_v8` quite a bit in embedded projects where I need full JS support but I can't assume users have node/bun/etc installed locally.

Not surprised to see yt-dlp make a similar choice.

arbll 2 days ago

I wonder if we're going to see JS runtime fingerprinting attempt from google now

  • jeroenhd 2 days ago

    I doubt it'd be difficult for Google to detect if the client is a browser or not. They already need to check for signals of abnormal use to detect things like clickfarms and ad scams.

    • cxr 2 days ago

      > detect if the client is a browser

      User agents are like journalists: there's no such thing as pretending to be one.

      If someone writes their own client and says, "This is a browser", then it is one.

  • dishsoap 2 days ago

    Have they not done this for years and years already?

  • rs186 2 days ago

    Ah, JavaScript Run-time Integrity checks!

zelphirkalt 2 days ago

What I found much more annoying, and so far have not been able to work around, is that yt-dlp requires you to have a YouTube account, something that I have not had for a decade or so, and am unwilling to create again.

What tool can I use to simply store what my browser receives anyway, in a single video file?

  • a96 11 hours ago

    yt-dlp has never required an account. If it looks like that, either you're seeing an error from YouTube itself (not yt-dlp) claiming that, or you're running some sort of scam version instead of the real thing.

  • degamad 2 days ago

    When did it start requiring one? It didn't require one the last time I used it a few months ago...

    • ACCount37 2 days ago

      Google started using IP range blocks recently. If they decide that your IP stinks, they'll block YouTube viewing and demand that you log in.

      It's inconsistent as fuck, and even TOR exit nodes still work without a log in sometimes.

      • astroflection 2 days ago

        I can confirm this. I guess they didn't like me using Invidious.

        • ACCount37 2 days ago

          That's bad enough for normal VPN users who use VPN for privacy reasons. But a lot of countries have heavily censored web, and not using a VPN is simply not an option there.

          Good on Google for kicking people while they're down.

    • zelphirkalt 2 days ago

      I think for me it has been this way for a year or so. Maybe it is because I am on a VPN. I also cannot view YouTube videos on YouTube any longer, because it always wants me to log in, to "prove I am not a bot". So I have switched to only using invidious instances, and if they don't work, then I just cannot watch the video.

      I wish content creators would think of their own good more, and start publishing on multiple platforms. Are there any terms that YouTube has for them, that reduce revenue, if they publish elsewhere as well? Or is it mostly just them being unaware?

      • Telaneo 2 days ago

        It's a VPN thing, or a "you've been downloading too much and we're rate-limiting you" thing.

  • skydhash 2 days ago

    It must be a pretty recent (as in added yesterday) addition, as I was watching youtube with mpv+yt-dlp.

  • 2OEH8eoCRo0 2 days ago

    > What tool can I use to simply store what my browser receives anyway, in a single video file?

    This. I'm interested in such a tool or browser extension.

  • crtasm 2 days ago

    I'm using it right now without a youtube account.

zelphirkalt 2 days ago

How will this JS execution be contained/isolated? Do we have to run it inside a VM, or containers?

  • quux 2 days ago

    They are running the JS in Deno, a sandboxed JS runtime.

BolexNOLA 2 days ago

What are folks' thoughts on jdownloader2 these days? Hell, is that still kicking?

  • tommy92 2 days ago

    Yeah, still my go-to for youtube. Working as well as ever so far.

scosman 2 days ago

Great ad for deno. I hit a similar one the other day from pydantic. They make an MCP server for running sandboxed Python code, and that's how they did it: Python compiled to WASM, and the WASM running in deno.

dandiep 2 days ago

I've been using yt-dlp to download transcripts. Are there alternatives that don't require going through all these hoops? I'm guessing no.

  • a96 8 hours ago

    I thought transcripts broke a long time ago. Are they working again?

beyondcompute 2 days ago

Why won’t they use my browser for downloads, for example through TestCafe? That would also allow downloading premium quality (for subscribers) and so on.

  • rfl890 2 days ago

    I think you can get premium formats through the --cookies-from-browser flag

buyucu 2 days ago

I was scared this morning when yt-dlp did not work, but a git pull fixed it.

A huge thank you to the yt-dlp folks. They do amazing work.

charcircuit 2 days ago

This change has been long overdue. The web player has been broken with yt-dlp for such a long time.

cakealert 2 days ago

Why are they using the web target? YouTube has multiple other attack vectors which have no javascript barriers.

Plenty of devices have YouTube players which are not capable of being updated and which must keep working; exploit those APIs.

  • gamer191 2 days ago

    "Attack vectors" is a very interesting choice of words. Yt-dlp is literally using a public API for its intended purpose (accessing videos). The only difference is how yt-dlp is delivering the videos to the user. Probably as much of an "attack" as user-agent spoofing or using browser extensions.

    But to answer your question, no, there aren't any suitable APIs (I've looked into it). They all either require JavaScript (youtube.com and the smart tv app) or require app integrity tokens (Android and iOS). Please let me know if you know something I don't?

    • cakealert 2 days ago

      What about the smart TVs? There have to be a lot of them, do all of them run JS?

      Also what kind of environments are executing the JS? If Google begins to employ browser fingerprinting that may become relevant.

      • gamer191 2 days ago

        Youtube’s tv app is actually just a website (youtube.com/tv, although you need a tv user agent). So yeah, I think most tvs are using JavaScript, and the rest use the tvlite api, which has fewer formats than web_safari (which will continue to work in yt-dlp without Deno if you’re willing to accept 1080p downloads with inferior codecs)

      • int_19h a day ago

        They have been using the older APIs kept around for the benefit of those smart TVs for a very long time, but things move on and newer TVs get fancier hardware and more full-featured software, which includes YouTube, and so Google has started proactively dropping support.

tomalaci 2 days ago

Looks like this runtime is written in Rust. Really does seem like Rust is rapidly swallowing all kinds of common tools and libraries. In this case a single compiled binary for multiple architectures is quite convenient for something like yt-dlp.

chrsw 2 days ago

The whole point of YouTube (now) is that it’s a market for human attention and behavior. Eyeballs, engagement, tracking and analytics. They will go to great lengths to protect and increase the value of this market.

jhatemyjob 2 days ago

Might as well start an effort to rewrite the whole project in Javascript at this point

ddtaylor 2 days ago

I would be really interested in hearing what Mike Hearn has to say about this. AFAIK he was responsible for something very similar that was used at Google for other products that had some overlap.

nunobrito a day ago

This morning youtube was already breaking the functionality of yt-dlp.

To solve, just upgrade on linux using:

pip install -U "yt-dlp[default]"

kgdbx 2 days ago

That seems like a lot of dev work; why not just run it in the browser? There are extensions that work pretty well, like Tubly downloader and Video DownloadHelper.

anthk 2 days ago

I refuse to run JS on my n270 netbook, even less so with a proprietary license. Thus, I will just use some invidious mirror.

shmerl 2 days ago

Will Debian package Deno for it?

lxgr 2 days ago

> Up until now, yt-dlp has been able to use its built-in JavaScript "interpreter" [1]

Wow, this is equal parts fascinating and horrifying.

Edit, after looking into it a bit: It seems like a self-contained build of deno weighs in at around 40 MB (why?), so I can see why they tried to avoid that and appreciate the effort.

[1] https://github.com/yt-dlp/yt-dlp/blob/2025.09.23/yt_dlp/jsin...

whywhywhywhy a day ago

Honestly, the time is now to start building a P2P mirror on top of yt-dlp so the videos only need to be scraped a few times. Within 4 years it's obvious ads will be burned into the video file or yt-dlp will no longer function at all, and then it will be too late to mirror YouTube; the content will be locked up and eventually lost.

porphyra 2 days ago

At this rate they are just gonna have to ship a whole web browser with it lol.

jokoon 2 days ago

Youtube is a victim of its own success

I don't promote piracy, but it seems that it's easier to download music from youtube than via torrents, which is quite surprising.

Who expected that such a big company would contribute to piracy?

phplovesong 2 days ago

Youtube is the real monopoly. Creators are also slaves, as they can't monetize elsewhere and can't let their users download their own content. And the icing on the cake is that youtube is unbearable without an ad-blocker, and even with one youtube has started throttling ad-block users.

It's such a shithole, with no real replacement. Sad state of affairs.

  • mavhc 2 days ago

    why can't they monetize elsewhere?

    • trenchpilgrim 2 days ago

      > Here's the problem (and it's not insurmountable): right now, there's no easy path towards sustainable content production when the audience for the content is 100x smaller, and the number of patrons/sponsors remains proportionally the same.

      https://www.jeffgeerling.com/blog/2025/self-hosting-your-own...

      • mavhc a day ago

        So they're not paying YouTube but get free advertising for their product from it, which brings in 100x more users than elsewhere? Seems like an OK deal.

    • HankStallone 2 days ago

      Some do, and those who are able to make the move to patronage or subscriber monetization seem much happier for it. But that's most viable for creators who have already built up a customer base, which usually started on YouTube. It's much harder if you start out somewhere else.

    • layer8 2 days ago

      Much, much, much smaller audience elsewhere.

      • pessimizer 2 days ago

        And if the audiences got larger on a site, governments around the world would decide together to drag them into court and keep them there until they closed down or sold to Ellison's kid.

sylware a day ago

Alright, it means youtube is soon gone for me: deno is just a front-end to the abominations of the web engines from the whatng cartel. It would be a good compromise if I could have such an engine which I could build with a simple C compiler. There are modern javascript engines implemented in C, but no web engine, "la crème de la crème" of software abominations.

Just a few weeks/months ago, gogol search was blocked for noscript/basic (x)html browsers (I have witnessed the gogol agenda on this unfold over the last few years).

Will use yt-dlp (zero whatng) until it breaks for good, I guess.

The US administration failed to regulate youtube's market domination with enforced simple and stable-in-time technical standards (which Big Tech hates). I don't blame them, since those nasty guys are smart borderline crime lords (and for sure serial offenders in the EU).

Are there any other, non-Big-Tech ways to access Sabine H.'s content? Or should I say goodbye right now?

  • int_19h a day ago

    The change includes the ability to point yt-dlp at a Deno or Node binary of your own choice, which, in principle, allows someone to use a different runtime so long as it provides everything the script needs to run successfully.

    • sylware 9 hours ago

      What?

      How could you miss the point that much?

      • int_19h 13 minutes ago

        > It would be a good compromise if I could have such an engine which I could build with a simple C compiler. You have modern javascript engines implemented in C, but no web engine

        I'm merely telling you that you can do exactly that.

weq 2 days ago

I started this fight with youtube back when it was called google video, because I wanted to watch content but my 28.8kbps modem didn't have the grunt to do it in one session.

When I started getting 100,000s of downloads a day, google updated their html and blocked me from their search engine. I did the cat and mouse a few times, but in the end it wasn't worth it.

Glad to see the legacy still lives on :D

https://gvdownloader.sourceforge.net/

  • Squarex a day ago

    Cool. But Youtube was never called Google Video. Youtube was acquired by Google, and Google Video was a separate site.

oybng 2 days ago

Amazing how posts critical of Google quickly fall off the front page during North American hours.

elcapitan 2 days ago

Is there an official name for this endless uphill battle? Counter-Enshittification?

  • layer8 2 days ago

    Cleaning the Augean stables.

UltraSane 2 days ago

I would pay for YouTube if Google created the best possible search engine they could for it. I'm talking inverted and semantic indexing of every word of every video with speaker tagging and second level timestamping. I want to be able to runs queries like "Give me the timestamps of every time Veritasium said math while wearing a blue shirt."

syrusakbary 2 days ago

I wonder if they could use Wasmer to execute Javascript under the hood without limitations.

zeristor 2 days ago

I did download YouTube videos a few years ago, I did value that YouTube could keep your place.

But it’s a real mess: it keeps crashing, something I might too humbly put down to me having too many files, but passive-aggressively put down to YouTube on iPad not having a limit on the amount of storage space it uses.

On the other hand, there are a number of amazing videos I’ve downloaded to watch which have been remotely wiped. Grrr

maxlin 2 days ago

> Note that installing Deno is probably not necessary if you don't want to, only downloading it. Just like yt-dlp, Deno is available as a fully self-contained single executable from their GitHub releases: https://github.com/denoland/deno/releases.

> Yeah, you can just extract the zip and put deno.exe in the same folder as yt-dlp

I hope they just make this automatic if it truly becomes necessary. yt-dlp not having requirements, or having them built in, isn't just "convenient"; I think there are users who wouldn't use the tool without that simplicity. Most people really don't like fighting dependencies and try to avoid it.

eth0up 2 days ago

SABR

  • jumpocelot 2 days ago

    "In 2025, YouTube started rolling out a new streaming protocol, known as SABR, which breaks down the video into smaller chunks whose internal URLs dynamically change rather than provide one whole static URL. This is problematic because it prevents downloaders (such as yt-dlp) from being able to download YouTube videos at resolutions higher than 360p due to only detecting format code 18 (which is the only format code available that doesn't use SABR). So far, this issue has only affected the web client, so one workaround would be to use a different client, such as tv_embedded (where SABR has not yet been rolled out to), so for instance in yt-dlp you could add --extractor-args "youtube:player_client=tv_embedded" to use that client. It is not known how long this workaround will work as intended, as YouTube rolls out SABR to more and more clients."

    https://wiki.archiveteam.org/index.php/YouTube/Technical_det...

    • sphars 2 days ago

      Thanks for the comment; OP just threw out "SABR" like we're all supposed to know what it means.

      • eth0up 2 days ago

        Sorry, I saw the submission (no votes and aging), upvoted it and left the comment thinking the post would die. But someone thankfully did what I should have.

  • adzm 2 days ago

    This is unrelated to the JavaScript challenge this post is about, and a very specific technology for video streaming. SABR means "server-side adaptive bitrate" and is a bespoke video streaming protocol that Google is moving towards, away from the existing DASH protocol. There is some info here https://github.com/LuanRT/yt-sabr-shaka-demo

  • tomalaci 2 days ago

    You need at least 5 letters for Wordle.

  • VladVladikoff 2 days ago

    What does the Society of American Baseball Research have to do with this?

  • bontoJR 2 days ago

    Sneak Attack By Roger?

  • pluc 2 days ago

    it's pronounced sabray

oybng 2 days ago

more dependency bloat just to deobfuscate some randomly generated bs that's increasingly complex for no reason and has no value existing in the first place, much like its creators

BoredPositron 2 days ago

[flagged]

  • antiloper 2 days ago

    [flagged]

    • BoredPositron 2 days ago

      I know the definition precisely, which is why I used that word. If you disagree with my usage, please explain why their actions don't constitute extortion.

      Here's what happened: They leveraged their platform to get 60,000 people to sign a petition.

      Using this petition and the resulting legal proceeding, they forced the entire community's hand by framing it as "the last chance" to get the trademark released, which it is solely because of their actions.

      They unilaterally made it the "last chance" without seeking even minimal input from the community they claim to represent. Not even OpenJS. Now they're demanding 200k, citing the difficulty of proceeding alone and suggesting they might not be able to succeed without this funding.

      This is textbook extortion: creating artificial urgency, leveraging community pressure, and demanding payment under the threat that their stated goal will fail without it.

      You don't need enemies with friends like that.

DrStartup 2 days ago

none of it will matter soon. anything you want to see or watch will be dynamically generated just for you. enders game is here.

  • hooverd 2 days ago

    why would I want that?

phendrenad2 2 days ago

Good to see the mice are still winning the cat-and-mouse game. Selfishly, I kind of want the cat to start to win, to satisfy my curiosity. I predict that if YouTube ever actually blocked downloading, a YouTube competitor that supports downloading would start to immediately gain popularity. I want to know if I'm right about that, and there's no way to test unless Google actually starts to win. Go, Google, go! I believe in you!

  • amlib 2 days ago

    I suspect that if youtube ever fully blocks video downloads you will start to see a lot of piracy groups and file sharing communities providing youtube content.

  • codedokode 2 days ago

    Those who download videos are a minority and targeting minorities will never give you exponential growth. Furthermore, the same minority probably abuses ad blockers so it would be difficult to squeeze a single cent from these freeloaders.

    • mejutoco 2 days ago

      > targeting minorities will never give you exponential growth

      Serving a niche is a very good way to start with many products. It is even common gospel in startups. With the rest I agree.

RiverCrochet 2 days ago

A friend of mine recorded a YouTube video using OBS. She had to do some minor edits on it and could not use her system during the recording, but it worked. I told her to stop it, as that is infringing on the creator's copyright and is an assault on the nation's digital economy. She hasn't recorded a video since, at least not that I know about. I feel good about making sure YouTube can reasonably profit off of creators' content since they give away the storage and access for free.

  • smiley1437 2 days ago

    Instructions on Vine-glo grape concentrate during prohibition: "Do not place the liquid in this jug and put it away in the cupboard for twenty-one days, because then it would turn into wine."

    • RiverCrochet 2 days ago

      I had another friend who simply recorded YouTube videos with his smartphone. As a zealous law-abiding citizen, I immediately smacked the phone out of his hand and lectured him on how copyright law is the foundation of the Information Age, which is the future, and disregarding it is an affront to modern life and civilization. I made him delete all his videos, and even made him hand-write letters of apology to the YouTube creators. These creators don't reveal their home addresses, but I'm sure they appreciated the emails containing the scans of the handwritten letters.

      We have an old SCSI scanner, so it took about as long to scan it as it did to write it.

trilogic 2 days ago

No requirements for me. I don't use YT at all :) There are plenty of better alternatives.

  • VladVladikoff 2 days ago

    My brother sent me a long talk on YouTube and pleaded with me to listen to it. Watching was pointless; the video was just talking heads sitting in chairs. However, you can't just play a video and turn off your phone while listening to the audio on headphones: the mobile browser sleeps and the audio stops. So I used yt-dlp to rip the audio and dropped it into my Plex server to listen to with Prologue. It wasn't even about the ads, I just wanted to do some gardening and listen to something on headphones while I worked, without my phone screen on.
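    For reference, the audio-only rip described above is a one-liner with yt-dlp (the URL is a placeholder; converting to m4a requires ffmpeg on the PATH):

    ```shell
    # -x extracts audio only; --audio-format converts it via ffmpeg;
    # -o names the output file after the video title.
    yt-dlp -x --audio-format m4a \
      -o "%(title)s.%(ext)s" \
      "https://www.youtube.com/watch?v=VIDEO_ID"
    ```

    The resulting file can then be copied into a Plex music or audiobook library as usual.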

    • ndriscoll 2 days ago

      Firefox Mobile has an extension, "Video Background Play Fix", that disables the Page Visibility API anti-feature.
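      The anti-feature in question looks roughly like this (a minimal sketch, not YouTube's actual code; the function name and the injected-document parameter are illustrative):

      ```javascript
      // Sketch of the Page Visibility API anti-feature: pause every <video>
      // element whenever the tab is hidden. Written as a function so the
      // document object can be passed in; a page would call pauseWhenHidden(document).
      function pauseWhenHidden(doc) {
        doc.addEventListener("visibilitychange", () => {
          if (doc.hidden) {
            doc.querySelectorAll("video").forEach((v) => v.pause());
          }
        });
      }
      ```

      Extensions like the one above work by preventing the page from ever seeing the `hidden` state, so playback continues with the screen off.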

    • jaffa2 2 days ago

      on iPhone, if you use YouTube in the browser instead of the app (as you should), then you can do background listening if you play the video, lock the phone, unlock the phone, play the video again, lock the phone, unlock the phone, resume play with the media controls, lock the phone.

  • blacklion 2 days ago

    I'm not watching YouTube, I'm watching video creators. There isn't even a worse alternative if the person you want to watch doesn't publish video on any other site.

    Maybe there are alternatives for watching a "recommended" stream without any subscriptions (which ones? I can't name good ones, anyway), but if you watch your subscriptions you are bound to the platform that hosts them. And no, content creators are not interchangeable.

  • exitb 2 days ago

    It's obviously not about YT the product, but about YT the content library. I don't think there are better alternatives to that content library.

  • frizlab 2 days ago

    until someone shares a video with you

  • exe34 2 days ago

    any recommendations?

    • trilogic 2 days ago

      Dailymotion, Vimeo, etc. No ads, no BS; it feels like freedom again. And if they change, others will replace them.

      • exe34 2 days ago

        And how do you watch content on dailymotion and vimeo when the content creators only post them on YouTube?

        • trilogic a day ago

          > And how do you watch content on dailymotion and vimeo when the content creators only post them on YouTube?

          I just don't watch it. My time is too precious to lose it on ads and nonsense policies.

          • exe34 a day ago

            I agree. I'll stick to YouTube while yt-dlp works, but at some point it won't be worth the effort. Still, let's not pretend the alternatives are comparable.