destroy-2A 2 years ago

Try working in banking for 20 years, stuck behind at least one layer of Citrix, living in Citrix inception. There's latency on every keystroke; your brain starts adding latency where there is none, compensating for a life lived wearing Citrix latency goggles.

  • jiggawatts 2 years ago

    I was a "Citrix consultant" for about two decades.

    I'd walk into customer sites for the first time, meet people, and within minutes they would start ranting about how bad Citrix is.

    I suspect only dentists get this kind of feedback from customers before a procedure.

    Having said that, 99% of the time the problem boils down to this:

    The guy (and it is a guy) signing the cheques either doesn't use Citrix OR uses it from the head office with the 10 Gbps link.

    The poor schmuck in the backwater rural branch office on a 512 Kbps link shared by two dozen staff gets no say in anything, especially not the WAN link capacity.

    I've seen large distributed orgs that were 100% Citrix "upgrade" from 2 Mbps WAN links to 4 Mbps to "alleviate network congestion" in an era where 100 Mbps fibre-to-the-home is standard. With 2 Mbps you can watch PDF documents slooooowly draw across the screen, top-to-bottom, line by line. Reminds me of the 2400 baud days in the early 90s, downloading the first digital porn and eagerly watching the pixels fill the screen.

    Don't blame Citrix. Blame the bastard in the head office that doesn't give a f%@$ about anyone not him.

    • acdha 2 years ago

      I agree in general but I do blame Citrix for some foot-guns. The Citrix admins at my employer have never figured out how to configure it to get keyboard latency below ~120ms (on a gigabit LAN), and the silly health meter always reports the connection as excellent. This is mostly on them - in classic enterprise IT thinking, if it’s not down your job is done - but I’m somewhat disappointed that it’s even possible to configure it to have latency twice that of a modem.

      • zasdffaa 2 years ago

        120ms should feel immediate. IIRC anything under 300ms feels instant.

        • babypuncher 2 years ago

          This is just flat-out wrong. Any seasoned gamer can feel a difference of a few tens of milliseconds.

          300ms would render most video games unplayable.

          I see this claim a lot, and it makes me want to build a website that gives you some common interactions (moving a mouse cursor, pressing a button) with adjustable latency, so people can see just how big an impact seemingly small amounts of lag have on how responsive something feels.
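
          In the meantime, here's a minimal desktop sketch of the idea in Python/Tk - a toy stand-in for that website, entirely my own illustration rather than an existing tool:

              # toy latency playground: echoes printable keystrokes after an adjustable delay
              import tkinter as tk

              root = tk.Tk()
              root.title("Latency playground")

              delay_ms = tk.IntVar(value=120)  # injected latency in milliseconds
              tk.Scale(root, from_=0, to=500, orient="horizontal",
                       label="added latency (ms)", variable=delay_ms).pack(fill="x")

              text = tk.Text(root, height=10)
              text.pack(fill="both", expand=True)

              def delayed_insert(event):
                  if event.char and event.char.isprintable():
                      # swallow the immediate echo and re-insert the character later
                      text.after(delay_ms.get(), lambda c=event.char: text.insert("insert", c))
                      return "break"

              text.bind("<Key>", delayed_insert)
              root.mainloop()

          Even at the ~120ms mentioned upthread, the disconnect between finger and glyph is hard to miss.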

          • rollcat 2 years ago
            • Izkata 2 years ago

              After using xterm for years, I don't like gnome-terminal anymore because its lag while typing has become noticeable. It's right around 30ms on this site, and xterm around 10-20ms.

            • qu4z-2 2 years ago

              This is great, thanks. I'll have to remember it next time someone makes that bizarre claim.

          • petercooper 2 years ago

            Just running my display at 60Hz vs 30Hz is enough. The pointer feels extremely laggy at 30Hz, despite that being a higher refresh rate than a movie.

            • account42 2 years ago

              Movies always get brought up in framerate discussions but they are a completely different beast compared to interactive computer applications because

              a) movies are not interactive so latency is not a concern, only fluidity is

              b) movies come with pre-applied motion blurring to hide the low framerate (which is different from fake motion blur applied in some games)

              c) 30 FPS is atrocious even for movies and I wish higher framerate movies had gotten more common

            • chris37879 2 years ago

              30 vs 45 fps on my Steam Deck feels night-and-day different; it's amazing how much small jumps like that can help.

          • andrewflnr 2 years ago

            Then have an estimation challenge mode, where it picks a random latency and you have to guess within 50ms what it is. Seriously though, that sounds both fun and useful.

          • Fire-Dragon-DoL 2 years ago

            Back when I played League of Legends, 300ms latency meant "your ISP is having problems today and you cannot play". Anything above 70 was considered very bad.

          • Scramblejams 2 years ago

            Sounds excellent. I would send that link around to a lotta people.

        • zasdffaa 2 years ago

          That was a bad post. The figure of 300ms was from memory. I guess it's complex, but for fast-paced games like shmups (https://www.pubnub.com/blog/how-fast-is-realtime-human-perce...):

          "

          ...for Massive Multiplayer Online Gaming (MMOG), real-time is a requirement.

          As online gaming matures, players flock to games with more immersive and lifelike experiences. To satisfy this demand, developers now need to produce games with very realistic environments that have very strict data stream latency requirements:

              300ms < game is unplayable
              150ms  < game play degraded 
              100ms < player performance affected
              50ms   > target performance
              13ms    > lower detectable limit
          
          "

          But this is real-time gaming. Typing should be less demanding, I'd think.

          Edit: also https://stackoverflow.com/questions/536300/what-is-the-short...

          • eloisant 2 years ago

            > Typing should be less demanding, I'd think.

            Not really, unless you're the kind of person who works in COBOL and is used to typing with latency.

            I've seen COBOL developers just ignore the latency and keep typing, because they know what they've typed and it doesn't matter that it's slow to show up on screen.

            • toast0 2 years ago

              Working with latency like that also requires the system to be predictable. If you're expecting autocomplete but aren't confident in what it'll show, you've got to wait; if you're not sure whether input will be dropped when you type ahead too much, you've got to wait. If you need to click on things, especially if the targets change, there's lots of waiting.

              If the system works well, yeah, you can type all the stuff, then wait for it to show up and confirm. 'BBS mode' as someone mentioned.

            • NoGravitas 2 years ago

              > I've seen COBOL developers just ignore the latency and keep typing, because they know what they've typed and it doesn't matter that it's slow to show up on screen.

              I used to do that (not in COBOL), typing into a text editor in a terminal over a 2400-baud modem. Like the other commenter said, you get used to it, but it requires a certain predictability in your environment that you don't get in modern GUIs.

          • viridian 2 years ago

            Generally I think of it in terms of the number of frames @ 60 fps.

            Below one frame (16.66ms), whether any real feedback is even received (let alone interpreted by the brain) becomes a probability density function. Each additional frame after that adds more and more kinesthetic friction, until you become completely divorced from the feedback around 15-20 frames.
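
            In concrete numbers (pure arithmetic on the 60 fps figure above):

                # milliseconds of lag for a given number of frames at 60 fps
                FRAME_MS = 1000 / 60
                for frames in (1, 5, 10, 15, 20):
                    print(f"{frames:2d} frames = {frames * FRAME_MS:.0f} ms")
                # 1 frame = 17 ms ... 15 frames = 250 ms, 20 frames = 333 ms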

          • flutas 2 years ago

            Just a heads up for others trying to read this, I think the < and > are backwards.

        • acdha 2 years ago

          That’s off by about an order of magnitude – highly skilled humans can see and react in less than 120ms. One thing which complicates discussion here is that there are several closely related measures: how quickly you can see, understand, and react is slower than just seeing, which is slower than seeing a change in an ongoing trend (that’s why you notice stutter more than isolated motion). There are also differences based on the type of change (we see motion, contrast, orientation, and color at different latencies due to how signals are processed, starting in the cortex and progressing through V1, V2, V3, V4, etc.) and on how focused you are on the action (e.g. watching to see whether a bird moves is different from seeing the effect of something you’re directly controlling). Audio is generally lower latency than visual, too.

          All of this means that the old figures are not useful as a rule of thumb unless your task is exactly what was studied. This paper notes how unhelpful that is, with ranges from 2-100ms! They found thresholds around 25ms for some tasks but as low as 6ms for others.

          https://www.tactuallabs.com/papers/howMuchFasterIsFastEnough...

          Keyboard latency is one of the harder ends of this spectrum: the users are focused, expecting a strong (high contrast, new signal) change in direct response to their action, and everything is highly trained to the point of being reflex.

          When I’m typing text outside of games, I’m not waiting for the screen to change before hitting the next key, but rather expecting things like text to appear as I type or the cursor to move. A while back I tested this: VSC’s ~15ms key-to-character latency was noticeably smoother than 80+ms (Atom, Sublime), and the Citrix system I tested at 120-150ms (Notepad is ~15ms normally) was so much slower that it forced a different way of thinking about it (for me, that was “like a BBS”, because I grew up in the 80s).

          n.b. I’m not an expert in this but worked in a neuroscience lab for years supporting researchers who studied the visual system (including this specific issue) so I’m very confident that the overall message is “it’s complicated” even if I’m misremembering some of the details.

        • mm007emko 2 years ago

          Not my experience. 300ms is noticeable and very annoying. 120ms does not feel instant to me.

        • NoGravitas 2 years ago

          300ms is a "long press" on a key on Android, and an eternity on an actual keyboard.

        • dan-robertson 2 years ago

          The parent comment may be talking only about the network or Citrix components in the critical path. You also have to wait for keyboard input (often 10s to many 10s of ms) and for double-buffering or composition: you might get updates and render during frame T, flip buffers to reach the OS compositor for frame T+1, and have the compositor take another frame to render that and send it to the screen for frame T+2 (this is a bad case for a compositor, but you may be paying the double-buffering or flip latency twice). And it can take a while for modern LCD screens to process the inputs (changes towards the bottom of the screen take about a frame longer to display) and to physically switch the pixels.

          120ms end-to-end without Citrix would be quite achievable on many modern systems (older systems, and programs written for them, were often not powerful enough to do some of the things that add latency to modern systems). So once Citrix adds its 120ms, we already get well past your ‘feels immediate’ number.
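
          As a back-of-the-envelope sketch (my own illustrative numbers, not measurements of any particular system):

              # rough end-to-end latency budget at 60 Hz; every figure is illustrative
              FRAME_MS = 1000 / 60  # ~16.7 ms per frame

              budget_ms = {
                  "keyboard scan + USB polling": 15,
                  "app update + render (frame T)": FRAME_MS,
                  "compositor (frame T+1)": FRAME_MS,
                  "scan-out + pixel response (frame T+2)": FRAME_MS,
              }
              local = sum(budget_ms.values())
              print(f"local total:        {local:.0f} ms")  # ~65 ms before any remoting
              print(f"plus 120 ms Citrix: {local + 120:.0f} ms")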

          But I think you’re also wrong in that e.g. typing latency can be noticeable even if you don’t observe a pause between pressing a key and the character appearing. If I use Google Docs[1], for example, I feel like I am having to move my fingers through honey to type - the whole experience just feels sluggish.

          [1] this is on a desktop. On the iPad app I had multiple-second key press-to-display latency when adding a suggestion in the middle of a medium-sized doc.

        • pleb_nz 2 years ago

          Divide those figures by 10 and you might be closer to accurate. 120ms is quite noticeable. I know because I need to adjust latency out of Bluetooth headphones for recording. Recording with those latencies is a disaster; it's very, very noticeable with sound, let alone vision.

          • zasdffaa 2 years ago

            While my post was wrong, in fairness the context was specifically about keyboard entry, nothing to do with audio. I suppose I should have been explicit about that.

            • pleb_nz 2 years ago

              What I meant to say is that, in my experience, visual and tactile things like typing have even stricter timing tolerances. If a delay is noticeable in audio, the same delay will be at least as noticeable, if not more so, visually.

        • imtringued 2 years ago

          We aren't talking about website loading speeds. This is about how quickly your mouse cursor moves in response to mouse movements, and that latency needs to be 16ms or less.

          Personally I can get latency down to 200ms over the internet into a remote datacenter with WebRTC. The challenge in practice is that running without a GPU will eventually starve the CPU, because intensive things like playing 1080p video at 60fps aren't feasible on a CPU-only machine. That CPU load then slows down the video encoder and the overall responsiveness (no, responsiveness doesn't mean a mobile layout here) of the remote desktop.

        • matheusmoreira 2 years ago

          Anything above 50 ms is absolutely noticeable and should be considered a bug.

        • Nimitz14 2 years ago

          Under 100ms feels immediate. More doesn't.

    • P5fRxh5kUvp2th 2 years ago

      I recently had a bit of a rant about security people and how 70% of the truly dumb decisions in our industry can be attributed to them.

      Your description is exactly why. Security people wedge themselves into the halls of power and then start making decisions that don't actually negatively affect them all that much.

      I've literally seen a CISO who insisted everyone work in a way they themselves did not.

      • jonfw 2 years ago

        Sadly, the job of a CISO typically isn't "make the most pragmatic decisions possible to keep our infrastructure secure and running smoothly". In many industries, it's more like "join as many compliance programs as possible to expand the ability to capture revenue from regulated markets".

        The CISO didn't make the decision to enforce password rotation - the compliance programs your sales team asked for did.

        • P5fRxh5kUvp2th 2 years ago

          To your point, password rotation is considered an insecure practice because it causes people to append 1, 2, 3, etc to the same password.

          But I've seen so many companies that still insist on it.

          • selykg 2 years ago

            I'm the IT guy for a new non-profit. We aren't separated yet from the company that created us, but we're in the process of separating. I get to decide all this fun stuff.

            When I started, I had a very brief talk with the IT team of the larger parent company and, coming from a security background, explained why this password rotation thing is stupid. They wanted nothing of it. Set in their ways.

            For the new non-profit that I'm helping spearhead, I'm not sure I'll get away from the password rotation entirely, but I can certainly set it to something more reasonable, like every 365 days, rather than every 60 days or whatever travesty most are dealing with. I'm pretty pleased about this.

            • zmgsabst 2 years ago

              NIST agrees, as of their update a few years ago.

              > Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically). However, verifiers SHALL force a change if there is evidence of compromise of the authenticator.

              https://pages.nist.gov/800-63-3/sp800-63b.html#memsecret

              • acdha 2 years ago

                This is a really useful thing to keep in mind: even if you aren't directly bound by a requirement to follow the NIST standards, being able to point your policy people at them is handy, because it shifts the conversation to “bring our policy in line with NIST” and raises the question of whether they'll later look bad for _not_ having done so. Typically these conversations are driven by risk aversion, and things like federal standards help balance that perspective.

              • selykg 2 years ago

                Thanks for the direct link; putting this in my back pocket for when the discussion inevitably takes place.

            • als0 2 years ago

              Aside from being a very questionable practice, password rotation can actually cause productivity loss. In a big organisation like mine it can take up to 48 hours for a password change to synchronise across all the internal services. There's also the issue where some endpoint software still uses the old password behind the scenes and fails to log in too many times - locking your account. I guess you can see my frustration coming through.

              • structural 2 years ago

                I had the joy of dealing with some endpoint software like this in an organization that had mandated password changes every 30 days. Very predictably, people set recurring "change your password" reminders for the 1st of the month and the organization lost an entire day of productivity each month as they locked themselves out of their accounts en masse. So the beginning of the month was always a panicked, all-hands-on-deck day for the help desk as people were waiting on hold for hours to get their account unlocked.

          • rmccue 2 years ago

            Our penetration testers suggested we add password rotation, and I had to quote them the latest NIST guidelines which state "Verifiers SHOULD NOT require memorized secrets to be changed arbitrarily (e.g., periodically)."

            If they don't know better, it's not surprising other companies don't either.

          • KronisLV 2 years ago

            > To your point, password rotation is considered an insecure practice because it causes people to append 1, 2, 3, etc to the same password.

            A good solution to discourage this would be heuristics that make sure the new password isn't too similar to the old one, but doing that without having plaintext in there somewhere is pretty difficult.
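
            A minimal sketch of such a heuristic (it can only run at change time, when both values briefly pass through the server in plaintext - which is exactly the difficulty; the 0.8 threshold is arbitrary):

                # naive similarity check at password-change time; both values exist in
                # plaintext only for the duration of this one request, never at rest
                from difflib import SequenceMatcher

                def too_similar(old: str, new: str, threshold: float = 0.8) -> bool:
                    ratio = SequenceMatcher(None, old.lower(), new.lower()).ratio()
                    return ratio >= threshold

                assert too_similar("Summer2023!", "Summer2024!")     # rejected
                assert not too_similar("Summer2023!", "v8#Qr!zLp2")  # accepted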

            Another solution would be mandating that all passwords be randomly generated, but enforcing that would be difficult: everyone who isn't used to having 99% of their account information in KeePass databases with randomly generated passwords would probably find it too cumbersome to remain productive.

            This seems like a people problem that makes being secure essentially impossible, due to how people use passwords (e.g. "I just use one password across X sites because remembering multiple ones is too difficult" or "I just add a number at the end of my current password").

            And others also mentioned the productivity loss when people are slowed down by the need to change their passwords. You might easily rotate Let's Encrypt certificates thanks to automation, but when it comes to people, things aren't so easy.

            At that point, you might just stick with whatever passwords you have, do some dictionary checks in the future, maybe have infrequent password rotation and otherwise stack on more mechanisms, like TOTP through whatever application the user has available, or another means of 2FA, because relying just on passwords isn't feasible.

          • mr_toad 2 years ago

            > causes people to append 1, 2, 3, etc to the same password

            It’s either that or they write them down. Because people are going to forget a password that changes every month, especially a password that has to comply with the complexity rules.

        • patrakov 2 years ago

          > The CISO didn't make the decision to enforce password rotation - the compliance programs your sales team asked for did.

          And it's the CISO's job to resist unnecessary overcompliance that exists just for the happiness of the sales team.

          • plonk 2 years ago

            You don’t make the company lose business just because compliance is unnecessary. You’ll (rightly) get overruled every time.

      • renewiltord 2 years ago

        Isn't that just a characteristic of how they're evaluated? Any security error is the CISO's fault, "heads must roll", etc

        Given that, they're likely to give you what you are asking from them: a brick with no functionality which will do nothing. You can't do anything with Brick, but Brick has zero outstanding CVEs

        • greedo 2 years ago

          CISO often stands for Chief Sacrificial Officer...

    • prepend 2 years ago

      It seems to me that the reason so many bad enterprise solutions are bought is that the buyer is not the user. It’s such a funny thing to me that people would spend tons of money without firsthand experience, or at least someone they trust using it.

    • dboreham 2 years ago

      I've never used Citrix, but I remember when I had a T-1 (1.544 Mbit/s, for the younglings) and left a Remote Desktop session open on a laptop. Some days later I went back to the laptop and used it for an hour before I realized I was in an RDP session to a machine in another state. I wonder what Citrix screwed up to make their UX so different. Of course, a decent T-1 back then probably had better latency than today's consumer HFC connection.

      • sgerenser 2 years ago

        Yeah, the T1 easily had enough bandwidth to smoothly send the 800x600, 16-bit color desktop you were probably running at the time (guessing the timeframe based on the T1). Frame-to-frame diffs were probably much easier as well, with fewer shadows and graphical effects than modern Windows or Linux DEs have.

        I don’t doubt Citrix has gotten worse as well but the job it had to do back then was much easier.

    • alpentmil 2 years ago

      > Don't blame Citrix. Blame the bastard in the head office that doesn't give a f%@$ about anyone not him.

      > The guy (and it is a guy) signing the cheques either doesn't use Citrix OR uses it from the head office with the 10 Gbps link.

      If you were sure about this, then as the consultant you could have put this sentence (or this entire comment) on the first page of your PowerPoint/PDF, to make sure other HN-ers are happy!

    • icedchai 2 years ago

      How long ago was this? 4 megabits would be pretty good... back in 1998!

      • acdha 2 years ago

        This _very_ much depended on where you were. I had symmetric 10Mbps at home in 1998, but when we moved to New Haven in 2008, Verizon couldn't deliver more than ISDN / T1 to large chunks of the city (we literally could have used a WiFi antenna to hit their regional headquarters, too). There's so much deferred maintenance around the world.

        • icedchai 2 years ago

          True, true! I had a 3-megabit cable modem at home back in 1998 (3 megabits down, 128 kbits up, if I recall).

          My office at the time had dual T1's... a little over 3 megabits shared with roughly 500 people.

      • marcosdumay 2 years ago

        The last time I saw a place migrate its remote offices off a sub-10Mb/s network was around 2015. That same place replaced its mainframe in 2011 because of an enormous price hike.

  • just_boost_it 2 years ago

    I quit a job because of Citrix. Exactly like you said: very noticeable latency. It ate into my productivity, as a part of my mental energy was going into waiting for feedback to my actions to appear on screen.

    • raxxorraxor 2 years ago

      > part of my mental energy was going into waiting for feedback to my actions to appear on screen

      This should not be underestimated. I was in a situation like this and I thought my short-term memory had stopped working. I would forget what steps I had already done, because some actions took 10-15 seconds; I often switched to another task in the meantime and could not recollect the last step I had done 10 seconds earlier. Such delays are poison for any intellectual task that requires concentration.

      There is no excuse for any modern device to make such pauses. It is also far too expensive for any company: hardware is too cheap to justify making users wait.

      • just_boost_it 2 years ago

        That's exactly it. Instead of tasks going "1, 2, 3" in my head, it was more like "1, ..., 1, ..., 1". I had to keep reloading every task into my working memory, with lots of brief pauses to think "did that click register?" or "when I typed those words, was the focus on that text box?". It's a truly torturous level of friction.

    • lovehashbrowns 2 years ago

      I didn’t deal with Citrix, but I did have to frequently SSH into cruise ships at a job some years ago. Goodness, was the latency frustrating beyond belief. I didn’t last more than 6 months at that job.

      Every single command input/keystroke could take 2-5+ seconds to display on my screen. Imagine trying to troubleshoot something critical in that type of environment. Luckily, I didn't encounter anything truly critical; it was mostly maintenance tasks and such.

    • hirvi74 2 years ago

      > part of my mental energy was going into waiting for feedback to my actions to appear on screen.

      Sounds like my life as a developer too. =D

  • omh 2 years ago

    [ Disclaimer - I am responsible for a Citrix environment, but I'm reasonably proud of how well it works for our company ]

    The technology behind remote desktops is fundamentally limited but I'm amazed at how good the user experience can be on a modern well-configured Citrix environment.

    - The protocol responds well even on low bandwidth as long as latency is OK. On the office LAN it feels like a local computer.

    - There is offloading for Teams[1], media streams[2] and even entire web browsers[3]. The tech behind this is impressive and it works pretty well (mostly!)

    - For most staff it's easier to use a thin client or a minimal laptop.

    - I can keep the Citrix environment patched and managed much more easily than a proliferation of laptops and home devices.

    It can be a struggle at times and it's definitely not the right fit for developers. But it's got a lot of advantages and most of the time it works amazingly well.

    [1] https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/m... [2] https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/m... [3] https://docs.citrix.com/en-us/citrix-virtual-apps-desktops/m...

  • bonyt 2 years ago

    I am at a law firm that uses a remote system like that. I've definitely gone two Citrixes deep for some things, so I feel this.

    Honestly, though, it’s better than the laptops other firms have given me. One took over 10 minutes to boot, iirc. It wasn’t just the hardware; there was just so much … stuff: multiple layers of antivirus seemingly hooking all of the system calls and fighting with each other, and a document management system with blocking I/O everywhere that was somehow so embedded in Windows that it could freeze the whole system.

    The thin client setup may have latency, but at least it is convenient and it gets there eventually. Though I would swear it’s getting slower, or maybe my patience is waning.

  • kmarc 2 years ago

    O.o Rarely see people like us here! :-)

    For me, what worked was a setup where I used an Arch Linux laptop, ran f5vpn in Docker, and used the Citrix client, with some tweaks, through that VPN connection.

    It was a lot faster than my colleagues' Mac / Win client, and even better, it was automatable to start up and run everything.

    Ha! I did document this beautiful setup: https://github.com/kmARC/f5vpn-in-docker

    • irusensei 2 years ago

      My employer blocks Linux clients for whatever reason. Even if you pass the initial checks, there is some kind of system on the remote desktop that detects your local setup and kicks you out.

      So I use a KVM Windows machine with Virtio drivers. QXL seems to be the best video solution.

  • assttoasstmgr 2 years ago

    Does everyone here just suffer from exceptionally shitty IT departments? I've used Citrix for years and not experienced any of the chronic issues described here. Remember, Citrix was developed in the 1990s - the days of Windows NT 3.5/4.0 [1] and dial-up connections - to function well in low-bandwidth environments (we're talking kilobits here, people; a 10 Mbps LAN was considered glorious at the time). For years ICA was superior to RDP due to its better compression over such connections. It sounds more like whoever set up your environments didn't know what he was doing, and the results are what you would expect.

    [1] https://www.youtube.com/watch?v=SNJiWPU4HEU

    • nikau 2 years ago

      Citrix performance can depend a lot on the apps - older Win32 apps work really well because object caching masked the latency of window frames and buttons. Newer apps somehow seem to make that caching much less effective.

  • joshxyz 2 years ago

    Traumatically well written lol

  • sshagent 2 years ago

    Ergh! You're giving me PTSD. I still recall those days, *shudder*. Best of luck with that.

  • AshamedCaptain 2 years ago

    You simply get used to it. In many industry sectors (think CPU architects), multiple layers of inception are the norm (crossing multiple operating systems), and it is not strange for a keystroke to take 2 seconds, or for a menu to take 10 seconds to open and finish rendering. This "experience" is probably the reason I can still comfortably work over a DSL link with plain networked X (even though I still find NX much more comfortable).

    You really just adapt your way of interacting, and start planning every one of your actions more carefully instead of simply clickety-clacketing everywhere as if you were trying to win a game of Starcraft. It's practically subconscious and it really changes you.

    I always think it must be much, much worse for blind people.

    It also reminds me of people who complain that 5-minute build times "impair their productivity". How do you even work on _any_ mid-sized commercial codebase then? It's not that uncommon for a build to take hours (e.g. games), and in engineering it is also not that uncommon for builds to take _days_, even on powerful server farms.

    • ilaksh 2 years ago

      You're a CPU architect and you wait 2 seconds for a keystroke? And you stay in that job? You must be one of the dumbest geniuses I have ever met.

      That's absolutely ludicrous that anyone would be expected to work that way.

      • imtringued 2 years ago

        There is a long queue of geniuses waiting for this genius to quit.

  • flanbiscuit 2 years ago

    This is me right now, but only for a short time (I hope). I'm at an agency, currently on my first-ever banking client. I'm on a Mac but use Citrix Viewer to access a Windows 10 machine. The part I dislike most is the context switching between Mac and Windows. First off, Windows doesn't natively let me customize the keys (I can't install anything, obviously - it's a bank client). Also, for some reason the alt key doesn't work in Citrix Viewer, so I have to change a lot of my usual VSCode shortcuts to some custom ones. I've googled the issue, and some people on Mac use a program called Karabiner[1], but I didn't want to install yet another program; I'm just dealing with it for now.

    Our agency has another banking client that I hear sends you a laptop; I'd much rather have that.

    [1] https://karabiner-elements.pqrs.org/

  • irusensei 2 years ago

    Hah, now imagine using Teams through Citrix Workspace.

    One thing I've learned about Citrix is that it's a startup company with limited resources to handle all the bugs and crusty corpocrapware layers. The client craps on my HDR setup. It installs a ton of crap you don't need, and it relies on crap like HDX software running on your machine, which last time I checked didn't have ARM binaries; the tech is also unavailable in the iOS clients. Meanwhile, RDP can do semi-decent multimedia stuff without any of this.

  • roywiggins 2 years ago

    Maybe I'm immune to it, or just lucky, but two hops (logging in at home to a Citrix network desktop, then Remote Desktop to the PC in my office) has been shockingly fine. I live very close to the servers in question, though, so speed of light isn't a limiting factor, and I have solid and reasonably fast home internet. It can work fine.

  • m463 2 years ago

    Don't worry, I'm sure that will all be folded into Microsoft Teams soon :)

    More seriously, I'm reminded of something a friend always said.

    You need a response time of 1/10 of a second or less for something to feel interactive. I remember that, but I wonder if the brain papers over the lag the way it ignores your blind spot.

  • AtNightWeCode 2 years ago

    I worked with Citrix for years at different customers. I think it is more about the setup: server capacity, bandwidth and so on. Often Citrix was used for connecting to a bastion or a jump host, with an extra hop to the target machine. Some setups were laggy; some worked just fine.

  • leokennis 2 years ago

    I work in banking. Everything company-hosted I access via a split tunnel VPN. Everything else goes through the normal internet connection, with a company root CA inserted to sniff HTTPS traffic.

    One of the lucky few I guess :-)

  • markandrewj 2 years ago

    This has been my experience with Citrix also, although I have heard it can be set up to work better. Has anyone had experience with HP Anyware/Teradici? I am curious how it would compare.

  • boppo1 2 years ago

    What kind of banking? Tell me people aren't doing Excel modeling through a laggy pipe.

  • caycep 2 years ago

    Or healthcare.....

  • tmaly 2 years ago

    Reminds me of the days of the dial-up modem.

aleksiy123 2 years ago

I would have agreed until I started working at Google. Also, you should completely avoid having Remote Desktop and instead use ssh + an editor that works with remote files.

At Google we have a custom fork of VSCode running in the browser, and builds can either be distributed or run on my Linux VM to take advantage of the build cache.

I liked it so much I started doing a similar setup for small side projects. Just boot up the Cloud Console on GCP and start coding.

Advantages are:

- Accessible from anywhere (I use my pc, my laptop, etc. The env is always the same)

- More compute (I can attach more CPU + more RAM flexibly)

- Less friction to start (minimum environment setup, most tools are preinstalled)

- Datacenter Network speeds + Artifacts cached (installing dependencies is fast)

Disadvantages:

- network dependence

There are some adjustments that need to be made to your workflow, and for some applications you are dependent on having the correct tooling. However, my personal prediction is that most companies will move to this type of development workflow as the tooling improves.

  • aleksiy123 2 years ago

    There is one more point I would like to add to the "less friction to start." This is the killer feature for education.

    No more do students have to set up their own envs or figure out PYTHONPATH. No more do educators need to debug installation issues on three different OSes.

    Teachers distribute a preconfigured env that they know works for what they are trying to teach. Students go straight into writing code and building things with minimum friction.

    Learning to set up your env can be punted to a problem for later.

    • ctoth 2 years ago

      Learning to set up my environment to compile my first MUD back in 1999 was my first introduction to Autotools, ./configure, make, etc. I was motivated to solve the problem because I wanted to tinker. I'm sure someone will say I'm gatekeeping, but seriously these low-level skills have served me my entire career, and I wonder what happens when every dev environment is just a docker pull away. Who learns how to build new environments?

      That said, I have recently been tinkering with the Flipperzero firmware, and compared to the old Rockbox days when you had to install an arm-elf-gcc toolchain yourself and pray, embedded development has gotten way easier and this doesn't seem like a bad thing. I don't have an answer, just something to think about!

      • JamesSwift 2 years ago

        It's not necessarily gatekeeping; it's just assuming that everyone who comes through is motivated in the same way you were. The reality is that the same experience likely discouraged countless others and prevented them from pursuing development.

        I'm the same as you, but I recognize not everyone is.

      • vlan0 2 years ago

        >Who learns how to build new environments?

        Feels like this is happening already. There is no incentive to learn the fundamental concepts. We need more people interested in the why, and not just a quick buck. Folks interested in the why are the only reason we are here to begin with!

        On the other hand... it's great job security! Fewer and fewer people understand networking every day, it seems!

        • mikebenfield 2 years ago

          The kinds of things people are talking about - fiddling with autotools or PYTHONPATH - are not “fundamental concepts” or “the why” behind anything. It’s just tedious, boring nonsense. There are plenty of intellectually curious people who would be turned off by this stuff.

          • vlan0 2 years ago

            I agree.

        • bckr 2 years ago

          In my experience, "tinkering" with an environment before getting any code working was a recipe for believing "this is too hard, maybe I'm not smart enough to do this".

          Getting a feedback loop early on is so important. Once you're convinced that you can write software, setting up an environment starts to look like a tractable problem.

      • madrox 2 years ago

        Think of it like a product funnel. If the goal is to get people to learn X, making them learn Y as a barrier first is unnecessary. You risk people abandoning the funnel that way. It's still entirely possible to learn these tools some other way.

        • Kamq 2 years ago

          Yes, but if learning X used to require learning Y first, then we might be expecting the value of learning X to be the combined value of learning X + Y.

          It should be noted, when adopting this approach, that you're lowering the expected value from the people who make it through the funnel. This may be counteracted by more people making it through the funnel, or it may not. The delta in total value is completely unknown.

          • conductr 2 years ago

            Those who make it through the funnel with only X are likely more numerous than in the X+Y scenario, simply because there is less to learn and fewer new concepts, so drop-out is reduced. This cohort is probably also more likely to go on to learn Y, if we further assume Y was so valuable as to have been "required". In reality, requiring X+Y as step 1 backfires, because X is all they want, and X may be worth 10Y to the student at step 1. The value of Y may grow beyond X after learning X.

            When I learned basic HTML, CSS and JS. On step 1, JS had essentially no value. CSS sounded like it had value but was a bit nebulous and optional sounding. HTML seemed like an obvious requirement for building a website, so that's where I started. Once I learned HTML, the value of CSS became obvious and I pursued it. JS still seemed esoteric but later once I got HTML/CSS down and wanted to do more JS kept coming up as the solution. So I learned it too. Now, I'd say JS is way more valuable to me than HTML/CSS even though I learned it last.

            This was in the 90s, BTW, and the path wasn't very clear back then. The first book I bought was actually a Perl book, and I quickly realized it was too advanced a place for me to start. I had learned about Perl after noticing the /cgi-bin/ portion of URLs and wondering what it meant - it had to be something most sites were doing similarly. This was before Google, so whatever random SE I used back then told me cgi-bin was associated with Perl scripting for web development. I continued to struggle with the backend stuff (JS was FE-only back then) until PHP3 came around. I'm self-taught, if that wasn't clear.

            • Kamq 2 years ago

              I mean, if you're self-taught, you can take whatever path you want.

              My comment was in the context of students, which is generally backed by public funding. I'm worried about getting less value for the same number of public dollars. That being said, maybe it does still work out as an increase in total value; we should probably do a study on that.

              That being said, I'm self-taught too, so I actually benefit if these classes provide less value, as it's easier to convince business folks to hire me instead; but it's the principle of the matter in this case (also, I'm not really competing against recent college grads anymore).

          • aleksiy123 2 years ago

            I think the problem here is ordering. In the end we probably want them to know both X and Y, but we would rather they learn X first, then Y.

            But to learn X they need to do some steps from Y.

            If we can remove those steps we can make more people learn both.

            But also, in practice, I don't think anyone struggling to set up their env before writing a line of code is learning much. They are just following arcane instructions until something works.

            • imtringued 2 years ago

              Learn how to deploy your app before you write it or learn how to write your app before you deploy it...

              Guess what, if there is nothing to deploy then learning how to deploy is just a waste of time.

      • andrewflnr 2 years ago

        > wonder what happens when every dev environment is just a docker pull away

        They learn later, when/if they actually need it. And for those who turn out to be unwilling or unable, well, there wasn't much chance they'd have taken the same path as you, anyway.

        Speaking of paths, specifically path dependence: autotools and friends really are monstrosities and should rightfully be relegated to history. I hope that simpler, easier environment building really is the future, as you hinted.

        • WWLink 2 years ago

          In my experience, it would become "You're not allowed to deviate from the standard environment" lol.

          • andrewflnr 2 years ago

            If they're so helpless they can't figure out how to even get to an environment other than their corporate standard one, then they have, again, already exited the pipeline before this conversation becomes relevant. Someone incapable of setting up a linux vm was never going to do original ops work, regardless of what the computing ecosystem around them looks like.

      • lmm 2 years ago

        > Autotools, ./configure, make, etc. I was motivated to solve the problem because I wanted to tinker. I'm sure someone will say I'm gatekeeping, but seriously these low-level skills have served me my entire career

        What skills? 95% of using autotools etc. is tedious memorization or copy-paste, not something you learn anything useful from.

      • bombcar 2 years ago

        Meanwhile I'm here completely able to build basically anything from source but for some reason unable to understand docker at all.

    • WWLink 2 years ago

      > No more do students have to set up their own envs.

      That straight-up sounds dystopian, though. Speaking as someone who makes software for Linux machines, I'd really hate to hire someone who doesn't know how to play around with the OS side of things.

      Handing someone a pre-built dev environment and playing god with them sounds like a great way to get stupid programmers who can't architect.

    • TYPE_FASTER 2 years ago

      Check out Replit.com. They’ve added the ability to add dependencies. Having an easy way to hack on some Java with Maven dependencies already installed is pretty great.

      I think this is possible using GitHub Codespaces as well…if you have a Java project with a Maven POM, the deps in the POM will get loaded at runtime.

      I haven’t tried Python yet, but I imagine it’s similar. It’s all containerized, so you effectively have CI/CD; you just don’t see it.

    • hulitu 2 years ago

      > There is one more point I would like to add to the "less friction to start." This is the killer feature for education

      So now the children have 2 entities spying on them: the school and the cloud provider.

      • Huh1337 2 years ago

        Huh? It's not like children aren't using Google and Microsoft services by the truckload already. Every single one has an Android/iOS phone. They use Gmail and YouTube and browse the web. And the schools use it all too - MS Windows, Office, Google Classroom, Drive... Practically nothing changes with a cloud desktop.

        • WWLink 2 years ago

          You mean a fully locked-down device that allows no tinkering at all. Then they graduate to college, where they're again pushed toward a fully locked-down environment with no tinkering at all.

          • Huh1337 2 years ago

            Can't speak about Apple devices but there's more than enough tinkering available on Android.

    • gibspaulding 2 years ago

      Teacher here. This is absolutely correct. I can't imagine teaching CS, especially at a high school level without Replit.

    • pradn 2 years ago

      The flip side of this is that it's hard to do anything beyond the standard library for some languages. PythonAnywhere is great for stdin/stdout programs, but I couldn't figure out a way to import graphics libraries and such. The web is pretty limiting when teaching anything other than JS/HTML/CSS.

    • asdff 2 years ago

      Env management is pretty easy considering tools like conda are out there; that will carry you pretty far, especially in education. You could create a conda env specific to your class and just email the YAML. It's also easy to write a script that installs Miniconda and builds the needed environments - see the sketch below.
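
      A sketch of what that bootstrap script might look like (assuming conda is already on the PATH; "environment.yml" is just the conda convention for the file you'd email out):

          # create the class environment from the YAML the teacher distributed
          import shutil, subprocess, sys

          def ensure_env(env_file: str = "environment.yml") -> None:
              if shutil.which("conda") is None:
                  sys.exit("conda not found - install Miniconda first")
              # "conda env create -f <file>" builds the env described in the YAML
              subprocess.run(["conda", "env", "create", "-f", env_file], check=True)

          ensure_env()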

  • kfajdsl 2 years ago

    This is exactly the alternative that was presented at the end of the post. He was specifically against remote desktops.

    • danans 2 years ago

      Remote desktops can work great if you constrain and structure what you do in them.

      For me, this means not using traditional desktop environments like gnome/kde/windows/macos, but instead a full screen tiled window manager (i3 in my case) with different virtual desktops assigned to different work items.

      Each virtual desktop is split in half between the IDE (vim in my case) and a terminal session for the code under development in that IDE. That's it. No silly weather or chat widgets (all that lives in my local laptop's traditional desktop).

      The result is I never have to search for a dev task related tab or be unsure if the code I'm running is the code I'm editing (which can happen if you are working on many concurrent changes).

      I liken the setup to the keyboard-driven text terminals you still see in some shops, hotels, and airport check-in desks.

      In general, I think smartly crafting your workflow matters a great deal more than the particular tool you might use.

      • yjftsjthsd-h 2 years ago

        > Each virtual desktop is split in half between the IDE (vim in my case) and a terminal session for the code under development in that IDE. That's it. No silly weather or chat widgets (all that lives in my local laptop's traditional desktop).

        I don't disagree, but if your editor is vim and you run/test in a terminal, why not do the whole thing in tmux in SSH?

        • danans 2 years ago

          > you run/test in a terminal

          Some of the software has a GUI, and some of the tools (e.g. unit test runners) display their output in a web app running locally (local to the remote desktop), viewed via a browser.

          If I were doing strictly text-based work, I might use tmux, but to be honest i3 scales down so well to the text-only use case that I probably wouldn't bother, since there is no upside to doing so.

          • aleksiy123 2 years ago

            For tools with web-app output, you can use a proxy to access them in your local browser.

            Something like ngrok would work well.

            That's the advantage of having tools with APIs.

            • danans 2 years ago

              Sure, but it's a lot easier just to open a browser in the remote desktop session. It's not like the FPS of the browser matters much when looking at unit test results.

    • aleksiy123 2 years ago

      Somehow I missed it at the end. However, he mentions the code still being present in the local env, with only the builds happening in the cloud.

      I'm more in the camp of the code living in the cloud, building in the cloud, executing in the cloud.

      Your editor runs locally, providing a view of the code - which requires some specific tooling.

      As always, there's a sliding scale here, and people need to decide what works best for their use case.

      • hulitu 2 years ago

        > I'm more in the camp of the code living in the cloud, building in the cloud, executing in the cloud.

        Until you see how slow it is and you regret it.

        • asdff 2 years ago

          On the 'cloud' (really our own cluster) we use at work, most of the nodes have 16 cores and 128GB of memory; it's a much beefier system than what's available locally.

  • outworlder 2 years ago

    > Also, you should completely avoid having Remote Desktop and instead use ssh + an editor that works with remote files.

    That's kinda stretching the definition of a 'desktop', isn't it? The sort of tasks someone uses Remote Desktop for seldom overlap with what someone uses SSH for. It also doesn't seem to be the point of the article:

    Article: > I'm also going to restrict this discussion to the case of "We run a full graphical environment on the VM, and stream that to the laptop"

    • aleksiy123 2 years ago

      Yes, somehow I missed this. My mistake.

      Though I think it could still work if the applications are built with this in mind - think Google Docs and Sheets as cloud replacements for local Word and Excel.

      SSH and Video aren't the only two protocols to interact with a remote machine.

  • pradn 2 years ago

    There's not really a way around network dependence when coding at Google. You're not allowed to have code copied to a laptop unless you get special permission. Only teams that work on client code, like desktop apps, are likely to be given access. So to even look at code, you have to be connected to the network (via Chrome or a FUSE-based VFS).

    If you're using a desktop, there's not really a case for going offline, so being dependent on the network is ok.

    I get nearly all my work done off a laptop, but I do find its weak CPU/memory a mismatch for my heavy use of Chrome.

  • gimme_treefiddy 2 years ago

    Damn, when you said Google I thought you were going to talk about Cloudtop, etc. +1 to your recommendation, but they do a pretty good job with Cloudtop too (for non-power users it is pretty usable).

    Also check out https://www.mightyapp.com/

    • aleksiy123 2 years ago

      Yeah, to be clear, for Google work I am talking about the combination of Cloudtop (VM), Cider (IDE), and Blaze/Bazel (builds).

      In addition you also need a version control / file sync system.

      It's also nice to have some kind of network proxy, especially if you are doing web dev. Tools or web services run on the VM, and you access them directly through the proxy in your local browser.

      The integration/combination of these is what allows things to work.

      For personal code, this is Google Cloud Console. You can actually just jump into it; it has a built-in VS Code editor.

      But at home it would be GCP VM + VS Code + Git.

      GCP also has a built-in proxy. The only problem I have had so far is that it doesn't rewrite URLs, which can be an issue for web apps. I think it's solvable; I just haven't really tried yet.

      There are some other solutions in the other comments as well.

      • cbHXBY1D 2 years ago

        You also should mention the use of CitC. With CitC, I can build/write code from my work machine at the office and then go home and gmosh into a cloudtop that uses the same network mounted filesystem.

        • mattnewton 2 years ago

      I thought network filesystems were a terrible idea until I used CitC + Piper - really two incredible pieces of engineering infra. So many problems reduce to just writing files to disk if you have a magic disk that acts like it is infinitely sized, everywhere all at once, low-latency, and versioned by the second. Whatever promotions they offered those authors and maintainers, and whatever black magic they had to invoke, it really was worth it.

        • aleksiy123 2 years ago

          Yep, I sorta glossed over it as "file sync", but I guess CitC is more than that. It's more like workspace sync.

          It acts like a view of the monorepo and holds whatever changes you make. It also integrates with your version control and holds its state as well - for example, any local commits or branches.

          And this can all be accessed from the browser or the CLI on any connected machine.

  • ilaksh 2 years ago

    That's not a cloud desktop. It's a cool setup, but you framed your comment as a disagreement with the post when you're actually agreeing.

  • asdff 2 years ago

    There are many ways to approach this sort of thing. I work similarly: all code runs on a powerful Slurm-managed cluster that I can access from anything with ssh, but I just use tools like tmux and CLI editors that are already installed on the server rather than a GUI-based editor. We have an Lmod system with prebuilt packages of a lot of the software we typically use, in various versions, and we also use environment managers such as conda/mamba and workflow-control tools, which are straightforward to use. Seems simple enough, IMO.

  • novok 2 years ago

    IMO you don't really need a full VNC-style remote desktop, though, or even an editor running in the browser. You can get equivalent results with bazel[0] remote execution + cache servers and a similar horizontally scaling build system, without VNC-style jank or full network dependence for your actions as a developer.

    Another reason why Google likes the remote dev experience is that it doesn't download code to developers' laptops, because they don't trust them.

    [0] yes i know bazel / blaze is made by google

  • jdelman 2 years ago

    Out of curiosity, how much are you paying a month using Cloud Console that way?

    • aleksiy123 2 years ago

      It's free tier. But storage is limited to 5GB.

      I'm pretty sure you can pay for more storage but I just haven't hit the limit yet.

    • paxys 2 years ago

      Why would they be paying for it out of pocket?

      • jdelman 2 years ago

        They said they were using it for side projects. Even if Google is subsidizing the bill, I'm interested to know how much something like this would cost.

        • aleksiy123 2 years ago

          Google does not provide any discounts for employees' personal projects.

          Cloud Shell is free up to 5GB additional storage is $0.02/GB a month.

          Alternatively someone linked https://www.gitpod.io/ which also has a free tier.

      • petercooper 2 years ago

        I don't know about Google, but as far as I heard, AWS folks pay for their own AWS resources for side projects, at least.

        • rfoo 2 years ago

          Google too.

          Microsoft does give cloud credits per month for employees tho.

  • vbezhenar 2 years ago

    What languages does that setup support at Google? JS/TS, I guess, obviously.

    How do I replicate it self-hosted? What keywords should I google?

    • aleksiy123 2 years ago

      All Google-supported languages: C++, Golang, TS, and probably some other ones I'm missing.

      Someone else mentioned https://www.gitpod.io/ and they actually have a self host option.

  • flerchin 2 years ago

    It's not that it can't be done well, it's that it's not likely to be done well, especially at a large org.

  • throwaway413 2 years ago

    Assuming ssh and not scp, replacing traditional ssh with mosh in this setup could provide some interesting benefits w.r.t. network dependency. If the connection were less brittle, and directories could be opened, cached, and rewritten later, after the connection was disrupted and re-established… that'd be awesome.

  • systemvoltage 2 years ago

    > Accessible from anywhere (I use my pc, my laptop, etc. The env is always the same)

    This is a feature in search of a problem for me. I never wake up in the morning and go, “Gee, if only I could code on random computers.”

    It sounds super nice but then I think about it a little more and it’s just fluff.

    • vineyardmike 2 years ago

      It’s probably the minority, but I think there are people who it’s helpful for.

      For example, I travel with an iPad Pro and a work laptop. If I want to tinker one evening, I can use my iPad instead of bringing a personal laptop. (This also applies to cloud gaming for some people, but I haven’t done that personally).

      My partner is also a software engineer, and we have a toy server at home with various things running on it (e.g. game server, Home Assistant, VPN, etc.). We have a VSCode instance running so either of us can grab a browser and update the configs, without dealing with staying in sync. (IMO this is the “most obvious” use case: modifying remote files without worrying about sync.)

      At work, I also have this setup. (My school had something similar too, just less polished because it was a while ago.) The benefits there are big too. Besides everything mentioned before, it also means that there's basically zero setup time. If you break your laptop, or forget it, or whatever, IT just hands you a temporary Chromebook and you log into your work from a browser.

      By comparison, my old job pushed a bad MacOS update that bricked my work computer, and they made me remote into a windows VM (AWS workspace) from my personal laptop to do work until a replacement arrived. I lost all my work/files/etc since it bricked unexpectedly, and that job had remote Linux VMs so I had two levels of indirection. Then I had to set everything up again, so I easily lost 3+ weeks of work due to that incident.

    • arcbyte 2 years ago

      I have a website with a worker process doing RSS parsing that occasionally fails. It would be quite nice to be able to spend 10 minutes fixing trivial bugs from my phone while I'm out and about. Or from my iPad. I'm not doing feature development but this would be nice to have for things that are so easy I could do them now but instead must wait hours or days until I'm back in front of my dev machine.

      And actually I have a desktop and a laptop that I do dev work on. More than once I've started a branch on the desktop one night but didn't quite get far enough to push it up, and the next day I took my laptop to the coffee shop and realized the code was still at home.

      • aleksiy123 2 years ago

        I think this is it. As soon as you go from working on one computer to 2 you do have this problem.

        I myself work on 2 laptops and a desktop all running different OS.

      • systemvoltage 2 years ago

        I mean there are always going to be niche uses for this.

        My point is about corporations adopting this sort of thing because executives got sold on a fancy feature, "accessible from anywhere", that no one really uses: 99.99% of the time people are not coding from anywhere, but they have to suffer the latency the entire time.

        It sounds so damn good on paper vs. reality.

        • vineyardmike 2 years ago

          > executives got sold on a fancy feature, "accessible from anywhere", that no one really uses: 99.99% of the time people are not coding from anywhere, but they have to suffer the latency the entire time.

          This isn’t the pitch. Not at all. The pitch is “no code (or IP) on local machines that can be stolen” and “no downtime if a laptop breaks… the IT desk can keep a stack of Chromebooks ready for backup”. Combined with something like Gmail and Google Docs, the laptop at some employee's WFH house contains no business secrets, ever.

          I’ve never experienced the slightest drag of latency with this approach. If you’re running a compiled language, the compiler is surely the bottleneck. If you’re doing it for work, they’ll probably set it up so it’s always regionalized close to you from a cloud. Maybe fly.io should pitch this.

        • arcbyte 2 years ago

          As a director in a previous job a few years ago, I almost introduced Eclipse Orion to our organization strictly to reduce issues with onboarding and junior devs.

          I love when senior devs can set up their workspace how they like, but juniors and new hires often need lots of handholding. Being able to spin up an IDE with exactly what they need with zero effort is incredibly valuable. We lost days and days of productivity because some developers didn't understand how to manage having both a JRE and a JDK on their machine.

    • paxys 2 years ago

      This is a very useful feature the moment you have > 1 development machine. Complex build environments, dependencies, personal preferences all syncing seamlessly between all of them is a godsend.

      • systemvoltage 2 years ago

        I already do that in PyCharm. Everything else needs to be set up once. But you're actually arguing about multiple machines, and I'm arguing about "code from anywhere", even on computers that don't belong to you. That is an overblown feature that I'd never use.

        Latency is my top priority. It shall not be sacrificed to smooth over any multi-machine inconveniences.

        • vineyardmike 2 years ago

          > Latency is top priority for me.

          VSCode is super popular and performant, and it natively runs in a browser engine (Electron). Running it in Chrome remotely is literally no different if you have a performant network.

          I always found VSCode to perform better than IDEA-based tools, FWIW, especially if you want to keep a laptop on battery. Latency has never been an issue.

        • mox1 2 years ago

          What happens when your machine breaks / stops working / gets stolen?

          Is your pycharm config and setup backed up properly? Is your pycharm config versioned?

          How do you manage credentials / secrets?

          How often do you update pycharm? Does this updating require any refactoring?

          How do you build (locally? remotely?)

  • ranger_danger 2 years ago

    Why not just use sshfs and edit locally with whatever you want?
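
    For example, a minimal sshfs workflow (host and paths are made up); with the reconnect options the mount tolerates flaky links reasonably well:

        mkdir -p ~/mnt/devbox
        sshfs me@devbox:/home/me/src ~/mnt/devbox -o reconnect,ServerAliveInterval=15
        $EDITOR ~/mnt/devbox/project/main.c   # any local editor works
        fusermount -u ~/mnt/devbox            # unmount when done (umount ~/mnt/devbox on macOS)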

  • samirsd 2 years ago

    Very cool! Do you have a good guide you could share for setting that up?

    • mox1 2 years ago

      If you don't want to set it all up yourself, Gitpod basically has this up and running, with a pretty generous free tier. Think VSCode in the browser, with a Docker container (controlled by you!) and a bash prompt at the bottom.

      https://gitpod.io/

simfoo 2 years ago

Nothing beats working directly on a fast but quiet workstation sitting next to my table.

At least for me, the productivity gains are huge: quicker builds, faster IDE resyncs (CLion, looking at you), and being able to have email, chat, calendar, and an active video conference running without the system crawling to a halt or hitting long latency spikes. 3-4k for a machine that will likely last 2-3 years is nothing in comparison.

  • GiorgioG 2 years ago

    For the life of me I don’t understand why folks default to laptops for development. Yes, portability is great, but most of us park our behinds at the same desk every day. If I’m going to be out of the office (away from home), I take a laptop and remote into the desktop! Even M1 Macs (I have one and love it), while powerful, just can’t hold a candle to a workstation-class machine.

    • senko 2 years ago

      I'd love to do that, but my laptop's and workstation's state inevitably get out of sync, leading to "wait, why doesn't this work ... spend a couple of minutes ... ah yes, I did X on the other device".

      (Before someone suggests "use docker": then I'd need a more powerful workstation and laptop :-)

      And VNCing into my workstation from the laptop has all the drawbacks that Matthew described in the article.

      • dugmartin 2 years ago

        If you do your work in VSCode you can set up the following pretty easily, and it works really well (I use it to connect to the workstation in my home office when I work from a coffee shop on my laptop, or on the rare occasions I go into the office):

        1. Install Tailscale (https://tailscale.com/) on your laptop and workstation and enable the "MagicDNS" (https://tailscale.com/kb/1081/magicdns/) feature. This sets up a VPN mesh between your machines (and any other you add).

        2. Set up your SSH keys so you can log in via the "MagicDNS" domain name from your laptop to your workstation.

        3. Install the VSCode Remote Development extension (https://code.visualstudio.com/docs/remote/remote-overview) on your laptop and then open a workspace on your workstation via its SSH feature using the "MagicDNS" domain.

        It is surprisingly fast - you are just sending keystrokes/commands over the VPN and not rasterizing the host screen like you would with VNC.
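
        Concretely, steps 2 and 3 end up looking something like this; the machine and user names are examples, and MagicDNS hostnames depend on your tailnet:

            # ~/.ssh/config on the laptop
            Host workstation
                HostName workstation.tailnet-name.ts.net   # MagicDNS name
                User me

            ssh workstation                                # sanity check
            code --remote ssh-remote+workstation /home/me/src/myproject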

        • IanCal 2 years ago

          I have recently started doing this and it's excellent. I can just wander away from my desktop, take my laptop and go work somewhere else in the house, and I used something very similar going away abroad.

        • senko 2 years ago

          Yeah, already Tailscale / WireGuard user and SSHing all around.

          Didn't know VSCode had a headless mode that can be driven over the net (which is what your description sounds like), will definitely check it out, thanks.

        • digdugdirk 2 years ago

          I'm a little confused as to what benefit the "MagicDNS"/Tailscale aspect is adding?

          Is this faster than a traditional VPN? Is the VSCode remote development extension not able to function with a traditional VPN?

          • dugmartin 2 years ago

            It just makes SSHing between the machines by name very easy. You could do the same thing by assigning permanent IPs to all machines in the mesh and then updating the hosts files across all of them. Life is too short for that.

            • Yeroc 2 years ago

              Bizarre. Most VPNs used in a workplace would take care of name-based host resolution (ie DNS) for you. This is not a new thing unique to Tailscale.

              • senko 2 years ago

                Having set up OpenVPN a few times and troubleshot Cisco VPN, I'd say the new thing unique to Tailscale is that it just works.

                It takes a few seconds to connect each new machine. It took me way more just to find out how I should configure Cisco's VPN client, and I do not ever want to even think about OpenVPN again.

                I've also maintained a WireGuard mesh where I distributed keys and set up /etc/hosts via ansible: add a host to the inventory file, run the playbook - simple.

                Yet Tailscale is even simpler than that. And (for my purposes at least) it's free.

          • itintheory 2 years ago

            Not OP, but I think the purpose of tailscale w/ magicdns is to create a VPN connection directly between the laptop and desktop, regardless of the underlying network locations of either. I believe tailscale uses connection brokering so all connections can be outbound (no firewall policy / port forwarding). MagicDNS is probably just a quality of life improvement here.

          • corobo 2 years ago

            Tailscale saves time here. It does things like busting through NATs for you to get the VPN established, which is useful when on varying networks with the laptop, but yeah, it is a WireGuard VPN after that.

            (Not op, just butting in)

        • isignal 2 years ago

          If your work already uses tailscale, you can use enclave to achieve the same thing. Enclave and tailscale seem to coexist fine.

      • itintheory 2 years ago

        > (Before someone suggest "use docker": then I'd need a more powerful workstation and laptop :-)

        Why do you believe that to be the case? Docker performance overhead is so minimal I highly doubt you'd be able to tell any difference compared to native processes.

        • senko 2 years ago

          I currently work on a project that involves 28 docker containers (edit: on Linux, so no extra VM overhead like on a Mac), and I definitely can tell the difference compared to native processes.

        • thfuran 2 years ago

          Isn't it a full vm on Macs?

      • prmoustache 2 years ago

        My main professional OS is a Linux install that boots from an NVMe drive in a USB 3 enclosure. The work laptop I've been given only has a 1366x768 screen and 16 GB of RAM. I don't mind too much when using it as a desktop, because I have two full-HD 24" screens, but if I plan to be more mobile and work on one screen, or if I need lots of memory to boot containers and VMs, I boot the drive on my personal Lenovo. I also sometimes boot it on a desktop in my office while letting the work laptop's original Windows installation update itself so it isn't forgotten.

        I use adhesive Velcro so the drive is secured to the back of the screen and doesn't hang from the laptop.

      • asdff 2 years ago

        You could use ssh to connect to the workstation and do all your work there, preserving state.

    • BozeWolf 2 years ago

      I bring my laptop with me to other teams for questions. I also bring my laptop for presentations, demos, and sometimes for refinements. I work from home 2 out of 4 days.

      And I am no exception. The whole company I work for does this. I cannot imagine working with a workstation.

      A big part of my team has crappy laptops and works just fine with a Citrix client. I am a developer and do not understand how they deal with it, but the BAs are OK with it!

    • methyl 2 years ago

      > Even M1 macs (I have one and love it), while powerful, just can’t hold a candle to a workstation class machine

      Not really true. My M1 Pro is performance-wise very close to my previous Ryzen 5900X desktop, but: 1. it doesn't take up space on or below my desk; 2. the auxiliary screen it provides is useful; 3. I can unplug it at any time and continue working from anywhere with the same performance, without having to sync up the development environment.

      Before the M1 Macs I would concur, but right now the major reason to pick a desktop is Linux availability (which is subject to change with Asahi), not performance.

      • jackcviers3 2 years ago

        My System76 laptop with Nvidia graphics is faster than the M1 Pro I have for work, has more cores, and its 64 GB of RAM means I can run all my communications stuff, an IDE, a compiler, and a local k3s, and the machine won't break a sweat. However, the battery will drain in an hour and a half doing all that. Performance is definitely not an issue on non-ARM machines. Battery life is.

      • vetinari 2 years ago

        I have an M1 (the original) MBP and a Ryzen TR 2920X desktop (with oodles of RAM, multiple NVMe drives, and 10 Gbps networking). The Mac, while significantly better than any Intel laptop I had before, still cannot hold a candle to the desktop, sorry.

        The desktop is much more power hungry, though.

        • TheTon 2 years ago

          It depends on your workload and codebase. I have a Ryzen 5600X in my desktop and for C++ work, my M1 Pro is quite a bit faster for clean or incremental builds. The desktop is still useful/required for some of my work (using x86_64 windows with an nvidia gpu) but I default to the Mac for anything that could be done in either place. It also helps that I prefer the Mac tools so it’s not just about the CPU speed.

          That said, I’d rather find a new job than trade either system for a cloud desktop. I count myself fortunate that I’ve always been in a position to choose my computer and tools.

    • indymike 2 years ago

      > I don’t understand why folks default to laptops for development.

      I think there's a lot of cost/benefit that comes down to: depends on what you are building. I had lunch with a VR dev last week. He needed a big machine for huge MS builds. I do a lot of web and network programming, and a $1200 LG Gram (i7/32GB, 17" screen) is way more than adequate. The important thing is that employers understand that slow computers cost them a lot of money when they hobble developers with them.

      • vsareto 2 years ago

        >The important thing is that employers understand that slow computers cost them a lot of money when they hobble developers with them.

        Sadly, that's not going to fly these days with those same employers thinking people slack off more often remotely vs. in the office.

        • gjm11 2 years ago

          If people are slacking off more when working remotely, then measures that make doing the job less frustrating seem likely to have outsized positive effects, by reducing that slacking-off.

          (Maybe I'm assuming I'm more typical than I really am. I know that when the work I'm supposed to be doing is frustrating and annoying I feel much more temptation to do other things instead.)

          • vsareto 2 years ago

            I'm saying they're not going to see the logic of saving time with better equipment when they're complaining about people slacking off remotely.

      • raxxorraxor 2 years ago

        True, employment costs outstrip hardware costs very quickly. If your employee spends 10 minutes per day waiting on tasks because the system is too slow, that's roughly 40 hours a year; at a fully loaded cost of, say, $60/hour, that's about $2,400, so you could instead buy a pretty decent rig every year.

        • ThunderSizzle 2 years ago

          How do you truly report that in a corporation? I haven't seen a way to disclose that a good amount of my time on a project is spent waiting on the crappy system they've constructed.

          It's almost like they just absorb the costs or pretend they don't exist.

          • indymike 2 years ago

            Don't make the problem be about people or process, just show the exact problem and how you could get more done if you had a way to continue working during a build. Put the problem on trial, and not people (don't call anyone in IT stupid, don't blame anyone for sucky processes, and do not under any circumstance indict the choice of tooling).

            Take your slow laptop to lunch with your manager. Explain that you are starting a task you have to do three or four times a day that prevents you from working because your computer is maxed out compiling. At the end of lunch, let the manager know when the build stopped, and then discuss getting a faster or a second machine so you can work while building.

    • temac 2 years ago

      > Even M1 macs (I have one and love it), while powerful, just can’t hold a candle to a workstation class machine.

      My M1 Pro is faster in some workloads than a small Dell tower sold as a "workstation". Of course I could buy a huge workstation with a 250W CPU or some kind of insanity like that, but then I suspect its power efficiency will be 4 times worse than the M1 Pro. The Dell tower already makes quite a good amount of noise under load while being beaten by a mostly silent M1 Pro.

    • afavour 2 years ago

      > Yes portability is great, but most of us park our behinds at the same desk everyday.

      Got to take exception to that. I'm a developer and I'm still required to get up from my desk to attend meetings etc., and I need my laptop in them. Or for pair programming. It is usually a different kind of work, so there's probably a world where I could have a desktop and a laptop, but inevitably I'd end up needing to do something the iPad can't do and get frustrated.

    • romeoblade 2 years ago

      I have a work laptop, but do 99% of my work on an identical VM that sits on my homelab Proxmox cluster. Working this way allows me to work from any device, even my phone or iPad, from anywhere. It's encrypted, has all the standard security tools required for work, etc. Our VPN suite checks for all of that on connection. I have the added benefit of being able to provision it with a massive amount of resources that it'll only use when needed, thanks to ballooning, plus quick backups and rollbacks thanks to LVM thin provisioning.

      I do have everything sync back to the work laptop, so in the rare case I lose internet or have a hardware issue with the cluster, I can continue working. But that's only happened once in the last two years, when a competing fiber provider cut a fiber line on my property laying their own fiber. (Not their fault; my current provider had the markings off by 50', and even then the foreman gave me a gift certificate for the trouble.)

    • scarface74 2 years ago

      Even in the bad old days when I did work in an office, we worked some days in the office and some at home.

      Even now that I work remotely, I still go home to see my parents for a week at a time and work from there. I definitely wouldn’t want to be dependent on the internet.

      Not to mention in less than a month, my wife and I will be doing the digital nomad thing working while flying across the country for a few years.

      My set up includes a portable USB C powered external monitor as a second display and my iPad as a third display. Of course I have a Roost laptop stand.

      If I need to spin up resources, I use my own (company provisioned) dev AWS account and it’s just there.

      Even the last 60 person startup I worked at would let us set up dev AWS accounts with the appropriate guardrails for development.

      We had CloudFormation templates to spin up environments as needed and we could just tear them down.

    • Test0129 2 years ago

      You nailed it. Portability. Also if you're working professionally it's far easier to collect your property as a company when you don't have to pay oversized shipping costs for a desktop.

      Though rarely is a laptop in clamshell mode as good as a desktop. For certain things, I don't think it ever will be: for example, a laptop just isn't sufficient for graphics work or a lot of scientific work.

    • terinjokes 2 years ago

      I've asked corporate IT for a workstation, because I'm nearly always working from the same spot, and I'd be okay with a Chromebook in the rare situations where I'm working remotely.

      The cost of a beefy (but properly cooled) workstation + cheap Chromebook isn't much different than a corporate laptop. It's just not an option being considered anymore.

    • paxys 2 years ago

      "Most of us" absolutely don't do that. My company has 3000+ people, and I can say with certainty that every single person works away from their desk at some point in the day. I would quit my job in an instant if I had to be tied to a desktop at a particular spot all my life.

    • alkonaut 2 years ago

      Because the business only provides you with one machine, so if you need a portable one even one day in 100, it has to be a laptop.

      Buying and maintaining two devices per developer is considered too costly, regardless of whether the pair (a cheap laptop and a decent desktop) is cheaper than one expensive laptop.

    • moduspol 2 years ago

      What development are you doing where there's a notable difference between an M1 Mac and a "workstation class machine?"

      Is it just running a bunch of VMs? I don't doubt the tasks exist but it's got to be a list that's been dwindling for the last few years.

      • glandium 2 years ago

        Not GP, but:

        Building Firefox on an M1 MacBook Air or Pro: half an hour. The fastest M1 Max is around 15 minutes.

        Building Firefox on my Threadripper workstation: 5 minutes.

        • simfoo 2 years ago

          This is also my experience. Large Rust/C/C++ codebases will easily compile 3-4x as quickly on a fast workstation as on a top-end laptop. I blame thermal design and power limits.

      • ohgodplsno 2 years ago

        Android development, a clean build of our project on an M1 Pro is 15 minutes, a clean build from our build server (which is ultimately just a thick 11700K or something along those lines, so still relatively old) is 3 minutes.

        Thermal throttling is a bitch.

      • wongarsu 2 years ago

        I think anyone who wants a large amount of RAM or who works on projects in compiled languages will easily notice the difference.

      • patrick451 2 years ago

        At my job, our main software is a multi-million line c++ codebase. It takes most devs 45 minutes to an hour to compile without a fresh ccache, and this is a workstation with 8 physical cores. On my laptop, a fresh compile takes over 2 hours. This can be brought down to under 10 minutes with enough cores. Partly due to poor internal dependency management, it's pretty common that every git pull or rebase requires recompiling ~1/3 of the codebase and waiting multiple minutes to compile in an edit/compile/test cycle is common.

    • prepend 2 years ago

      It’s sort of the same reason people drive big trucks into work every day.

      5% of the time I need to travel with my laptop. I’d rather not maintain two machines.

      My daily dev is an 8cpu MacBook Pro. It’s not as fast as a proper workstation but I can take it anywhere with about 5 seconds prep time.

    • zmmmmm 2 years ago

      I guess this is why I ultimately went for a fully spec'd Macbook Pro. It's the price of a car but the value of having workstation class performance anywhere I go makes it easily worth it.

    • brundolf 2 years ago

      It depends what you're doing. For a normal web dev workflow, I have yet to see my M1 MBP be anything but flawlessly responsive. I'm sure there are other workloads where it's different

    • duxup 2 years ago

      Are most development environments high performance / high computer utilization environments?

      I was under the impression they were not.

      I couldn't tell you if I was on a laptop or desktop when I'm docked ...

    • make3 2 years ago

      wtf do you do with your CODING workstation that a M1 computer can't do?

      • lupire 2 years ago

        Compile large code base.

  • scarface74 2 years ago

    What type of laptop are you used to using where that would make your computer crawl? I have none of those issues with my current M1 MacBook Pro 16-inch.

    I have never heard the fan on my current MacBook. Now my older x86 one is a different story.

    • hbn 2 years ago

      I have the 2019 16-inch i9 MBP for work, and even that has served me pretty well for almost 2.5 years. I'm fairly conscious about what I have running at any given time, I force-quit out of apps that I only open occasionally to free up resources. Sometimes the fans will get going quick if I'm doing a lot (running Java services, in a Teams call, etc - on top of whatever the hell processes are being used by jamf, VPN, and zscaler) but I can't recall it ever "slowing to a crawl." It mainly just gets hot until I'm done with one of the big tasks the laptop is currently doing.

      • scarface74 2 years ago

        So the issue is the corporate mandated malware. I usually lay the blame of performance issues on corporate malware for any modern Mac or Windows PC.

        But all video conferencing software sucks. I have to use them all on occasion depending on the client and usually the only one I actually keep installed instead of using the web version is Chime (yeah I know how do you say where you work without saying where you work).

        Oh, I just noticed you said you had an x86 Mac. Yeah, they all suck when it comes to fan noise and throttling.

  • asdff 2 years ago

    Having access to a cluster certainly beats having a local workstation imo. Why settle for 1 node when you can have many?

gw99 2 years ago

Yep. Having worked in these environments, this solution is almost always sold to companies that are working around shitty, hard-to-reproduce software stacks, staff trust issues, scale-up difficulties, and checkbox security cargo cults. The usual outcome is increased staff turnover, increased cost, and decreased productivity, most of which they still have trouble rationalising or acknowledging.

You don't want to work for those companies.

It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable. The reality is usually some horrible AWS, Azure or Citrix portalised solution however.

  • clarge1120 2 years ago

    It’s a miserable experience from top to bottom. Onboarding a new developer takes much longer and is far more tedious than one might expect. There are multiple layers of security employees must navigate. And when something breaks, anywhere, it’s a huge pain to sort out the source of the problem, find the right person responsible, and get something fixed.

    If you find yourself in an organization that thinks this remote desktop environment is a great idea, do yourself a favor, if you can, and leave. You’ll give other devs more incentive to push back and make this a thing of the past, like “thin clients”.

    • lupire 2 years ago

      Huh? If your corporate resources are synced to your laptop, you have no security.

  • spaniard89277 2 years ago

    RDP works fine for Windows, honestly, but on Linux the only decent solution is NoMachine, and sometimes not even that.

    Anyway, in my company they decided to hand us out a company laptop and connect through VPN to the corporate network, with shared drives, and it's the best solution IMO.

    • gw99 2 years ago

      I worked over RDP for a couple of years. It's not terrible but it's not too good either. You pretty much have to have a wired Internet connection and there are still problems with Alt+Tab and high DPI displays.

      That's a reasonable compromise from your org. Good on them. I was suffering with corporate OneDrive. Fortunately everything I do ends up in git anyway so I just turned it off and don't use it.

      • ale42 2 years ago

        I work with RDP almost daily when working from home, and I have to say that most of the time I almost can't tell whether I'm on the remote machine (I work full screen) or the local one, unless I'm playing a video or using a graphically heavy application. But it is very true that a good wired connection is needed (at least 100 Mbps, with low latency, ~15 ms). I tried this on an LTE connection (a few Mbps and quite some latency, >150 ms), and it's a pain.

      • pbronez 2 years ago

        OneDrive and SharePoint have been mostly ok for me… but there are real limits. They’re fine for a large number of medium sized files that you collaborate on with a handful of people. But then when you have a 100MB PowerPoint with a dozen contributors it falls over and can’t get up. I’m so annoyed they killed Slide Library…

    • rcarmo 2 years ago

      See my other comment regarding xorgxrdp.

  • vineyardmike 2 years ago

    > issues, scale up difficulties and checkbox security cargo cults.

    > You don't want to work for those companies

    I can’t defend the likes of Citrix but I’ve been the guy who has to tell an intern on their last day to hand over the flash drive with code we know they copied over the day before. Sometimes avoiding those issues is easier.

    Also weird tech stacks are a real issue (but there are lots of developer-native tools for the job).

    > It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable. The reality is usually some horrible AWS, Azure or Citrix portalised solution however.

    100% agree. VSCode and VM is my only accepted solution now.

  • raxxorraxor 2 years ago

    Absolutely the same experience here. The pay is often nice because they also have difficulty attracting developers. Absolutely not worth it, in my opinion.

  • mr_toad 2 years ago

    > It's notably different if you have a cloud VM running linux and you're connecting to it with VScode or something over SSH. That's borderline acceptable.

    Layer 4 beats layer 3 in my experience.

    I also find that remote workspaces have advantages that offset the latency and performance issues.

    Being able to quickly spin up or clone new workspaces and isolate software dependencies is a huge advantage. It can help a lot when dealing with multiple Python environments or JavaScript dependency trees.

cardanome 2 years ago

I don't get the use case. Why would you even consider using a cloud desktop?

Even a very low-spec laptop is going to run a simple graphical desktop environment like Xfce just fine. Watching a youtube video, browsing the web and even video conferencing can be handled with any new-ish laptop.

And in reality, you still want a reliable laptop with decent keyboard, long battery life, good display and so on. So you won't end up on a low spec machine to begin with.

For computation heavy dev stuff a simple SSH access is good enough. It can be a very smooth experience with a locally running VS Code or something.

  • nefix 2 years ago

    Disclaimer: I'm part of the IsardVDI project (https://gitlab.com/isard/isardvdi)

    In my opinion, developing is not a really good use case. Some of our team develops using VSCode + SSH against a remote VM.

    One of the best use cases we've found is education and, specifically, trade schools. There are some trade school courses that require really specific software (image and sound, designing electronics, interacting with proprietary robots, etc.), and it's a painful experience to manage all of that, add new programs, etc. (some trade schools have 60+ courses, each one having different subjects and different software through the year!) By having cloud desktops, the teacher can create a template with their requirements and share that template with the students, and if the requirements change, it's as simple as modifying the template and sharing it again.

    Also, most of the public schools here are underfunded, so they end up with really old machines, and the cost of renewing a whole classroom gets really high: say a new machine costs between 600€ and 1000€ (depending on the trade course requirements). At 30 machines per classroom, that's around 24k€, and then there are lots of classrooms; you get the idea.

    By having "cloud" desktops, there's no need for renewing old hardware, since you can have something like xfce + the viewer, and all the systems can easily manage that load (we even have classrooms with RPIs), and this can be a huge money saving

    In the end, cloud desktops aren't the best option for all the use cases, as the author puts it:

    > Overall, the most important thing to take into account here is that your users almost certainly have more use cases than you expect, and this sort of change is going to have direct impact on the workflow of every single one of your users. Make sure you know how much that's going to be, and take that into consideration when suggesting it'll save you money.

  • JumpCrisscross 2 years ago

    > Why would you even consider using a cloud desktop?

      I've travelled to and through countries (e.g. in the Gulf, France, India) with my work laptop where I was deeply uncomfortable having that data on hand. Taking a clean machine and remoting into the real one when needed removes a lot of paranoia points.

    • jhickok 2 years ago

      I don't think you need a remote desktop in order to keep the data off the machine.

      • NoGravitas 2 years ago

        Yeah, I'd much rather deal with a local desktop and a remote filesystem, for sure.

        • gcatalfamo 2 years ago

          I'd like that too, but can you give me a working example that is not based around using a VPN, or editing documents on the browser?

          (Both are viable, but each comes with its own set of drawbacks.)

          • NoGravitas 2 years ago

            A remote filesystem over a VPN ought to be reasonable. I can't think of another reasonable way to do it.

            • jhickok 2 years ago

              Technically any files-over-https might work? I'm thinking Sharepoint, etc.

    • cardanome 2 years ago

      Why not just encrypt your hard disk?

      Please don't say something like you don't trust encryption. There are known cases where even state actors could not crack encrypted devices. Not to mention that the remote communication you'd have instead would be easier to monitor and possibly decrypt anyway.

      Sure, in theory you would need a kill switch in case special forces come through your window while you are working on your laptop and force you to remove your hands from it, but I doubt you live such an interesting life for that to be a realistic threat model.
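
      (For reference, checking or enabling full-disk encryption is only a couple of commands on most systems; the Linux device name below is an example, and luksFormat erases whatever is on the device:)

          fdesetup status                      # macOS: is FileVault on?
          sudo fdesetup enable                 # macOS: turn it on
          sudo cryptsetup luksFormat /dev/sdX  # Linux: encrypt a fresh volume
          sudo cryptsetup open /dev/sdX data && sudo mkfs.ext4 /dev/mapper/data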

      • petercooper 2 years ago

        > Why not just encrypt your hard disk?

        While this is a good idea, note that in some countries it is an offence to not hand over keys or passwords when requested (or can rapidly become one - like in the UK) so not carrying data with you in the first place can defend against that.

        • cardanome 2 years ago

          What is stopping them from forcing you to give access to your cloud providers, though?

          I think there are solutions for making hidden partitions. You would have to create a clean, plausible system to show potential attackers.

          I still feel that cloud providers are a bigger attack surface than encrypted local data. To get your cloud data, an attacker would just need to be able to compel you to give up the password. With local data, they also need physical access to it. You could, for example, decide not to take your laptop to a potentially dangerous meeting and store it somewhere safe.

          Plus, cloud providers have way more attack surface area. They get regularly hacked. Some state actors already have back doors or can otherwise compel the provider to hand over your data.

          The more I think about it, the more I think storing sensitive data in the cloud is not a good idea for privacy or security.

          • petercooper 2 years ago

            > What is stopping them from forcing you to give access to your cloud providers, though?

            Here's my thinking: If you're travelling to a country with nosy officials and you needed access to a lot of sensitive data, if it were on your regular (but encrypted) hard drive then it would be more visible if they asked to see the machine. With that data online, it could be in a system you only access by a URL you remember which they can't see. You can show them a normal desktop.

            > I still feel that cloud providers are a bigger attack surface than encrypted local data.

            If you are actively being targeted, I agree. I was thinking more of the "curious official" folks seem to run into when travelling. Since the mere possession of certain plain-text documents is a criminal offence in my country, this has the potential to catch people unawares.

            > I think there are solutions for making hidden partitions. You would have to create a clean, plausible system to show potential attackers.

            This is a good tradeoff and would probably be fine unless they're really out for you - a whole other ballgame.

          • lmm 2 years ago

            > What is stopping them from forcing you to give access to your cloud providers, though?

            The fact that a) the cloud provider is in a different jurisdiction b) many countries have very broad "anti-hacking" laws that they'd be breaking. It's not by any means a "naturally safe" way of working, but under the current hodgepodge of laws it has some benefits.

      • dodobirdlord 2 years ago

        Many jurisdictions around the world have laws that allow police to compel you to decrypt data, particularly at border crossings.

  • Jorge1o1 2 years ago

    This isn’t the intended use case, but one upside of cloud desktops is that if I ever forget to bring my work laptop with me, I can RDP from a friend’s computer, etc.

    In one particular industry that is rife with cloud desktops, you can be trusted to invest or trade $X million of someone else's money on a daily basis, or to model out a $Y billion M&A deal, but God forbid you try to install VSCode or MobaXterm on your own.

    IT presumably got tired of being bombarded with application install requests, so one solution is to use vendorized cloud desktops that come with pretty easy tools (for them) to install applications.

    • P5fRxh5kUvp2th 2 years ago

      > In one particular industry that is rife with cloud desktops, you can be trusted to invest or trade $X million of someone else's money on a daily basis, or to model out a $Y billion M&A deal, but God forbid you try to install VSCode or MobaXterm on your own.

      I feel this so much.

    • guhidalg 2 years ago

      I don't understand when and why so much power was delegated to IT w.r.t installing software. The FSF needs to start fighting IT and device management policies before talking about open source software.

      • origin_path 2 years ago

        It's because people install malware a lot. Usually it comes along for the ride with pirated software, creating a 2x headache. I remember when they introduced similar policies at Google for Windows workstations - the stated rationale was that Windows users would warez literally anything and this was independent of job role or position. Senior engineering managers would be warezing things and it would come bound to malware. So they moved to binary whitelisting, eventually :(

        Linux avoids this problem mostly because it doesn't have much commercial software to pirate in the first place.

  • dijonman2 2 years ago

    Companies with strict data protection policies can force the usage of VDE for sensitive tasks, as part of a broader DLP program.

    • AshamedCaptain 2 years ago

      Corollary: your company might force you to use a cloud VM desktop _even when your laptop is significantly more performant than the entire server hosting those VMs_.

    • Frost1x 2 years ago

      My work environment has a set of tasks that need to run exclusively in a tightly controlled cloud desktop environment. It's a nightmare.

  • thesh4d0w 2 years ago

    When your game developer has and needs a $4k workstation and works from home half the week, there's no reason to buy them another machine and have them maintain two separate workspaces; we just give them Parsec.

    All our staff seem happy and we don't get complaints. The author hasn't tried modern tools, it seems.

    Another use case: mobile people with laptops who sometimes want to hop into a playtest or show a game off to a vendor. No need for them to have a gaming laptop for the 99% of the time they don't need it, when an X1 Carbon + Parsec to a beefy box works fine.

  • jabroni_salad 2 years ago

    If your definition of zero trust includes the endpoint devices because they are in an area that the general public can access.

    If you want access to your desktop from multiple locations. Ex, at my local hospital the staff can tap their badge to any computer and instantly reconnect to their desktop exactly where they left off.

    If you are in a multi-site scenario but your big LOB app hates the internet, so you need all your clients to be in the same building as the server. This is actually the reason I deploy VMware Horizon... I'm not sure what Jack Henry and Fiserv are doing to make their overblown CRUD apps so network-heavy and inefficient to operate, but I'm happy they are finally rolling out their own cloud-first apps so they can deal with their own garbage instead of outsourcing their support to their customers' IT guys.

    If you literally just can't acquire hardware because of a pandemic and need more compute than you have on hand.

    • marcosdumay 2 years ago

      Your definition of zero trust can never include the endpoint devices. Those get to see everything you type on them, and have the same level of access to your services that their user has.

      With a remote desktop you are only adding a few more vulnerabilities; it can never remove any.

      (About the others: there is some nice work on program portability, culminating in fully distributed OSes, but those have seen no adoption. Instead, people prefer to hack distribution on top of the piles of hacks that are modern OSes. Obviously, it doesn't work well.)

  • dirtyid 2 years ago

    It's a pain to manage multiple systems/OSes that don't have functional parity. My Steam Link, tablet, and occasionally phone remote-desktop straight onto my desktop, with all the customizations I'm used to for daily tasks. It's just nice not having to adjust. The only problem (relatively new) is DRM preventing many streaming services from displaying video.

  • AtlasBarfed 2 years ago

    I'd like a synced work environment that supports working locally as well as remote-from-anywhere.

    Microsoft failed at a dozen sync frameworks (sync was to Microsoft what chat is to Google), and in 2020 this is still not really doable.

    Also, I still can't find a good guide for spinning up a desktop in AWS.

  • pjmlp 2 years ago

    Contractors: no way to take code off premises (assuming proper security settings on the VMs), and it's easy to get new instances instead of waiting for crap dual-core Dell and HP laptops with 8 GB of RAM and a 256 GB HDD.

  • NibLer 2 years ago

    I would love a great cloud desktop. No worries about local backup, ability to use older hardware.

ClumsyPilot 2 years ago

With cloud gaming you can stream 4K games at 60FPS, with clarity and quality for fast moving objects.

Why does remote desktop still shit itself when I move around MS Word with a few pictures?

I know a tier-1 financial company that offers $100k/year developers a slow VM, from which you have to log into another VM. The VMs are dual-core with 8 GB. I watch in horror as each keypress takes more than a second. The amount of lost productivity is in the millions.

Shadow offers a remote desktop environment with GPU acceleration where you can run games, and it feels responsive and decent.

  • prmoustache 2 years ago

    You can use the Parsec client, usually dedicated to gaming, to work on a cloud desktop. It works really well.

  • boppo1 2 years ago

    I would also like to know this. I'm interested in doing fintech development... but there's no amount of money someone could pay me to use a consistently laggy environment. This thread is making it sound like it's commonplace.

  • orloffm 2 years ago

    Those companies don't have GPU acceleration in Windows 10.

  • mvdwoord 2 years ago

    Application streaming was and perhaps is still a thing. I remember the softricity days... (Not fondly as the process to prepare application bundles was cumbersome).

  • hasel 2 years ago

    What’s the input latency like in those cloud games?

    • NotHereNotThere 2 years ago

      From a former Stadia user: latency was never an issue or noticeable with a 4K stream, and I've played quite a bit of fast-paced shooters on the platform.

      The experience is extremely dependent on location, bandwidth, local setup and availability of close services (in my case, the closest DC was <15ms away according to Stadia telemetry).

      • groovybits 2 years ago

        I've been a Shadow PC [0] user on and off for the past few years. The performance was very good, granted I have a 1 gigabit Internet connection.

        0: https://shadow.tech

    • asdff 2 years ago

      You can't play an FPS with them, but they are fine for turn-based strategy and similar games. I did some beta testing for Google Stadia with Assassin's Creed Odyssey when it was still being conceived, and while it was mostly playable (a single-player game, though; I would not consider it competitively playable against other humans), even with my wired 1 Gb fiber connection the service would have these huge drops down to almost 144p quality, along with framerate issues.

    • jasonlotito 2 years ago

      Not the OP, but i know for something like PlayStation Now, it was acceptable for most games. It was a pleasure to just decide to play a game and well yeah, I was playing the game. The latency wasn't a concern.

      I couldn't see myself playing an FPS on it, but then I prefer kb/m anyways. But for the games I play on console? It was fine.

      There can still be issues, of course, but the latency overall wasn't a deterrent.

    • iosjunkie 2 years ago

      You can try it for yourself with Nvidia’s GeForce Now free tier. Just connect with Ethernet if possible.

      My experience is that it’s definitely playable and beats a low power laptop with an underpowered video card. Latency in the 40ms range, and barely perceptible.

    • thebitstick 2 years ago

      I have 100 Mb/s internet and I cannot notice it on Ethernet on my M1 MacBook Air, or on Wi-Fi on my iPhone or iPad. Wi-Fi on the M1 Air is garbage, so it's very noticeable there.

GianFabien 2 years ago

The cost of a new fully-spec'd workstation plus a high-performance laptop is tiny compared to the salary of a good software developer. Management has a warped sense of how to save money, and as a result grossly hurts morale and productivity where it matters the most.

  • VyseofArcadia 2 years ago

    It's not one big pile of money. Business expenditures are treated differently depending on what you spend it on.

    My loose understanding is that capex (e.g. hardware) and opex (e.g. salary) are treated differently in a lot of ways. Some of it is taxes, there are deductions available for opex that don't apply to capex, at least in the US. Also, you can cut your expenses on opex to balance your budget (e.g. layoffs), but it's harder to recoup the sunk cost on capex.

    Cloud desktops turn some capex into opex. Depending on how many employees you have, it can be a sizeable chunk of change.

    I still think it's only worth it in specific edge cases, though.

  • tpush 2 years ago

    > The cost of a new fullly-spec'd workstation + high performance laptop is tiny compared to the salary of good software developers.

    That depends highly on where you are in the world.

    • marcosdumay 2 years ago

      It almost doesn't. Computers are now cheap enough, and developers mobile enough, that it holds throughout the developing world and most of the underdeveloped world.

    • asdff 2 years ago

      $3000 every 3 years is not a lot of money to spend per head, unless you are considering somewhere with developer salaries in the $10,000 range or something like that.

  • johannes1234321 2 years ago

    Hardware appears as an expense on the balance sheet. Lost time does not.

  • ClumsyPilot 2 years ago

    Some companies in UK still provide 1080p monitors of the lowest quality

    • yrgulation 2 years ago

      In my whole career I've only had one company that provided Windows machines, and as expected it was a horrible place to work. My current client wants to do the same. Coincidentally, the place is becoming less desirable to work with.

      • wongarsu 2 years ago

        Outside of Silicon Valley, the business world runs on Windows, no matter how cheap or expensive the machines are.

        Being inflexible can certainly be a red flag though.

        • qwezxcrty 2 years ago

          Ironically, in the fabs making chips for Silicon Valley, (almost) everything runs embedded Windows.

        • yrgulation 2 years ago

          My clients are all based in the UK and nearly all run on macOS, except those at the bottom.

          • aeyes 2 years ago

            But the problem isn't Windows or macOS. It's all the corporate spyware, antivirus, network interception, and whatever else they come up with, making the machines work like it's 1995.

            • yrgulation 2 years ago

              That is spot on and indeed a red flag. Imagine hiring people and then spying on them.

    • theandrewbailey 2 years ago

      At least they provide monitors. I imagine that many don't.

the8472 2 years ago

Another one is an unholy confluence of corporate compliance bullshit.

Connecting to the remote machine has to go through corporate SSO (in a browser), which then starts the native remote client. Policy requires MFA, strong, frequently changed passwords, and Windows Hello on the laptop. Policy also requires screen lock after 5 minutes. For some reason policy also requires disabling copy-paste to remote machines.

The end result is that the remote session gets locked every 5 minutes whenever you do something in the laptop's browser instead. To log back in, one either has to enter a long, complicated password (can't paste it from the password manager!) or use an MFA code. Hardware tokens don't work either, due to unreliable USB forwarding.

Having to jump through those hoops once or twice a day would be tolerable, dozens of times is grating.

I assume the policies are written for all the worst-case scenarios where people remote in from private, shared devices or use a laptop in a public place. But they add a lot of unnecessary friction when a laptop is used from a lockable home office.

  • r00fus 2 years ago

    Why would they disable paste-into? I can understand not allowing data egress, but ingress? I don't understand what scenario they're protecting against.

  • vorpalhex 2 years ago

    I had a similar problem that required me having to auth 36 times a day.

    I told them to fix it or I was opening a health claim for back issues from having to 2factor.

    Took them a few weeks but now I auth once or twice a day.

  • tenebrisalietum 2 years ago

    A small Windows utility called Caffeine generates fake keypresses (F15, which is not on most keyboards) and/or moves the mouse slightly, which prevents the screen lock from kicking in.

  • BenjiWiebe 2 years ago

    Does your password manager have an auto-type feature?

    • the8472 2 years ago

      The remote desktop app captures the shortcut that would normally trigger auto-type. If I invoke it from the password manager's GUI it does work.

mgkimsal 2 years ago

Have generally been skeptical of 'cloud desktop', but... a friend of mine got into sales for a cloud desktop provider about 6-7 years ago. There was only one really strong use case, and she sold to that niche: a specific CAD/modeling/rendering vertical whose software was a CPU bear. Running it 'remotely' in the cloud was much faster than anything those shops could run locally. Managing all the licensing and security/permissions there was an added benefit, but she was also mostly selling to smaller firms that didn't have full-time staff to handle that.

For the market she was in, at the time, there was a moderately clear win. I watched a pitch, and the speed difference was real. The productivity gain of many folks saving an hour or two of rendering time was easily worth the... I can't remember - $200/month/seat maybe? Outside of those types of use cases, the benefits were harder to justify. And... in 2020+... I'm unsure whether local desktop CPUs have caught up enough that the benefits are smaller.

  • VyseofArcadia 2 years ago

    I worked in architectural CAD for a number of years (as a software engineer, not an architect), and this surprises me. Sure you needed a PC with a little bit of beef to work in it smoothly, but not unaffordably beefy. A not all that recent macbook was good enough for pretty much all of our customers. I left the company in 2019, and plenty of people were still working on a 2012 macbook on the latest version. My own dev machine was a 2015 macbook.

    We did offer cloud rendering as a subscription service, but most people just did big renders overnight, and that was usually animations, not single-frame renders.

    I'm curious which CAD software is such a bear that cloud desktop was worth it. Whether the particular industry was just that incredibly complex, or if the product was just slow and inefficient.

    • mgkimsal 2 years ago

      The details are hazy, and our paths don't cross any more, so I can't say for sure. Yes, overnights were still done in some cases, but they were able to do more 'in day' smaller renders (IIRC) that was making it worthwhile.

      IIRC... she'd done some consulting work for a particular firm, and they were investigating cloud stuff. When they chose that vendor, she saw how much of an impact it was making, and she contacted the cloud vendor and became a salesperson/evangelist.

      You were on a MacBook. They were all in the Windows/MS world, so perhaps there was something about that software that just was 'better on windows in the cloud'. Again, sorry I can't remember too many more details. I do suspect times have changed some, so the ROI may not be there any longer (and maybe wasn't there at the time for many folks).

      • VyseofArcadia 2 years ago

        The mystery deepens. I mentioned my Mac because it's an easier comparison to make, but we were cross-platform and I also had a contemporary ThinkPad. In fact, I worked mostly on the ThinkPad because I preferred Visual Studio to Xcode.

        Most of our customers did stick to mac, though. A lot of architects fancy themselves designers and really buy into Apple's marketing towards creatives.

        • Dracophoenix 2 years ago

          I would've thought architectural CAD like Revit would be computationally heavy. What rendering engine(s) do you use?

  • geoduck14 2 years ago

    This sounds like a good use case!

    I'm considering cloud desktop at work right now for something similarish. We have a fat pile of data and want to let people use the data. If we give them VMs on the same network as the data (with super high bandwidth, CPU, and GPU), they can manipulate the data quickly.

mwcampbell 2 years ago

I'm glad he brought up accessibility. My company has been working on a remote desktop product [1] that addresses this issue, particularly for blind users. The connection carries audio output from the remote machine, and the keyboard input handling code on both sides is designed to work with the quirks of screen readers, so running a screen reader on the remote machine works well. Beyond that, if the remote machine isn't running a screen reader, there's a way to get speech output on the controlling machine using the open-source NVDA screen reader for Windows, without requiring audio output on the remote machine. We still need to work on Braille output and screen magnification, and we've only started thinking about alternate input methods, so this doesn't cover everything, but the problems are solvable.

[1]: https://getrim.app/ I don't normally self-promote commercial products like this, but this is relevant to the article, and I thought people might find it interesting.

Rygian 2 years ago

It is good that these "water is wet" statements get written down so we can point humidity-skeptical people to them from time to time.

The deeper problem is the sad state of affairs of distributed computing for the end user:

* Application instances expect to be the only ones modifying the files that underlie the document being edited. Most of them simply bail out when the files get modified by another application.

* The default is "one device = one (local) filesystem" which is the exact opposite to what everyone needs: "one person = one (distributed) filesystem."

* The case for local-only filesystems only addresses corner cases, or deficient distributed file systems that fail to uphold basic security constraints (such as "my data is only in my devices" or "no SPOF" for my data).

* Whatever gets pushed to the cloud becomes strongly dependent on devices and vendors. Users end up handcuffed to a specific hardware (iCloud) or software (Android) if they want to have any chance of interacting with their own documents from their own devices.

* What we need is not cloud desktops, or cloud storage. We need local desktops with a decent distributed filesystem, and vendor agnostic access to that filesystem from all our devices.

  • josephg 2 years ago

    I couldn't agree more. I've been working on CRDTs the last few years, and there's a huge opportunity here if we can reinvent the concept of the filesystem. Ideally, we'd replace files with CRDT-backed objects in the operating system. Then instead of fread / fwrite calls (where saving typically means wastefully rewriting the whole file), applications could express semantic changes which get saved in a log.

    Those changes can be transparently replicated between applications, between devices and between users. We'd get better performance on-device, and automatic, transparent device-to-device replication. And we could trivially enable realtime collaborative editing between users. Better still, if it happened at the OS level, we could make it work in every application on the system.
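
    As a toy illustration of what a CRDT-backed object might look like (names invented here, not from any real system), a last-writer-wins map is about the simplest CRDT there is:

        import time

        class LWWMap:
            """Toy last-writer-wins map CRDT: each key keeps (timestamp, replica, value);
            merging replicas keeps the entry with the largest tag."""

            def __init__(self, replica_id):
                self.replica_id = replica_id
                self.entries = {}  # key -> (timestamp, replica_id, value)

            def set(self, key, value):
                # A semantic change, tagged so any replica can order it later
                self.entries[key] = (time.time(), self.replica_id, value)

            def merge(self, other):
                # Commutative, associative, idempotent: apply in any order, any number of times
                for key, tag in other.entries.items():
                    if key not in self.entries or tag > self.entries[key]:
                        self.entries[key] = tag

        a, b = LWWMap("laptop"), LWWMap("desktop")
        a.set("title", "Draft")
        b.set("title", "Final")
        a.merge(b); b.merge(a)  # both replicas now hold identical state
        assert a.entries == b.entries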

    Right now "linux on the desktop" is slowly and inevitably dying in the face of cloud services. How would OpenOffice even compete with Google Docs? Do opensource application authors need to run their own web servers? (And if so, who pays for that?). If we replaced the filesystem with CRDTs, openoffice (and every other program on the desktop which edits "documents") could have best-in-class collaboration features, right there out of the box.

    There's an opportunity here to build a really amazing computing system.

  • jasode 2 years ago

    >What we need is not [...] cloud storage. We need [...] a decent distributed filesystem,

    Distributed files to where exactly? You need to be more concrete about the remote location of non-local data that normal people can use. Ok, so you want "distributed filesystem" to not mean "cloud storage" ... So is it p2p? Something else?

    In other words, we want Windows, macOS, Linux, iPhone, Android, etc operating systems... to have a file system that all points to the same "distributed filesystem" and see the same files -- and for other collaborators to see those files.

    But we don't want those os configurations to point to DropBox / MS OneDrive / Google Drive / Backblaze, etc. So, we need to be concrete on the alternative common remote location that those file system APIs would point to. What would the topology of that solution look like?

    • jagged-chisel 2 years ago

      > Distributed files to where exactly?

      My desktop computer, my laptop computer, my tablet computer, my pocket computer. Whether that’s cloud or p2p doesn’t matter to me, the user. I should be able to start working on a spreadsheet or presentation on one and, without the ceremony of “save to a shared location, close the app, switch devices, open the app … now where’s that file again?” switch to another and continue editing.

      First we need to specify a distributed fs. THEN we can decide the “to where” bit.

      • generalizations 2 years ago

        That sounds like Syncthing. Dunno about phones and tablets, but I have that functionality among my computers.

        • pluijzer 2 years ago

          I have no affiliation but want to second this.

          If you want to keep your filesystem in sync across many devices Syncthing fully enables this.

          It is a use case where you would expect a paid service to be easier or more reliable, but with Syncthing it is the exact opposite. Just install it on your devices and select the folders you want to keep in sync ... done.

          I have never had any problems with it, something I can't say about Dropbox, which can be terribly slow, hogs my PC, and has lost files on occasion.

          • Rygian 2 years ago

            My filesystem consists of [checks du -h . | tail -1] 189 GB.

            I don't think Syncthing (which I love) can cram 189 GB on my 64 GB phone.

            Yet I expect to have access to my filesystem from my phone.

            Syncthing is a nice "pump-hose system" between reservoirs of data. What I was arguing above is to stop having separate reservoirs of data to begin with.

            • jcelerier 2 years ago

              I use Seafile (similar to Syncthing) and it allows you to browse your data libraries without requiring a local copy, for these cases.

              • Rygian 2 years ago

                The issue is not so much "can I browse a virtual file system" but more "why should I depend on one local file system sitting on a remote server, probably owned by a third party, as being the single source of truth for my own files."

                • jcelerier 2 years ago

                  ? With seafile the data is only on computers I physically own

        • josteink 2 years ago

          I use syncthing. It's awesome for computers. Coming from Dropbox, then Nextcloud, I find it solves all my needs much, much better, at least on well-supported platforms.

          I love how I can decide what to sync where, and even create my own topology of sync-devices if I like. That may sound like crazy complex stuff and over-engineering and what not, but it was a solution I landed on organically, just through normal use.

          That said, it's not entirely smooth on iOS, and you sometimes need to manually launch the (third-party) app to force a sync after changing some files.

        • 0xCMP 2 years ago

          Syncthing doesn't solve the problem OP is talking about. It's amazing software that works as long as you don't have to sync the same file edited on two machines before they have a chance to sync. There is no logic, besides something using CRDTs, that can reliably resolve the conflicts in every situation when you just have two sets of bytes and nothing else.

          Even if you maintain "last synced" copies + the current latest version and use those to compare against the server, there are still simple situations where the conflict resolution doesn't work and/or requires user input. Anything that requires user input like that can't properly sync binary files without resorting to making a new "File (Conflicted 1).sqlite" which you now need to compare manually.
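
          A sketch of the core limitation: given only the bytes and a last-synced base copy, a generic sync layer can detect a conflict but not resolve it (hypothetical helper in Python):

              import hashlib

              def digest(data: bytes) -> str:
                  return hashlib.sha256(data).hexdigest()

              def three_way_sync(base: bytes, ours: bytes, theirs: bytes) -> bytes:
                  """Byte-level sync: resolvable only when at most one side changed."""
                  if digest(ours) == digest(theirs):
                      return ours        # both sides made the same change
                  if digest(ours) == digest(base):
                      return theirs      # only the other side changed
                  if digest(theirs) == digest(base):
                      return ours        # only we changed
                  # Both sides changed: without knowing the format there is
                  # nothing left to do but keep both copies and ask the user.
                  raise RuntimeError("conflict: keeping 'File (Conflicted 1)'")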

          It just isn't the same thing

          • generalizations 2 years ago

            But that's pretty much a fundamental issue - you can't create information about the system that just isn't there.

            I think the only way around that limitation is to have a node that's always on - make sure it's always known what order the changes were made in.

        • solarkraft 2 years ago

          I just wish Syncthing would allow for deferred sync, i.e. you see the file, but it only gets fetched once you access it.

          That's, imo, the only way to sync large folders. I don't need all my Documents/Photos/Movies/Whatever on my phone at all times, but I do wish I could access them when I need them.

          • rcarmo 2 years ago

            OneDrive does that on Windows and Mac.

        • navane 2 years ago

          I have it running on an ancient Android (4.4) phone, perfectly.

      • franga2000 2 years ago

        The "to where" bit is really important when specifying the fs. If it includes at least one high-bandwidth high-storage high-uptime device (like a server), the requirements and capabilities change drastically compared to if it's composed of a bunch of battery-powered portable devices on limited data plans.

      • resizeitplz 2 years ago

        Funny you use a spreadsheet as an example. That's been the default for Excel for years. Save to SharePoint/Teams/OneDrive (whatever MS is calling it these days, it's all the same backend) is the default option - and multi-user live editing (or one user in multiple sessions) just works.

    • Rygian 2 years ago

      To where exactly: my devices. And if I don't want to buy my own devices, then to a cloud service that offers opaque storage of binary blobs with an API that my filesystem can abstract for me.

      So the topology is a mesh network of my devices, and perhaps optionally a few defined remote endpoints that the opaque blob storage service provides me, and that I enter as part of the config of my filesystem.

      • lupire 2 years ago

        That sounds like Cloud Storage to me (Dropbox, Google Drive Backup/Restore/Sync/whatever it's called this year).

        • Rygian 2 years ago

          Cloud storage is the opposite of distributed.

          With cloud storage, you must have one single fixed central location (often a third party) that contains the real data, many satellite locations with a partial replica of the data, and hit-or-miss mechanisms to notice changes in replicas and propagate them to the central location. If the central location is down there is no more synchronization. If the central location is not yours, they can shut you down anytime.

          A distributed filesystem does away with the need for a fixed central storage by storing data across all locations with a configurable level of replication. A strong, consistent cascading of changes (eg a crdt semantic) brings all replicas in sync whenever connectivity allows. No third parties need to be involved, no single device is a point of failure.

    • oriolid 2 years ago

      This is the problem that needs to be solved. Cloud storage and p2p are solutions looking for a problem, but it would be nice not to let them distract us too much.

  • api 2 years ago

    > What we need is not cloud desktops, or cloud storage. We need local desktops with a decent distributed filesystem, and vendor agnostic access to that filesystem from all our devices.

    That's absolutely spot on. The problem is: who is going to pay for it?

    No vendor will do this because it would break lock-in, and building something like this and making it polished enough for widespread adoption is far beyond what pure volunteer open source can reasonably accomplish.

    The problem is economic, not technical. There is no business model for user-empowering software anymore.

    Software is extremely costly to produce but we pretend it's free and won't pay for it directly, so instead the industry has deeply wrapped itself around business models in which we are the product or that use lock-in to force payment eventually.

    • ElFitz 2 years ago

      Quite a few complex solutions used (mostly unknowingly) by vast numbers of people have come from pure volunteer open source work.

      How much we can expect that to continue is a whole other matter though.

      • api 2 years ago

        If you dig deeply you’ll see that a large fraction of that is actually employees at big companies, universities, and governments. In other words it’s subsidized. Any on-the-clock OSS work is a subsidy. It’s not pure volunteer.

        This tends to be done when there is a strong common interest, but it’s almost always for deep tech and dev tooling stuff. I have never seen an open source consumer product subsidized in this way because consumer lock in is where the money is.

        You will never see an open Uber, Ring, or Alexa unless a way can be found to charge for it. As it stands free means “as in beer” more than freedom and nobody would pay for such a thing.

        I have played with stuff like Home Assistant. It’s not bad if you are technical. A non-techie could never deploy it.

  • Shorel 2 years ago

    > We need local desktops with a decent distributed filesystem, and vendor agnostic access to that filesystem from all our devices.

    I am very happy with pCloud. One of the reasons I got it is: it works very well on Linux. It works on Android. It works on Windows.

    And it works in the browser, for things like video and photos.

    Also: no risk of trigger-happy account deletion like with Google, if pCloud dies my email still works.

    Previously I used OVH online drive service, but it was EOL and pCloud is the replacement.

    • Rygian 2 years ago

      > Previously I used OVH online drive service, but it was EOL and pCloud is the replacement.

      So a vendor has the power to disrupt you whenever they feel like EOLing the service you depend on. I understand that's as good as it gets today, but it's not good enough.

      For me, the only acceptable level of impact is as follows:

      * vendor X sunsets their service by date D

      * before date D, I sign-up for a new account with vendor Z and configure it in my filesystem settings

      * I set vendor X as "deprecated,EOL-date=D" in my filesystem settings.

      Then my filesystem takes care of everything else for me transparently, with zero downtime and zero effort. Date D comes and goes and I haven't noticed a thing.

      • Shorel 2 years ago

        That's how it happened. OVH announced the EOL about a year before the system was shut down.

        pCloud is a replacement I chose; it has nothing to do with the old OVH service.

        • Rygian 2 years ago

          Do you imply that the change was fully transparent to you, except for a config change?

          • Shorel 2 years ago

            I had to remove the old client and install the new one, and log in to the new account.

            In all devices.

            Apart from that, yes.

    • atentaten 2 years ago

      How does pCloud differ from Dropbox?

      • Shorel 2 years ago

        I paid for a lifetime subscription.

        One payment and so far no complaints at all.

  • ElFitz 2 years ago

    It’s incredible how many systems are basically two silos that sometimes somehow sync in a totally custom manner, when we have so many ways of keeping distributed systems in sync.

    Especially true for mobile.

  • hdjjhhvvhga 2 years ago

    > * What we need is not cloud desktops, or cloud storage. We need local desktops with a decent distributed filesystem, and vendor agnostic access to that filesystem from all our devices.

    While I agree, this by itself doesn't solve the problem: when you depend on such a FS for your work, losing network connectivity means you can no longer work.

    • Huh1337 2 years ago

      Ideally that would be handled on the FS layer and completely transparent to all apps. Things would get synchronized once connection is restored.

      • viraptor 2 years ago

        You can't just synchronise things without knowing the file formats. You can't do a seamless distributed FS which allows offline changes.

        Or more precisely, you can, but only by picking the freshest copy, and people have lost work that way. There are a few "dropbox ate my files" stories out there.

        • marcosdumay 2 years ago

          That is usually handled by exposing a "driver" API where the relevant programs can install merging components.

          And yes, default to picking one version, with an extended interface for displaying and managing conflicts.
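
          Git's merge drivers work this way; a filesystem-level analogue might look like this hypothetical registry (all names invented for illustration):

              # Format-aware programs register a merging component for their files
              merge_drivers = {}  # extension -> fn(base, ours, theirs) -> merged

              def register_driver(extension, fn):
                  merge_drivers[extension] = fn

              def merge(path, base, ours, theirs):
                  ext = path.rsplit(".", 1)[-1]
                  driver = merge_drivers.get(ext)
                  if driver is None:
                      # Default: flag the conflict and keep both versions
                      return ("conflict", ours, theirs)
                  return ("merged", driver(base, ours, theirs))

              # e.g. a calendar app whose files are sets of events might install:
              register_driver("ics", lambda base, ours, theirs: ours | theirs)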

        • jeffreygoesto 2 years ago

          I assume that "decent" in the comment meant to address that?

          • viraptor 2 years ago

            It would have to be magical not just decent. This is not solvable on the level of the file system. You can't add a blue line to an image on one system and a red line on another system and expect the filesystem to somehow figure out how to handle that on its own. The best you can count on is the conflict being flagged with both versions exposed.

        • Huh1337 2 years ago

          CRDTs?

          Perhaps the idea of plain text/binary files is a little outdated too.

          • viraptor 2 years ago

            CRDT is extremely format-specific. File systems don't operate at that level. And that's before we even decide whether the merged edit is what you actually want.

      • Multicomp 2 years ago

        I mean, isn't that a potential use case for Syncthing? If I go offline on one of my devices, my files are still locally on the system. When it comes back online, it re-syncs so the other devices get my latest files.

        • hdjjhhvvhga 2 years ago

          This is a very simple use case when you are working alone. Think about a team of people and hundreds of potentially conflicting changes to review manually.* Sometimes a tangible divide between online and offline is extremely useful.

          *) Unless you believe this can be resolved by software - I'm afraid we're very far from that point yet.

      • rand49an 2 years ago

        But then you have to deal with conflicts in a sensible way that won't lose users' files, and make it simple enough for people to choose which files to synchronise.

      • gw99 2 years ago

        That would have been nice. Cached 9P perhaps.

        Instead we got OneDrive, Dropbox and iCloud. Ugh.

    • Rygian 2 years ago

      Couple of buzzwords that address this point:

      * CRDTs

      * "Intelligent Edge Platforms" as Ditto [1] calls them

      [1] https://www.ditto.live/

bhauer 2 years ago

Using a third-party cloud only ensures that all work scenarios enjoy the same lowest common denominator.

My preference is to select one of the work contexts (e.g., the office) as primary and to put a workstation there, then remote to that workstation from secondary contexts (e.g., at home). This configuration gives me first-class computing where I need it most, in the primary context, and a decent second-class option when I need to work in other contexts.

I happily worked with this configuration for more than a decade and found it served all of my local and remote needs.

cdkmoose 2 years ago

This also requires access to a stable, fast network at all times. Local internet goes down, AWS/Azure/GCP goes down, and I'm stuck. With my laptop setup, I can work anywhere anytime, as long as I have power. I'll need network access at some point to commit code or pick up changed libraries, but that can be managed.

  • anxiously 2 years ago

    Maybe that is not a bad thing? If you don’t have internet then go for a walk. Touch some grass.

    I hope always being in a work state isn’t seen as a plus.

    • cdkmoose 2 years ago

      I can work remote from Maine for a week with limited internet access. I'm not working any extra, I'm just choosing what environment I want to work in.

    • lupire 2 years ago

      With my laptop, I can work before, during, or after touching grass, instead of taking a long wasteful commute.

sascha_sl 2 years ago

I've done this for a long while, and I always come back to the only two viable competitors in the space (that don't require enterprise licensing).

NoMachine and ThinLinc.

Everything else is fine for the occasional remote desktop administration, but they all have a combination of bad video quality, no audio, no keyboard shortcut capture or bad scaling options.

  • rcarmo 2 years ago

    You should really try xorgxrdp. It is now part of most modern distros, has good audio support, and excellent video quality (I can watch YouTube on a Pi running Remmina, using nothing but stock OS packages on both client and server).

    One catch is that some distros (like Fedora) for some reason use the Xvnc backend for xrdp by default, which is idiotic. Just go into xrdp.ini and enable the Xorg section (get rid of the Xvnc one) to get things to work properly.
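
    For reference, the relevant stanza in /etc/xrdp/xrdp.ini looks roughly like this (exact contents vary a little between xrdp versions):

        [Xorg]
        name=Xorg
        lib=libxup.so
        username=ask
        password=ask
        port=-1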

    I personally cannot abide NoMachine (it was a spectacular fiddly failure for me many times, especially on Mac and Windows clients) and never found ThinLinc to beat the simplicity of RDP, especially considering the client software (I have very configurable RDP clients like Jump Desktop for iOS and Android that work perfectly with Bluetooth mice and keyboards, so I seldom pack a laptop these days).

    • spaniard89277 2 years ago

      Does it work out of the box? Multiple screens? Does it work OK with subpar connections, like 4G? Just asking, didn't know about it.

      Currently using Linux Mint, I guess I could try.

      • rcarmo 2 years ago

        Yep. Once you edit the xrdp.ini file to enable the Xorg backend (which I've done recently on Fedora 36 and Ubuntu LTS), you're good.

        Multiple screen support depends on your client, I've had no issues with Windows and Mac clients. Audio depends a lot on your server distro (clients have it all sorted). Fedora uses pipewire, so YMMV.

        When using mobile connections I trim it down to 16-bit color and it is perfectly usable, although if you're doing that all the time I'd also remove wallpaper and shadows (I prefer using something like XFCE when doing that - https://taoofmac.com/space/blog/2022/04/12/2330).

        • sascha_sl 2 years ago

          I had to switch back to Pulseaudio and compile the audio plugin for XRDP myself (that was on Fedora 35).

          • rcarmo 2 years ago

            You're probably right about that. My Fedora container is 36 upgraded from 35, so I might have done that a while back and never looked back.

  • AshamedCaptain 2 years ago

    I happen to agree about NoMachine, especially because it actually supports transporting X rendering commands, which is still miles better than transferring video (and you can have e.g. 8K resolutions without requiring a server farm for encoding).

    I also like that they allow rootless mode (i.e. without the desktop), which kind of solves the problem of "local browser, remote desktop" that TFA is complaining about. Local windows are practically indistinguishable from remote ones.

    Much of this also applies to X2Go/FreeNX, though that seems to be a bit more regression-prone.

alexeiz 2 years ago

It's about two things: 1) latency, 2) cost.

For the latency: 100ms is where the threshold is. Above 100ms, you start to really notice the latency, and it becomes annoying to the point that you even start making mistakes while typing. Let's take an example: the average latency from my home laptop to a server in the AWS cloud is 20ms. If I add a GUI remoting solution (such as Xpra, which is pretty good wrt latency), the latency increases to 60-80ms (and this is just for remoting a single GUI app like VSCode, not the whole desktop). Now add the latency of the app itself, which for VSCode is about 50ms. The total latency becomes 110-130ms. So latency-wise the experience of working with a cloud desktop is noticeably worse than my local developer laptop.

For the cost: my developer laptop costs about $1500. 16 cores, 32 GB of RAM, 1TB SSD. The equivalent cloud desktop setup would probably be around $400 a month. So in just 4 months the cost of the cloud desktop will exceed the cost of the laptop.

In my opinion, cloud desktops only make sense when you're not sure how much capacity you need. Is 4 or 8 cores enough for your work? 16 or 64GB of RAM? The cloud desktop setup is flexible: if you need more, you allocate more. But once the capacity is known, you should switch to your own hardware to significantly reduce the cost and actually improve the experience.

8organicbits 2 years ago

> Modern IDEs tend to support SSHing out to remote hosts to perform builds there, so as long as you're ok with source code being visible on laptops you can at least shift the "I need a workstation with a bunch of CPU" problem out to the cloud.

I'd mention SSH port forwarding in this section. For webdev you'll want to run your server on the remote host and use the local web browser. SSH port forwarding works great for this. I recently used this setup to get some extra RAM for a short project that could only be run as a collection of memory hungry microservices. This way I could get the whole thing running on one box; I spun down the server once the project was done.
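
For example, with the dev server listening on port 3000 on the remote box, a single flag makes it reachable from the local browser (host name is a placeholder):

    ssh -L 3000:localhost:3000 dev@build-box.example.com
    # then browse to http://localhost:3000 on the laptop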

  • qw 2 years ago

    JetBrains has support for remote development environments. I haven't tried it myself, but it looks promising.

    https://www.jetbrains.com/remote-development/

    • MarkSweep 2 years ago

      I can confirm that using JetBrains Gateway to run IntelliJ is more pleasant than using Chrome Remote Desktop, X11 forwarding, JetBrains projector, or Xpra.

      • nsm 2 years ago

        Is there a way in JB Gateway to re-use an existing project?

        I am in the office 50% of the time, working on my desktop with CLion. The other 50% I'm remote. The one time I tried Gateway, it asked me to create a new project, which meant building another index of a giant codebase, and not having any of my per-project settings. The time and disk space of another index turned me off right away. I couldn't find a way to just re-use the existing project. So for now I just suffer through using NoMachine to remote in, and then operate CLion.

      • sz4kerto 2 years ago

        Except it's still buggy as hell.

        • MarkSweep 2 years ago

          I should have said, being better than those alternatives is a very low bar.

orloffm 2 years ago

Cloud desktops were fine a few years ago, in the Windows 7 era, when the desktop was 2D and could be remoted as GDI instructions.

Windows 10 made everything 3D, so now not having a GPU assigned to a virtual machine means everything is first rendered into a bitmap and then sent over the wire as a movie. This causes additional delay, JPEG-like artifacts, and instability.

free652 2 years ago

I work at a company that gives you a Chromebook and a cloud desktop. Works great for me.

VS Code for development, SSH instead of a remote desktop; most operations can be done via VS Code anyway. Chrome RDP is slow, I agree. I never use it anyway.

  • jopsen 2 years ago

    I have a fast laptop and solid desktop...

    But I frequently use vscode+ssh+tmux on the desktop when working from home.

    Then the powerful laptop just has to run chrome.. which to be fair, it barely does without crashing :)

GianFabien 2 years ago

Give me a powerful workstation at work. Keep the laptop, I'm not doing extra work from home. Of course I have an even more tricked out system at home, but that's for playing games and working on my side-hustle.

teeray 2 years ago

All these dev-machines-in-a-cloud sound wonderful from a security, compliance, and onboarding perspective. What is often forgotten is that this is now a service you’re operating and a massive SPOF. If it goes down (and it will), productivity drops to exactly zero. It’s like sending your devs home until it’s fixed.

  • snotrockets 2 years ago

    Not that much different than a power failure at the office, or your uplink going down. Both more frequent, in my experience, than cloud zonal outages.

    • origin_path 2 years ago

      Laptops have batteries that can ride out a power failure of typical duration, especially for non-dev workers. People can get a lot done without the internet.

alchemist1e9 2 years ago

I don’t see Xpra mentioned in comments yet.

Works pretty well for us since remote windows and local ones are seamlessly integrated and managed by the local WM. Solves the multi-monitor issues. Definitely lower latency than VNC or RDP or NoMachine in our testing. Windows, Mac and Linux clients all work well.

  • rcarmo 2 years ago

    I have xpra installed but seldom use it since it has no real benefits (for me) over RDP (which does multimonitor, audio, etc.), but I also need to access Windows desktops.

martinald 2 years ago

The later versions of RDP are miles ahead of any other remote desktop protocol in my experience. I used to use it for gaming years ago (from a Windows machine to a Mac) - it really isn't that bad if latency and bandwidth are acceptable.

As others say, it is very hard sometimes to detect what is local and what isn't with RDP. Everything seems to just work, even using the Mac client.

Compare this with everything else I've used and it's a real janky JPEG compression mess.

  • qwezxcrty 2 years ago

    RDP is indeed highly performant, but the experience depends heavily on latency.

    I work daily in a nanofabrication center and routinely RDP to workstations running Windows Server from computers with a 12-year-old i5 (which are supposed to be single-purpose, for tool billing). I write code with VSCode and Matlab, view GDS with KLayout, run ANSYS and COMSOL. Everything works well, even with all the 3D, despite the age of the terminal computer. However, this depends on a decent LAN with a <1 ms ping (physical distance ~500 m)...

    When working at home through VPN, the experience degrades to fluid but with occasionally noticeable latency. And when using the shitty public wifi at a train station, every keystroke takes a noticeable time to echo...

    • rcarmo 2 years ago

      Turn it down to 16 bit color in those situations.

mickeyk 2 years ago

As a software dev, I like using whatever as a local env but SSH'ing into something more powerful to perform any heavy lifting. There are also tools like VSCode Remote that make it almost like developing locally. That said, the most taxing tools that I use regularly are things like video conferencing and "collaboration" tools like Miro. These things are hell.

  • nicoburns 2 years ago

    > the most taxing tools that I use regularly are things like video conferencing and "collaboration" tools like Miro

    Do you mean computationally taxing or mentally taxing?

    • p_l 2 years ago

      In my experience, both?

      And combine the computational heaviness of running stuff like Miro with the inherent complexity of the collaboration task... Ehhh

  • maeln 2 years ago

    That is not what the article is talking about:

    > I'm also going to restrict this discussion to the case of "We run a full graphical environment on the VM, and stream that to the laptop" - an approach that only offers SSH access is much more manageable, but also significantly more restricted in certain ways. With those details mentioned, let's begin.

0xCMP 2 years ago

Using something like Coder to provision workspaces, plus VS Code, SSH, and Wireguard/Tailscale, is an absolute dream.

I hope many in these comments get to experience this (especially Coder V2, which is far more flexible in provisioning workspaces) instead of the RDP nonsense that others have to suffer through.

While Parsec is good (and Nvidia GameStream + Moonlight seems better in my experience) it really isn't good enough to use instead.

Also, honestly, with the advent of things like Tailscale I think it'll become more and more common to have a desktop + a nice, but weaker/cheaper device (Chromebook, MBA, etc.) that you can securely access at your desk or remotely if you want. It's what I personally do with my Desktop and M1 MBA right now.

Also want to add that dedicated servers aren't that expensive by comparison and you can get a lot of value paying like $100/month and using that remotely.

rcarmo 2 years ago

Funny that I should be reading this on such a "cloud" desktop.

I have a Raspberry Pi[1] running Remmina and accessing a number of different machines via RDP - A personal Fedora 36 desktop running in an LXC container[2], a Windows VM on Azure, and various other similar environments. I am typing this on that Pi, through that Fedora session, pushed to a 2560x1080 display. Typing and typical browsing is almost indistinguishable from "being there". Coding too. It is only noticeable (on the Pi) when large parts of the screen update and the little thing has to chug along, but I'd rather have this completely silent setup than an Intel NUC.

For work, I do have a spanking-new work-issued laptop, but the fans spin up whenever I launch anything of consequence, so I am still logging in to a virtual desktop environment for everything up to (and including) audio calls (RDP has pretty decent audio support these days). Video and display sharing I still do locally, mostly because it's usual to switch environments during a call, but I have full multiple display support, and the connection can handle my 5K2K and 4K displays just fine.

I've been doing this for a decade or so, ever since I could use Citrix over GPRS. The user experience is fantastic - even at that time I could literally close my session in the evening, take a morning flight to Milan, pop open my laptop and continue where I had left off, over a piddling sub-64Kbps link.

With the right setup (and experience), latency issues mostly vanish. These days you can push a full 3D rendered desktop over DSL with either optimized RDP or game streaming, so the real constraints typically come from IT restrictions and people wanting to micromanage their environments.

That said, I also use VS Code Remote, and it works great for me as well over SSH. But it's just easier to spin up a VM/container and do that from my iPad :)

[1] - https://taoofmac.com/space/blog/2022/08/14/2030 [2] - https://taoofmac.com/space/blog/2022/04/02/2130

Edit: Remembered I shot this video of it running over Wi-Fi, unoptimized: https://twitter.com/rcarmo/status/1561397639215665153?s=20&t...

nefix 2 years ago

Disclaimer: I'm part of the IsardVDI project (https://gitlab.com/isard/isardvdi)

With RDP, in our experience, the latency issue is nonexistent. We've even successfully run workstations editing 4K video with zero issues. Yes, for those extreme cases you need a GPU (and the only option is NVIDIA GRID, which are really expensive cards with licensing on top of that), but for the most part, if the hypervisor has a good CPU, it's more than enough; we have clients that even use RDP through the browser.

You don't even need a really good internet connection. Also, SPICE is really good too, with excellent desktop integration.

Havoc 2 years ago

VSCode remote SSH is a decent compromise - the interface is "local", the stuff is remote. Hardly noticeable that it isn't truly local. The second the entire thing is piped through a VNC/RDP-type setup it becomes shyte to use.

theptip 2 years ago

I have never got this set up, but I think a hybrid approach could be quite good; something like pytest-xdist remote SSH builders perhaps. (Maybe you can rsync the diffs in the background though, instead of when you hit the "test" button, to speed things up?)

https://pypi.org/project/pytest-xdist/#sending-tests-to-remo...
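
Per the pytest-xdist docs, the invocation is shaped roughly like this (host, Python version, and package name are placeholders):

    pytest -d --tx ssh=dev@build-box//python=python3 --rsyncdir mypkg mypkg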

Running a local-first setup is nice for things like iteratively step-debugging your latest changes on a single test case, but being able to push the diffs to a fast remote build server (elastic cluster?) to speed up the "run all the tests" action would be nice.

I think you can do this with Clang remote builders too. I hear Bazel has this.

Is this something that anyone has experience with? It seems like it could be the best of both worlds, from a compute performance standpoint.

(As others have noted, the other big benefit of a cloud desktop is that you don't have to spend time setting up your dev environment, which is constant toil for new developers; Github mentioned this as a big contributor of friction in https://github.blog/2021-08-11-githubs-engineering-team-move....)

throwaway0x7E6 2 years ago

nobody thinks that. not in their sane mind. except for the "you vill ovn nothing, und we vill be happy" technocrats, but their reasons for that are their own.

kstenerud 2 years ago

Cloud desktops are not a cure-all, but they do have their uses. I have "cloud" desktops hosted in my home (on a NUC server) and also on cheap VPS instances, depending on my particular needs, whether that be an isolated environment for dangerous work, or just having a portable desktop that I can connect to from whatever device I have at hand, continuing from where I last left off.

GPU-intensive desktops are pretty much a no-go, but Mate desktop works beautifully and does what a desktop should: Manage my environment and get the hell out of my way.

Browsing on the remote desktop is anything but smooth, but it's good enough for development. I'm not going to stream video on it, though.

There's a tiny bit of keystroke latency, but not enough to matter IMO. I'm using Chrome remote desktop so YMMV. Running Steam on a cloud desktop is possible, but it's an exercise in madness.

I do it all using LXD to keep things relatively distro agnostic. I've posted the Python script I use here: https://github.com/kstenerud/lxc-launch

bluedino 2 years ago

I've been using this in one form or another for a very long time, starting when we had Windows desktops with bad Linux support, and back then you couldn't run both OSes on the same machine at the same time.

So I stuck a workstation with Linux on it in a closet. Fired up VNC and I could hit it from home, my cubicle, the road, wherever. It's evolved over the years as things became faster and more secure. It became a co-located server, then a VPS, and now it's a shared setup on a beefy server.

It maintains its state no matter where I go. I can open up two or more sessions for two or more monitors. But it's more useful to just surf the web or open PDFs or whatever on the local machine. Copy and paste is pretty seamless these days. And wherever it's located, it has a much better network connection than I do.

You still have a latency problem with large files (CD .iso in the old days, a 10GB package these days). I don't play games so I don't really know how that goes. But for development it works great, as well as just a general workstation.

zerop 2 years ago

I am studying whether cloud IDEs are a better option than giving laptops to the team (from a cost and management PoV). Any experiences with cloud IDEs for teams?

  • thehappypm 2 years ago

    I believe this is how Google does development.

netfortius 2 years ago

A combination of AWS Workspaces and AppStream solutions worked fine for us, with a few hundred developers and data scientists spread all over the world - FTEs on the former, contractors on various continents on the latter - including some M&As we conducted and continue to undergo, which require(d) bringing new teams up to speed in a very short time.

forrestthewoods 2 years ago

It’d be nice if the author mentioned which cloud desktops he means and how good people actually think they are. Don’t tell me it’s worse than I think without specifying what you think I think!

I know numerous gaming companies that swear by Parsec. Except the author doesn’t appear to be talking about Parsec-tier cloud desktops. But then again, it’s not clear what the author is talking about.

  • thesh4d0w 2 years ago

    Yep, we have remote developers using parsec with unreal engine, maya, 3ds, etc.

    Author seems to be stuck in the tech mindset of 5 years ago.

fideloper 2 years ago

I think I found a good remote dev environment recently - basically "just use Mutagen to sync files to a server close to you". That keeps the source of truth (code files) local but outsources the compute.

I started working at Fly.io ~4mo ago and quickly realized I could setup a nice remote dev environment since there are regions close to me (super low latency).

I set up a VM to sync files and forward ports over SSH. It turns off when I'm not using it (after a configured timeout, it sniffs for SSH connections and exits if there are none - which stops the VM), and uses Mutagen to sync files. The source of truth is my local files, so my local IDEs work great (they're working against the local file system).
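
The Mutagen half of such a setup is essentially a one-liner (endpoint names here are placeholders):

    mutagen sync create --name=dev ~/code/myapp dev@fly-vm:~/code/myapp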

I wrapped it up in a little tool I'm calling Vessel https://github.com/Vessel-App/vessel-cli, which talks to Fly's "Machines API"

robertlagrant 2 years ago

I know this sounds gross, but I wonder if Chromebooks could benefit from being able to trigger local browser actions in the remote browser. So you can click "open in new tab" in your remote and it opens in your local browser. Bonus points if the remote session is also in a tab, so it just switches you away and you can come back easily.

  • easton 2 years ago

    Well thought out Citrix and RDP setups have something like this, where certain apps are able to break through and use the local GPU (usually video conferencing apps, like Teams and WebEx).

overspeed 2 years ago

> Modern IDEs tend to support SSHing out to remote hosts to perform builds there, so as long as you're ok with source code being visible on laptops you can at least shift the "I need a workstation with a bunch of CPU" problem out to the cloud.

JetBrains has Gateway[1] and VSCode has remote Dev tools[2]

Gateway's performance is very dependent on the network connectivity. If you have bad ping, you're going to curse the world seeing the input delay.

VSCode seems to cache the files locally and update them separately, so even with bad internet you still get native input latency.

[1] https://www.jetbrains.com/remote-development/gateway

[2] https://code.visualstudio.com/docs/remote/ssh

symlinkk 2 years ago

VSCode Remote SSH’d into a cloud desktop is superior to local development. With the hardware being remote you can afford to get something that’s ultra fast and can run 24/7, and it still feels just as snappy and responsive as running it locally. I think this will be the standard for development within 3 years.

  • lioeters 2 years ago

    I agree - for a few years my primary display has been permanently full-screen VSCode with Remote SSH, either to a remote server or a local virtual machine.

    Occasionally I open another window to edit files locally, but that's become rarer. I love having everything in a container that I can always destroy and recreate, or spin up as many as I want for isolated environments. It taught me to write automation scripts and configs for reproducible setups. It's perfect for education and onboarding new team members too.

hackrbrazil 2 years ago

Remote code development tools like Gitpod and Codespaces may be a good answer to the issues from the post. They sit in the middle between purely using SSH and full remote desktop experience, so feel like using your local machine while giving you access to computing power from the cloud.

open-source-ux 2 years ago

Privacy is also terrible in cloud desktops (and cloud apps) but many (most?) developers do not see it as a concern. It's too late to push for privacy in cloud software - especially when it's developers who are the strongest advocates for user-tracking in cloud desktops.

  • raxxorraxor 2 years ago

    Really depends on the environment. Almost all developers I know are pretty big on privacy and critical of the negative implications of users sharing too much. If you work at Google or Facebook they will plant different ideas into your brain.

    IT security might be slow, but they also found out that it isn't the users that pay them, it is management. Some concerns have validity, but there is also a lot of crap sold as security.

    • open-source-ux 2 years ago

      > Almost all developers I know are pretty big on privacy and critical of the negative implications of users sharing too much. If you work at Google or Facebook they will plant different ideas into your brain.

      I wish I could share a similar sentiment. But my impression is that privacy is low down the list of priorities for many developers. Even this thread has 300+ comments and yet not one single comment raising the privacy implications of a cloud OS/desktop.

rc_mob 2 years ago

My company did this to us. I'm thinking of quitting.

  • politelemon 2 years ago

    What do you mean by 'this' - there are several examples given in the article. Can you share some of the problems you're facing too?

anxiously 2 years ago

I don’t use a graphical cloud setup, but I do use a vps for all of my development.

It is nice having a single cloud based machine that is accessible via ssh on any of my physical devices.

I have a dev environment closer to production, ssl and publicly accessible urls for testing services and sharing to compare designs and UI changes, etc.

Fantastic setup for anyone that likes a vim+tmux workflow. Only a single environment to keep up to date and configured. Daily snapshots and backups.

Keeps the cost of other hardware down as well... I can work effectively on cheap hardware, which certainly offsets the server costs. I did a cost rundown before, and it came out to roughly 15 years of my VPS plus the cheap hardware to equal the price of a single entry-level MacBook Pro.

d--b 2 years ago

Been working remotely for 6 years. My vm is in New York and I live in rural France. I connect over a 4g connection. I have a 24" monitor.

The stuff is seamless. I mean it. I hate lagging, I hate stuff that doesn't work. But this does work. Really well.

  • 300bps 2 years ago

    Similar setup for me and I’ve been working in a virtual desktop environment at two companies for the past ten years. Would not trade the flexibility it gives for anything and it Just Works for me.

    Most of the cons listed in the article either don’t apply (VR) or just work fine for me (video).

  • leoh 2 years ago

    Who do you purchase your VM from? What are the specs/cost?

Maxburn 2 years ago

Multiple times I've caught coworkers starting a GoToMeeting/Zoom meeting in their VDI, and they can't figure out why they can't use their LOCAL USB conference microphone/speaker array.

Yet another use case where VDI falls down.

  • easton 2 years ago

    Most of the video conferencing apps of choice have a version where they run something on the thin client to reduce latency (going from the thin client to Zoom or Teams or WebEx directly instead of through the VDI). They have support for the mics and stuff too.

    https://support.zoom.us/hc/en-us/sections/4404192199053-Virt...

    • Maxburn 2 years ago

      Interesting, I wasn't aware of that.

      I just copy the link and bring it to the computer I'm using to open directly there.

rr888 2 years ago

I've had this in the last few jobs. One reason for it is that to move desks in NYC requires a union employee to move the computer which ends up costing a few thousand dollars. With terminals and a cloud PC you avoid this.

  • rcarmo 2 years ago

    As a European, I find this befuddling. A union employee of whom?

    • rr888 2 years ago

      Doesn't matter who's paying them as long as they're union. There are certain things you just can't do yourself.

      • aerostable_slug 2 years ago

        Back when I worked for $MEGACORP, we wanted to mount some monitors to the wall in the infosec space for Big Cool Dashboards. The quoted price from the union contractors that had a lock on all HQ building modifications was frightfully high (five figures to just mount some TVs). They spec'd asbestos inspectors for a space that had been certified asbestos-free, had safety observers that had to stand around and be sure the 'job site' was safe, etc.

        We went and just bought rolling stands instead. We thought about sneaking the Best Buy techs in but decided we liked our jobs too much to do that. When they found out we just bought stands instead of having the screens mounted they tried to throw a hissy fit but were unsuccessful (we didn't actually break any rules).

        This same job required me to get an exemption from two unions because the devices I worked with communicated with RF energy (radio union) and I needed a screwdriver to get inside of them (IBEW). Sigh. It didn't matter that the wire techs down the hall from me didn't know JTAG from JPEG — I couldn't risk getting a grievance because someone thought I, a mere unrepresented member of "management" (as everyone who wasn't union-represented was called), was takin' their jerbs.

woeh 2 years ago

I've had good results with offloading work to a cloud based server where I ran my docker containers during development. Just CLI though, I left the graphical part on the client side. As mentioned by others, VSCode with remote SSH was a blessing for such a scenario.

There are benefits; I could scale up my workstation even for an hour or so, with more memory or a fancier CPU. And it was easier to share my work with other (remote) colleagues; because they were in another timezone, I could leave the server up for them when needed, shut my laptop down for the day, and see their feedback the next day.

jcalabro 2 years ago

For the past 5 years I've used a macbook to ssh in to a Linux VM as my development environment. It was great for the work that I was doing (distributed web systems).

Now that I've changed jobs and I'm developing a desktop app again, I'm back on a physical Linux box under my desk, and I really miss the old experience. It was great to never care about a Mac change tanking your productivity (i.e. I was totally unperturbed by the M1 switch), and it was also great not to have to run a Linux desktop environment, which it turns out is still a big pain.

  • ricardou 2 years ago

    I have had the same experience, although I've only been out of school for two years. So far my daily work stream has me doing ssh to a Linux box and then doing my work there. All the CI/CD, version control, building code, and storing the code is done remotely.

    So far it's been great (again, I've never had an alternative) and the power of virtualization has really come in handy. A couple of times in the past I accidentally borked my Linux box and started to figure out how to fix it, but then I had my d'oh moment: just scrap this one and get another!

    There's no "it works on my machine" and most importantly, I can work from anywhere, with a stable internet connection (even using a phone to do USB tethering worked fine), from a Mac, a Chromebook, windows, whatever computer.

    One caveat is that I program backends in C++ so a lot of the issues mentioned in the article don't really apply to me. Regardless, it's been great!

pmontra 2 years ago

I thought about buying a desktop again after almost 30 years, bury it in some room at my home and use my laptop as remote desktop. I work in different rooms as seasons go by (so no air conditioning), sometimes even morning vs afternoon. Not a common use case I guess but that's exactly the point of the article.

I think I'll keep using my laptop as primary and only machine because many of the scenarios in the article also apply to me and what if I have to visit a customer? It never happened again since the pandemic but it could.

pavon 2 years ago

Looks like I'm in the minority here. I use a VMWare Horizon VM as my primary desktop environment, and I love it! Working from home, VMWare Horizon has much better performance than using a VPN for many things including X11 forwarding to Linux computers on-site, RDP to Windows computers on-site, and accessing CIFS/SMB file shares. And when I do go on-site, I can connect to it from any computer, either using a kiosk, or a colleague's computer if we are collaborating, or any conference room computer.

PaulKeeble 2 years ago

It's the latency that really gets me. Having all my text be delayed, and having moving windows around be painfully slow, is annoying. Windows does a lot of tricks to hide desktop latency, and that is with the CPUs right there and GPUs accelerating it. I've also run into issues from the lack of dedicated I/O, which resulted in bad performance, combined with subpar CPU performance, because cloud CPUs tend to be low frequency and a lot of desktop software still depends on single-thread performance.

artisanspam 2 years ago

This is the terrible norm in the semiconductor industry. VNCs everywhere. Almost all EDA GUIs are only designed for Red Hat or CentOS so IT makes everyone connect to a datacenter and start an X server. Having interviewed at/worked at these companies, I know that Intel, NVIDIA, and Apple all do this.

It sucks. Your productivity plummets because each keystroke lags and it makes you lose your train of thought. When there's an outage, no one can do any work at all.

jonnycomputer 2 years ago

Perhaps others have had better experience, but whenever I've used a remote GUI VM (e.g. through the Guacamole web browser interface), the latency is painfully noticeable when typing. So much so that I've usually dropped down to a local terminal connecting to the vm by ssh and doing everything in Emacs (or Vim, if that's how you're bent), or having JetBrains edit stuff remotely. But then you're back to needing a decent workstation.

  • rcarmo 2 years ago

    Guacamole is excellent for managing servers, but hardly a reference implementation of remote access. The display protocol is getting shoehorned through a websocket and rendered into a browser canvas, whereas if you use Remmina or another native RDP client things have much less overhead.

lupire 2 years ago

Not a bad article, but the intro deeply confuses the issue. If you want fast builds for your org, use a build server farm that is much faster than the fastest workstation, and do small incremental builds locally or on your cloud desktop. A local UX machine (15" Mac not built in the Dark Ages of 2016-2020, + desk monitor, good for 5+ years) with a FUSE-mounted remote storage and a build farm is a great combo.
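
As one concrete option for that FUSE mount, sshfs does it in two commands (host and paths are placeholders):

    mkdir -p ~/src
    sshfs dev@build-farm:/srv/src ~/src -o reconnect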

  • anxiously 2 years ago

    I used to think so, but I have yet to own a Mac that survived more than 3 years. However, that may have been partially due to the "Dark Ages", but more like 2014-2021.

    I'm happier with something Linux friendly (thinkpad x220, pinebook pro, etc) and a remote system I use over ssh (sshfs, ssh+vim+tmux, etc). All at the great cost of ~$220 + $5/month?

  • GoOnThenDoTell 2 years ago

    What are options for the fuse mounted remote storage?

yrgulation 2 years ago

Not sure who thinks these are good. Perhaps for basic programming or in slow moving orgs but outside these two use cases cloud desktops are horrible.

hoistbypetard 2 years ago

Reading his description of the issues, it sure sounds like they're as good as I'd think. I'm more optimistic about the gitpod-style remote development environments, honestly. But that post describes exactly what I expected from a "cloud desktop." I wouldn't expect someone to get much done if I inflicted that kind of work environment on them.

CarbonCycles 2 years ago

Not a fan of cloud desktops or any "virtualized" desktop. Experience is typically subpar and the worst part is that it requires a stable internet connection. What's the point of that when many of us are working remote and mobile?

leoh 2 years ago

I’ve been developing for the last few years on a cloud VM and love it. Latency has never been a serious issue for me.

It lets me use Linux as my daily driver; I have a highly capable machine with large L2/L3 caches, a lot of RAM, and many CPUs, and it's totally portable.

Not to mention that the internet speeds on the cloud VM are incredible: easily 1 Gbps+ wherever I am in the world. This is a selling point folks forget.

The combination of speed (hardware and network) and always being on (can leave compilation tasks etc. running) is very nice.

I’ve used Citrix and the modern Chrome Remote Desktop experience is generally an order of magnitude better.

Working on a bus with wifi, typically fine. Even working from Asia with the VM in California, great.

The only issue I have with cloud is that for personal use it's expensive. Google Compute VMs cost a lot more per year than an equivalent workstation with similar hardware, AFAICT.

I'm curious how folks work around that.
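
(The usual workaround, as far as I can tell, is to stop the VM whenever you're not using it; you keep paying for the persistent disk, but not the cores. Back-of-envelope, with purely illustrative prices:)

    # illustrative numbers only -- check current pricing
    # a 16 vCPU / 64 GB VM at ~$0.80/hr:
    #   running 24/7:                0.80 * 24 * 365 ≈ $7,000/yr
    #   work hours only (8h x 260d): 0.80 * 8 * 260  ≈ $1,700/yr
    # vs. a ~$2,500 workstation amortized over 3 years ≈ $830/yr
    gcloud compute instances stop my-dev-vm    # hypothetical instance name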

tinodb 2 years ago

> But even a fast laptop is slower than a decent workstation, and if your developers want a local build environment they're probably going to want a decent workstation. They'll want a fast (and expensive) laptop as well, though, because they're not going to carry their workstation home with them and obviously you expect them to be able to work from home.

What kind of builds require more than one of the new MacBook Pros?

And what about using cloud development environments instead of a fully remote desktop? I haven't properly tested GitHub Codespaces, but it seems to me that a lightweight laptop (i.e. the cheapest MacBook Air, if Apple) with MDM plus Codespaces could work really well.

Sure, not everyone is using these tools, but to state that devs in general need both a beefy workstation and a laptop sounds a bit outlandish to me.
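
For what it's worth, spinning up a Codespace is about this much work with the GitHub CLI (repo and machine type below are made up):

    gh codespace create --repo myorg/myrepo --machine basicLinux32gb
    gh codespace ssh    # or open it in VS Code / the browser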

  • thrwyoilarticle 2 years ago

    I work on a C++ project where a(n x86) Pro takes over an hour to build the universe. And that is less work than a browser or compiler.

Spooky23 2 years ago

It’s hard to take this seriously when it doesn’t explore the why. Skimping on MacBooks is a pretty niche use case for cloud development.

IMO security drives this decision, and being able to work remotely is the benefit.

BeFlatXIII 2 years ago

> aren’t as good as you think

Who thought they were any good in the first place?

kanzure 2 years ago

Is there a cloud desktop product where I can select a development environment and instantly RDP into it pre-configured and ready to compile code with libraries installed etc?

spookierookie 2 years ago

IMHO remote desktops (cloud or DaaS) are a terrible idea with even more terrible executions. I've never tried one that could measure up to a local environment.

jjtheblunt 2 years ago

> aren't as good as you'd think

I always find titles like this clickbaity, because the author has no idea what anyone in the audience would think.

NoGravitas 2 years ago

I wouldn't think cloud desktops would be good at all, so if they're not as good as I'd think... they must be pretty darn bad.

ryukafalz 2 years ago

This is where I wish Plan 9 had caught on. It lets you run remote graphical apps more seamlessly than any remote desktop I know of today.

  • jjtheblunt 2 years ago

    better than X11 did? better than Display Postscript did?

    (genuinely wondering, since I've not had a chance to play with Plan 9, kinda randomly)

    • ryukafalz 2 years ago

      X forwarding involves granting the remote system access to your X socket, which is very powerful. I don’t know for sure that Plan 9’s model avoids this (I’ve only had small chances here and there to play with Plan 9), but given its heavy use of namespacing I suspect it’s at least possible.
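
      (For what it's worth, OpenSSH does nominally distinguish the two cases, though many distros ship with -X configured to behave like -Y:)

          ssh -X remotehost   # "untrusted": X SECURITY extension limits remote clients
          ssh -Y remotehost   # "trusted": remote apps get full access to your display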

      Performance-wise X forwarding was always pretty slow for me in a way that Plan 9 seems to avoid, though I’ve gathered it used to be more efficient prior to modern graphical toolkits that want to draw a bunch of bitmaps to the screen. It’s possible they were more evenly matched in the past.

      Not sure about Display Postscript, never had a chance to try that one.

      • jjtheblunt 2 years ago

        Display Postscript was really great, and X11R5 at least was super snappy; it slightly predates Motif and the other heavier window managers, I think. But I also used super fast window managers and usually wasn't pushing bitmaps all over.

nickdothutton 2 years ago

Citrix HDX 3D-Pro on a GPU enabled VM works pretty well in my experience. Even driving multiple screens on the end user system.

olliecornelia 2 years ago

I thought they'd be pretty fuckin bad, you're telling me they're worse than that?

vegai_ 2 years ago

Whoa, I thought they are absolutely pointless. Are they even worse?

  • mrweasel 2 years ago

    I don't know if I think they're pointless. The idea is reasonable for some use cases. Personally I won't use a cloud desktop, or even a remote desktop on a local LAN, for any real work. Just the ever-so-small lag of remote desktops on a local network drives me absolutely nuts.

    The idea is pretty good for light office work, but as it stands now, I wouldn't subject anyone to using cloud desktops for any extended period of time.

    But the title is pretty confusing when you already think cloud desktops are pretty terrible and now someone claims they are even worse.

hansel_der 2 years ago

i feel like not much has improved: working with teamviewer or anydesk is barely better than x11 forwarding (plain or with compression) or vnc derivatives

reminds me of an old document by stuart cheshire (presumably "It's the Latency, Stupid")

taylorius 2 years ago

I find it hard to imagine they're less good than I think.

NexRebular 2 years ago

Makes me miss SunRay... it just worked

EugeneOZ 2 years ago

Different workloads require different tools.

I work with Rust and TypeScript projects; an MBP M1 Pro with 32 GB RAM is 110% enough.

  • izacus 2 years ago

    The companies deploying cloud workstations usually want to save money and not pay for your 32GB MBP.

wwarner 2 years ago

tilt.dev looks very promising and addresses some of these issues

kkfx 2 years ago

A few personal notes:

- the workstation model means working at a good (physical) desk setup: a large main monitor, possibly additional monitors, a good keyboard, perhaps a thumb trackball instead of a mouse, etc. Sure, potentially the same can happen with a docked laptop, but...

- ...the laptop model means being able to move. If we WFH there aren't many reasons to move, except when moving means relocating elsewhere. In practice MOST laptop users do not use their computers on the go but as desktop replacements in suboptimal, improvised setups, while those who really need a good laptop can hardly find one.

The real issue is a lack of knowledge, in most orgs, about how remote work should be done. For more than a decade we have seen big PR campaigns about nomadic workers on unstable, limited mobile networks (PCMCIA/3G modem cards, then USB HSPA sticks, portable hotspots, mobile tethering, etc.), working in a bar (potentially hostile and distracting surroundings) or on a beach (a potentially hostile climate for mobile devices). That model, which pushed from "big notebook" to netbook to ultrabook, fails miserably because it can't really work. We can work in such a setup for a limited time on limited tasks, but nothing more.

Now many are starting to admit that the solution is going back to the classic desk, BUT this means every home needs a room with a proper setup, which is an effort for both the worker and the company. Something most reject.

Substantially: it's about time to say things clearly. The modern web is CRAP, made to sell services instead of empowering users through IT, and the classic commercial desktop model is also CRAP. We badly need real desktops with document-based UIs, working locally and syncing just the data that needs to be synced. The same way we operate as humans: anyone doing a given job with a significant degree of independence within a single company.

To do that remotely we need a proper room per worker, well equipped, rented to the employer at a fair rate, with clear contracts establishing that work paradigm.

Trying to keep up the crappy surveillance-capitalism business, which translates to "rent someone else's services, own nothing" along the lines of the famous WEF/2030 video https://youtu.be/Hx3DhoLFO4s, is a very expensive absurdity. Trying to keep up hybrid setups just to avoid real capex is another absurdity.

Those who are eligible to work from home and want that paradigm should offer a proper room for it, and companies should be clear: "you are hired for remote work AND REMOTE ONLY; travel to meet in person must not happen more than once in a while", where the timeframe varies with the geographical distance between company and workers.

Let's do that and we all benefit, companies and workers together in a win-win move whose only losers will be GAFAM and friends (from Citrix to Bomgar). Avoid it and we will keep an inconsistent, liquid situation best described by Sgt. Hartman's famous Full Metal Jacket line about the most common amphibious thing so called...

aaron695 2 years ago

Not sure if this is about the cloud rather than thin clients, but in a school setting, licensing killed any thin client attempts.

It was just too hard.

I never got to the stage they hint at: if a tiny number of things won't work, does that mean the whole idea fails?

If you only have Word/Excel/internet etc. in one lab, inevitably someone will ask for X, Y, Z. Is the money saved on computers and maintenance, plus the benefit of instant installs/upgrades, worth more or less than the property and teacher/student time costs of that lab running at 90% usability?

But licensing stopped the experiment.

sbf501 2 years ago

Who uses a physical workstation anymore? (Besides artists.)

OP is suggesting a complete remote desktop for office applications, like video conferencing. Ironically, for all the crap X takes, it could actually pull this off. More so for Wayland. I'm surprised there isn't a graphics client/server model out there as good as X after ~40 years. But I think the problem is too much layering: putting a VM in the cloud as an office desktop demands far too much bandwidth, and adds too much latency, when everything is squeezed through a remote-desktop protocol instead of a client/server graphics mode. The tools are there; they just aren't being used because they are missing a security layer.

I haven't used a physical workstation at my desk since 1999, and I was a designer/architect at Intel for decades. Everything was done via VNC. Back then it was called "distributed computing" with AFS, so it was a "proto-cloud". And before that I used a Sun workstation to telnet into beefier computers. This was AIX/SunOS/Linux based.

Granted, I was not videoconferencing, but there's no reason why the desktop needs to be rendered in the VM (including the video stream!!!), then encoded, then decoded, then rendered again. It's just dumb.

  • leoh 2 years ago

    X as a wire protocol is truly awful compared to Chrome Remote Desktop. I've been eager to use it and have played with all manner of settings, but it's very slow pretty much no matter what. Apparently it sends many more frame updates, i.e. it is very network- and compute-heavy compared to other modern protocols that just send rolling images.
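
    The usual mitigations only help so much in my experience:

        ssh -X -C user@host xterm   # -C compresses the forwarded X traffic, but the round trips remain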

  • politelemon 2 years ago

    A lot of people use workstations. As described in the article, they are more powerful, and many people need that power. Are you perhaps asking the question rhetorically, based on a specific, limited set of anecdotal data?

    • sbf501 2 years ago

      Yep. All I have is anecdotal data.

      I believe desktops are on their way out. The trend has been under way for ~30 years. Even Android is working on a virtualized phone OS.

      The only applications I've encountered that need a workstation are: Audio Engineering, Video Editing, and 3D mock-up (prior to sending to render farms). All arts.

      What are some other examples? Everything I know about science and engineering (academic and commercial) uses a cloud/farm/distributed.

      • mattnewport 2 years ago

        Game and VR development generally needs powerful desktops, both for programmers and content creators. Our entire team of (fully remote) VR developers has high-end gaming desktops. If cloud workstations worked well they'd be interesting for us, but none of them really support VR development at the moment.