You still have to build it from source in VS 2017 or 2019, and there are a few rough edges (currently only middle click for copy/paste), but it's a great start. They should have some official binaries up pretty soon.
You can grab build artifacts from builds of the master branch off their CI. You need to enable developer mode to allow the loading of an unsigned binary.
As far as I know, it's not possible to install these binary artifacts as of now, as they don't have any certificates with the msix package. You'd just get an error `error 0x800B0100: The app package must be digitally signed for signature validation.`
Interesting, I've never had an issue with WSL1 startup time. Meanwhile WSL2 startup time is supposed to be strictly greater due to its use of a VM. Maybe you have a configuration issue?
If you're using Docker on Linux in WSL2, how well do volume mounts to the host OS work? I try not to use them in Docker Desktop since, for example, a sqlite3 file accessed from both the host and the guest will eventually get corrupted in my experience. MSSQL in docker also performs at about 1/100th the throughput if I put the data files in a volume.
I've hence been avoiding volumes for things other than static files or backup/restore directories.
> Also, bind mounts from [Docker on WSL2] will support inotify events and have nearly identical I/O performance as on a native Linux machine, which will solve one of the major Docker Desktop pain points with I/O-heavy toolchains.
Finally there's progress on this front. I've been spoiled by Magit and using git any other way takes a lot of effort, but it's a non-starter on Windows when it takes 2 minutes to do magit-commit because of the number of git subprocesses it runs. I almost thought it would be unfixable due to the different process model of Windows. The official docs[0] on this issue also depressed me for a time...
I'm still stuck with Cmder/ConEmu, but it does something worse for me than breaking copy/paste: Breaking arrow-keys in insert mode in VI. I'm only a very light VI user, but it's still very bothersome.
Thanks, but it's not quite my issue - arrows work everywhere else but not in vi. Also, cmd.exe doesn't have that same problem. So far no .vimrc setting has helped me. I've resigned myself to just waiting for Microsoft's new cmd.exe.
I switched from Mac to WSL last year and yes, in some ways it is smoother. Having for example apt-get on top of the NT kernel is better than Homebrew on Darwin. It's worse in other ways that are slowly getting fixed, such as the sad state of terminal emulators on Windows. At least one thing will probably always be better on Mac: Settings/Defaults system with Plist files vs. Windows settings and the Registry. But to me that doesn't justify the lack of good laptop hardware choices anymore - PC laptops are so much better nowadays.
> Having for example apt-get on top of the NT kernel is better than Homebrew on Darwin
Can you go into more detail about why that's the case — package selection, versioning policies, etc.? The main thing I've typically found is testing version-matched deployments and Docker has made me care about that a lot less.
Sure, we're heavily docker based too, but for development toolchain it's really valuable to have such a well supported package manager as apt. It's not just about package availability but community support - you have the same tooling locally that is widely used server side, so lots more documentation exists for it. Same goes for Linux userland vs. BSD.
It's not easy for most... In the past I'd run headless Linux in a VM and do a network mount to my host environment, so I could edit locally and run in the VM over SSH. It works, but setup is a pain.
I mean setting up file sharing and getting the mounts working properly inside/outside the container so you can use a GUI editor on the actual desktop while running programs in the VM... it's not something easily set up ahead of time.
I've found it to be the most consistent and performant. Still not the same as being in a native Linux environment, of course, but it's the closest I've been able to get in Windows.
My main issue with ConEmu was output performance. Paging through files in Vim over SSH was painfully slow. I recently switched to wsltty [0] and after changing some display preferences (Font: Consolas 9pt, Theme: dracula, Cursor: blinking block) I'm really satisfied with it as a terminal. Vim paging performance is greatly improved, and it's smaller and faster than Hyper.
Also, copy/paste works perfectly and I found that fewer of the default keybindings conflict with those in my shell, tmux and vim.
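For anyone curious, wsltty is mintty under the hood, so those preferences live in its plain-text config file (typically under %APPDATA%\wsltty). A sketch of roughly equivalent settings - the key names come from mintty's documented options, and the theme file name is just whatever you saved the theme as:

```
Font=Consolas
FontHeight=9
ThemeFile=dracula
CursorType=block
CursorBlinks=yes
```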
Starting a new terminal with WSL 1 takes so long? It takes less than 2 seconds on my machine. I guess if you have some file-intensive process, such as starting a ruby/python interpreter to run a script, that would slow things down due to the slow file access.
I use wsltty for the terminal, it's pretty minimal, but better than the windows terminal.
The middle of the copy pasted string is sometimes missing.
So if I copy `echo hi && cat /etc/hostname` it's gonna paste `echo hitc/hostname`
It has 4 different paste methods in the settings, I encountered the bug on all at work. I've used it without encountering the bug before though, but I haven't figured out what causes it.
I use Ubuntu inside VirtualBox at work, but it has caused almost everyone on my team issues at some point - the whole thing freezing and needing to be restarted, very laggy GUI (scrolling in VS Code gets ugly), network connectivity issues when switching WiFi hotspots (it also doesn't work with Starbucks), running out of hard disk space if you didn't allocate enough to begin with, to name a few. Sure, there are always workarounds, but the time and productivity it has cost my team would have easily justified getting a Mac, which IT thinks is too expensive :shrug:.
First, VirtualBox's performance has never been great, you'd want to use something like VMware.
Second, why use a cross platform app inside VM? Using the VM only for command line interface through SSH will get rid of any lag. You can share the host OS file system inside the VM and use GUI app on the host OS against native file system and you won't have to worry about filling up VM disk.
Connecting some $1k MacBook Air to an external monitor + keyboard would get you a decent desktop experience, and you can even take it out with you. Not sure how that is expensive.
VMWare or Hyper-V should perform better... though, why not go headless and just mount your linux filesystem in Windows, or vice-versa over a local network share, and edit in windows, run in an ssh prompt? That's how I've done it in the past... lately I've just gotten used to the windows-isms in git bash and Docker Desktop... looking forward to the WSL2 based release though.
It's not. It's possible Microsoft is making an exception for WSL2, but I highly doubt the user will be able to run arbitrary VMs on Hyper-V without a Pro edition of Windows.
According to others in this thread, if Hyper-V is enabled then the Windows host actually partially runs on Hyper-V (which is part of the reason you can't use other hypervisors with Hyper-V turned on).
The running theory is that they want Hyper-V enabled by default on all installs.
Competition at the OS level is good. Windows getting better will only make others try harder.
As a former Windows user, then macOS convert, I look at what's happening on Windows with envy. macOS is getting prettier and perhaps more consistent, but buggier. It's very frustrating. I've had so many problems with security updates, with both the latest and other supported versions. Apple support is terrible these days. I filed a bug report with them and weeks later they give me absolutely lame replies.
The only thing that gives me pause, still, about Windows 10: it updates when it wants to and that is difficult to turn off (you can set your network to "metered" and I hear that will do it). Also the telemetry. I feel like it spies on its users more than Apple's product does.
The updates, btw, got a lot better even for Windows 10 Home users. It can be argued whether letting users turn off updates is an overall good idea or not, but at least you have more flexibility in postponing them with 1903.
I kind of feel the same way about using macOS. At this point in time if your hardware works with Linux you're missing out on a lot of things by using macOS. Windows/WSL2 removes the hardware compatibility part leaving macOS to catch up.
There are a lot of small things that, to me personally, add up to a much bigger annoyance when using macOS. The lack of an official hypervisor means I have to use 3rd party stuff - the few times I used it I had kernel panics on wakeup. Old versions of all the command line utilities - you need brew to install newer ones, which has broken on me at least once. Updates are slow - both in terms of availability and installation times. If I need new stuff, I am forced to upgrade to an entirely new major release.
Compare that to a rolling release distro like Arch, a fast-moving one like Fedora, or slower-moving Ubuntu LTS that gets HWE stack upgrades, and life is really easy. You get a built-in hypervisor (KVM) that can be used by the Android emulator, your libvirt VMs and Docker. Want to run dtrace/bpftool - no need to compromise security. Need the latest GCC - easy enough to get - it goes on and on. Not to mention the package managers are pretty good on all Linux distros nowadays.
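And checking whether that built-in hypervisor is usable is a one-liner on any Linux box - once the kvm module is loaded, the kernel exposes /dev/kvm, which is exactly the device libvirt, Docker's VMs and the Android emulator open:

```shell
#!/bin/sh
# Report whether KVM acceleration is available on this host.
# The kvm kernel module exposes /dev/kvm when hardware virt is usable.
if [ -e /dev/kvm ]; then
    echo "kvm: available"
else
    echo "kvm: not available"
fi
```

(If it reports "not available", VT-x/AMD-V is usually disabled in firmware or the kvm module isn't loaded.)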
With Windows the downside was that WSL 1 was slow; macOS was faster than that, although still slower than Linux for FS operations etc. But now, with WSL 2 pretty much matching native Linux performance and allowing you to use Docker natively, it suddenly becomes a much better option - you can buy cheaper hardware that does what you want, without relying on Apple to give you the right keyboard, # of ports etc.
Yea I'm really not looking forward to switching to macOS from Linux at a new job I'm starting. It's all these minor little things that add up to a decent-sized headache.
I am optimistic for WSL2 because, while Linux was an option I was given, it was very much a "do so at your own peril, we have a lot of homegrown tools that are not tested outside Windows and macOS". In a year or so maybe I'll switch to Windows w/ WSL2, or just back to Linux, but there was no good reason to rock that boat at this time.
This is great news. No more CIFS bind mounts required for volume sharing with Windows drives. This is a particular issue on corporate/enterprise Windows boxes.
It was also pretty unstable. Low memory conditions inside the Docker VM => SMB times out => I/O terminated => file corruption. At least it did support mmap, which was an issue with the original virtualbox-backed Docker for Mac...
From what I understand, this will only affect Linux containers running on Windows. Is that correct? What about Windows containers? I know they're not nearly as popular, but we did evaluate them at one time for running a legacy Windows application. I doubt we're alone in that use case.
So... will WSL2 need to be installed as per WSL? because WSL will not install on my work machine as the machine is locked down by the firm.. So will I lose local Docker?
I'm very skeptical of this new technology. I used VirtualBox for years and I will continue to do so. I just don't see any reason to switch. Mounts? VirtualBox can do that. Fast boot? Well, VirtualBox boots in seconds; that's more than enough for me. Dynamic memory allocation? VirtualBox has memory ballooning; if I ever run into memory limits, I can always use that. And zero problems with all those Hyper-V conflicts, like Android emulators, snapshot support, VM import/export, actually any OS rather than just Linux, etc.
I've used VMware since 2007, and Hyper-V and VirtualBox for more than 5 years, and I don't have such strong opinions; I use the one that is available and does the job. Because I had fewer problems with VMware and Hyper-V, I almost gave up on VirtualBox. YMMV.
Because they want a laptop with decent power management that doesn't freak out when connecting or disconnecting additional displays, but they don't necessarily want a Dell XPS or a ThinkPad. They might also appreciate the ability to quickly nuke their Linux environment and create a new one if they really mess up their configs somehow.
None of the issues you speak of are relevant to running Linux on a laptop instead of Windows. Linux has decent PM and I haven't had glitchy issues with displays in years.
The idea of nuking the Linux environment gets easier when you run one natively, as you start to understand the system better.
I understand the idea where everyone wants to have the same ecosystem to run things on (Docker), but the best part of Linux was the fact it wasn't the same as Windows.
This convergence of the operating systems is concerning.
I dual boot Linux and Windows 10 on my personal Lenovo X1, and my Linux setup definitely has its moments with multiple displays... It mostly works, until it doesn't, and then it's a headache.
Power management seems OK, but I dunno how that relates to the fan controller, because in Linux my laptop sounds like a jet engine 90% of the time while in Windows it rarely spins up.
Probably could use some configuration on fan control or CPU throttling :)
I have more issues with Windows than I do with Linux, but I have been using Linux on the desktop for ~15 years or more now (would say 19, but there was a BSD period in there).
For me, personally, I just play way too many games for running Linux as a daily driver to be a viable option. I've tried multiple times to move (Manjaro with Lutris was the closest I've gotten, but to be fair I haven't tried Steam Proton yet), but it's never stuck. Yea, I could dual boot and just keep Windows around for gaming, but I'm lazy and flip-flop between coding and gaming too often. For my uses, I've been able to use Windows as a decent dev environment with no major issues, but WSL2 would make things even better.
Note: this is all for personal projects on my home desktop, my work laptop is Windows and I have no problems with it for my current position.
Only reading about video driver issues and sleep/hibernation configuration puts me off Linux desktop. Windows is a great and modern user-friendly OS. Office 365 is awesome too. WSL is the latest missing piece.
Hyper-V machines do not have access to the GPU; the previous implementation is legacy and cannot be selected anymore. No one knows when the new implementation will go live.
Not true. Hyper-V supports Discrete Device Assignment [1] and therefore GPUs, USB hosts, etc.
Where you'll run into issues is on the NVIDIA side. They deliberately cripple their consumer drivers to prevent use in virtual environments to extract licensing dollars.
(Haven't tried AMD but last I remember, their cards don't support being pushed around by the IOMMU. Could be wrong.)
So it sounds like maybe-yes? My use case is machine learning. I have an AMD GPU and tensorflow is supported through ROCm, but ROCm only runs under ubuntu. Currently I dual boot Windows/Ubuntu but would love to be able to stay in Windows.
Direct GPU access requires way more than just a virtual machine.
You would need PCIe passthrough, so the VM could control the GPU. For that, you would need either a separate GPU, or a GPU that supports SR-IOV, and a board that supports IOMMU.
ROCm specifically is not only userland but also a kernel driver (amdkfd/amdgpu). To use it, you are going to be dual booting into native Linux for a while.
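A quick way to sanity-check the IOMMU requirement on a Linux host is to see whether the kernel exposes any IOMMU groups at all - zero groups means the IOMMU is off (or unsupported), so passthrough is a non-starter:

```shell
#!/bin/sh
# Count the IOMMU groups the kernel exposes under sysfs.
# PCIe passthrough needs the IOMMU enabled (and ideally the GPU
# isolated in its own group).
groups=$(ls -d /sys/kernel/iommu_groups/* 2>/dev/null | wc -l)
echo "iommu groups: $groups"
```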
This is a big step toward the full merger of Windows and Linux on the desktop, which is the most logical way forward for both platforms. Kudos to Docker for taking a primary role and following Microsoft in this direction.
Can you explain how this is the most logical way forward for Linux?
For me, it feels like it could be the opposite, with Windows potentially impacting how Linux moves forward by having more devs using WSL instead of actual Linux, and not actually optimizing for native Linux support.
For example, Valve's Proton vs. native Linux games: if I was a game dev, I wouldn't bother porting any Windows games to Linux, I'd just rely on Proton instead.
>For an example, Valve's Proton vs. native linux games, if I was a game dev, I wouldn't bother porting any Windows games to Linux, just rely on Proton instead
I see what you mean, but on the other hand, most game development studios already didn't bother to port anyway.
`ls` and many other traditional unix command names are aliased to PowerShell equivalents that don't take the same arguments and often behave very very differently. In `ls`'s case, it's `Get-ChildItem`. It is my opinion that this was one of the worst decisions made by PowerShell and everyone would be better off if they'd never done that.
So, I think the main sticking point here is the lock-in of Hyper-V. By making a new popular feature completely dependent on a technology that explicitly disables the use of competitive hypervisors, they're giving with one hand and taking with the other.
If I were on VMware's executive team, I'd be seriously thinking about filing an antitrust complaint, and the open source community should be thinking about whether submarining VirtualBox is worth what Microsoft is doing here.
Hyper-V is already required for many security features like memory isolation. Also, Windows has an open hypervisor API (iirc QEMU is working on support for it) to allow other apps to leverage Hyper-V. VirtualBox hasn't really been competitive with VMware or Hyper-V for a long time, and many of its key features are under the nonfree extension pack anyway. As a level 1 hypervisor--which by definition is a core part of the OS--Hyper-V will always have the advantage over third-party level 2s.
> iirc QEMU is working on support for it
It's great and seamless, just pass `-accel whpx` (instead of `-accel hax` or `-accel kvm`).
There are working binaries here: https://shadycode.com/qemu-binaries-for-windows-64-bit-with-...
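To make the flag concrete, here's a sketch that just prints the platform-appropriate QEMU command line (whpx only exists on Windows builds with the Windows Hypervisor Platform feature enabled; disk.img is a placeholder for your guest image):

```shell
#!/bin/sh
# Pick the accelerator QEMU expects on each platform:
#   kvm  = Linux, hvf = macOS Hypervisor.framework, whpx = Windows WHP.
case "$(uname -s)" in
    Linux)  accel=kvm ;;
    Darwin) accel=hvf ;;
    *)      accel=whpx ;;   # MSYS/MinGW on Windows report MINGW*/MSYS*
esac
# Print the resulting invocation rather than launching a VM here.
echo qemu-system-x86_64 -accel "$accel" -m 2048 -drive file=disk.img,format=raw
```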
The Level 1/2 distinction is almost purely marketing, especially with hardware level virtualization abilities like VT-x and VT-d. In no meaningful way does the fact that Hyper-V is somehow a "level 1" hypervisor make it better at being a virtual machine host.
There is the Windows Hypervisor Platform[1] component recently added in Windows, which at least allows other VMs to share the hypervisor using a common API. IIRC, VirtualBox 6 supports it.
To me, it seems like the use of Hyper-V was probably necessary to get the tight integration they needed to make the use of WSL 2 as seamless as it is today with the lightweight containers.
[1] https://docs.microsoft.com/en-us/virtualization/api/
Heads up: as of Windows 10 1903 (the latest update), VirtualBox, QEMU, and friends will fail when attempting to use WHP. It appears to be critically broken in these builds of Windows. (It's fixed upstream in 20H1 builds.)
I tried VirtualBox with Hyper-V and it's really slooow.
VBox also has some extra goodies like the built-in DHCP server, host-only networking and so on, which Hyper-V seems to be lacking.
Those features do exist, but you may need a certain level of Windows licensing, and get ready to crack open a PowerShell terminal to configure them.
Don't all hypervisors disable other hypervisors?
No. It's a fairly common use case to e.g. virtualize several ESXi hypervisors underneath ESXi itself, in order to test clusters or other setups. VMware calls the feature VHV, "virtual hardware-assisted virtualization". More generally, it's typically just "nested virtualization". It does look like it needs some CPU features, but those are now old enough to be essentially commodity.
(and yes, ESXi is a type 1 hypervisor--it's definitely something Microsoft could enable more broadly, instead of limiting the scope to Hyper-V nested setups)
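For reference, VHV is a per-VM toggle in VMware products; in a .vmx file it's the documented vhv.enable flag (the same thing the "Expose hardware assisted virtualization to the guest OS" checkbox sets):

```
vhv.enable = "TRUE"
```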
Is it possible to run other hypervisors under Hyper-V as well? What about next to ESXi (as opposed to underneath it)?
I'm learning a lot from this conversation, thanks.
You can run Hyper-V inside Hyper-V on Intel CPUs, but not on AMD EPYC/Ryzen yet.
(Not sure about nesting other hypervisors.)
You are either in or under, not side by side. Yes, it is possible to run a hypervisor in a hypervisor; the limitation is usually in the parent hypervisor. In this particular case it would be useful to run side by side, but it is not possible, and it also negates the idea of multiple hypervisors: why not have a single one running all the VMs?
While running, yes. On Windows if you enable Hyper-V support, even when you're not using it at the moment, you can't use VirtualBox without reboots and shit.
This is because Hyper-V is a level 1 hypervisor (similar to Xen) while VirtualBox is a level 2 (similar to QEMU). Hyper-V is always running because the Windows "host" OS is actually running under its control as well.
> Hyper-V is always running because the Windows "host" OS is actually running under its control as well.
Wait, really? That doesn't seem right. Why would they have the host running under Hyper-V when Hyper-V is optional? This would mean the entire foundation shifts when you enable or disable Hyper-V.
Yeah, that's why the reboot is required; it's like changing kernels on Linux. I think they are moving to have it on by default, and eventually they may just make it permanently part of the OS install. Kind of like how they take control of the TPM on install even if you don't set up BitLocker, I imagine they will just have the core hypervisor permanently enabled, and then it's up to the user's settings what actually uses its features.
Hence the reboots.
This system has resulted in issues in the "host" and with an unresponsive control plane for me the few times I've tried to use it. Compared to no issues with type 2s.
To say nothing of the madness of using it on a laptop with several network adapters when the guest OS only wants to acknowledge one.
Hypervisor Framework and VirtualBox seem to cohabit just fine on macOS.
Yeah. VirtualBox installs its own kernel extension at this point in time.
But can you run VBox and (say) Docker at the same time?
Yep. I ran docker and Fusion at the same time and it works great.
https://www.virtualbox.org/ticket/14217 seems to indicate not so much unless VirtualBox has switched to using it as a backend.
That ticket is just a wishlist request — there's no indication of a problem there.
No.
I concurrently build vagrant boxes (using Packer) for VirtualBox, VMWare and Parallels on macOS.
All of those are level 2 hypervisors. Hyper-V is a level 1 hypervisor.
My point is that it's fundamental to the way the tech works.
Great, but that's a design decision Microsoft made when implementing a solution for desktops and workstations.
Sure, a level 1 HV makes a lot of sense for a VM host.
For this, not so much.
It is a VM host: when Hyper-V is active, Windows itself is just another VM. (That's why you have to reboot when you turn Hyper-V on or off). NT isn't actually privileged compared to the Linux kernel in WSL2!
Most likely in the next few years Hyper-V will become non-optional and Windows will always be virtualized when you run it. That's a big architectural shift that will enable lots of cool new scenarios, but it requires a level 1 hypervisor.
One interesting rumor that has been around for years is that Xbox games could someday start requiring Hyper-V on Windows. On an Xbox One, Hyper-V is already in charge and games (mostly) run in extremely lightweight VMs. Supposedly there is interest in bringing those game VMs directly to Windows for power-user game performance.
When I said “vm host” I meant a server in a rack.
But sure, keep down voting because someone said your dumpster fire OS is crap.
This is our big problem. Our development "workstations" are virtualized in VMWare. That's incompatible with Hyper-V, so MS is locking us out of this technology.
If your dev workstations are virtualized already, why not ask your tooling team to provide virtualized Linux workstations? If you deploy to Linux this is something they should provide anyway, for dev/prod parity.
Modern VMware still doesn't support Hyper-V? That's unfortunate to hear. My old version doesn't, so I have to reboot anytime I want to get into my VMware VMs, yet I'd made the (apparently incorrect) assumption that this would be fixed in newer versions.
My understanding is that it isn't so much "VMware doesn't support Hyper-V" but that "Hyper-V doesn't permit other virtualization platforms to run alongside it" and that may be less of a technical limitation than one might assume.
Hyper-V works in nested virtualization, the limitation seems to be in VMware in this particular case.
I'm referring to an architecture with ESX server running the VMs. I'm not sure exactly which version. But we've tested it recently and found that using Linux Docker on Windows 10 still doesn't work.
It should work if you enable nested virtualization in ESX. They call it VHV (Virtual Hardware-Assisted Virtualization).
You can do what I do in WSL 1, which is to forward the Docker daemon port from the VM and point your local CLI at it over TCP.
I've been doing that for years now (I originally built my own Docker client binary for the Mac and pointed it to a Linux box, did the same when I ran Parallels on my Mac, and still do it occasionally with ARM boxes at home). People forget that Docker was designed as a client-server solution, and that the server bit works fine.
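That remote-daemon workflow is just an environment variable on the client side. A minimal sketch, assuming a hypothetical setup where the VM forwards the daemon on plaintext TCP port 2375 (adjust host and port to your own forwarding, and prefer TLS on 2376 for anything non-local):

```shell
# Point the local docker CLI at a daemon running elsewhere (a WSL VM,
# a Linux box on the LAN, etc.). 2375 is the conventional plaintext port.
export DOCKER_HOST=tcp://localhost:2375

# Every subsequent docker command now talks to the remote daemon, e.g.:
#   docker ps
#   docker run --rm hello-world
echo "$DOCKER_HOST"
```

The same thing can be done per-command with `docker -H tcp://localhost:2375 ps` if you don't want the variable set globally.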
You can ask your sysadmins to set up a VMware instance running boot2docker and link it directly to your Windows workstation.
>If I was on VM-Ware's executive team
You mean the product they've all but abandoned?
https://arstechnica.com/information-technology/2016/01/vmwar...
>submarining virtualbox
Oracle will manage that on their own regardless of what Microsoft does. They already started by making the extension pack free to download but licensed so they can catch unsuspecting users in one of their famous audits.
Aren't they replacing Hyper-V in the next version described in the post? I thought it now runs natively on top of WSL syscalls and the NT kernel.
WSL 2 is a full Linux kernel running in Hyper-V rather than an emulation layer on top of NT.
The article is about two things:
1. WSL 1 was not enough to run Docker on.
2. WSL 2 is what was needed; with it, Docker can replace the LinuxKit-based bits it currently ships and get other improvements along the way.
WSL1 is syscall emulation on the NT kernel. Docker does not use WSL1. WSL2 is a Linux VM on Hyper-V which will be used by Docker Desktop.
I'm forced to use Windows at work and am so looking forward to wsl2.
Starting a new terminal with WSL 1 takes so long, and I'm less than happy with all the terminal emulators.
ConEmu has broken copy/paste, Alacritty doesn't have proper tiling... heck, I've started using Hyper of all things.
I realize I'll probably never be happy on Windows when I can actually use i3wm at home, but if it could just suck less I'd be ecstatic.
The new Windows Terminal is looking very nice so far:
https://github.com/microsoft/terminal
You still have to build it from source in VS 2017 or 2019, and there are a few rough edges (currently only middle click for copy/paste), but it's a great start. They should have some official binaries up pretty soon.
You can grab build artifacts from builds of the master branch off their CI. You need to enable developer mode to allow the loading of an unsigned binary.
https://dev.azure.com/ms/Terminal/_build/results?buildId=203...
As far as I know, it's not possible to install these binary artifacts right now, as the msix package isn't signed with any certificate. You'd just get an error `error 0x800B0100: The app package must be digitally signed for signature validation.`
It's easy: unzip the appx and run Add-AppxPackage -Register AppxManifest.xml.
This way you register the app directly instead of installing the appx.
They just released an early preview build in the Windows Store:
https://www.microsoft.com/en-us/p/windows-terminal-preview/9...
Interesting, I've never had an issue with WSL1 startup time. Meanwhile WSL2 startup time is supposed to be strictly greater due to its use of a VM. Maybe you have a configuration issue?
I would hope it is not a new VM each time?
I would think not, but the point is that it has to be booted at some point (whereas that wasn't a concern with WSL1)
It boots pretty fast, just 2 seconds here.
I just got WSL2 from the Windows Insider updates and it is great so far. Things like git are way faster and I can run docker on Linux.
If you're using Docker on Linux in WSL 2, how well do volume mounts to the host OS work? I try not to use them in Docker Desktop because, in my experience, a sqlite3 file accessed from both the host and the guest will eventually get corrupted. MSSQL in Docker also performs at about 1/100th the throughput if I put the data files in a volume.
I've hence been avoiding volumes for anything other than static files or backup/restore directories.
From TFA:
> Also, bind mounts from [Docker on WSL2] will support inotify events and have nearly identical I/O performance as on a native Linux machine, which will solve one of the major Docker Desktop pain points with I/O-heavy toolchains.
Oh god yes, git status finally works in less than 10 seconds. That's actually the only thing I wanted from Microsoft.
Finally there's progress on this front. I've been spoiled by Magit and using git any other way takes a lot of effort, but it's a non-starter on Windows when it takes 2 minutes to do magit-commit because of the number of git subprocesses it runs. I almost thought it would be unfixable due to the different process model of Windows. The official docs[0] on this issue also depressed me for a time...
[0] https://magit.vc/manual/magit/Microsoft-Windows-Performance....
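For what it's worth, that Magit performance page suggests a few Git settings that help specifically on Windows. A sketch on a throwaway repo so nothing real gets touched; on an actual Windows machine you'd set these with --global:

```shell
# Create a scratch repo so we don't modify any real configuration.
repo=$(mktemp -d)
git init -q "$repo"

git -C "$repo" config core.preloadindex true  # read the index in parallel
git -C "$repo" config core.fscache true       # cache stat() results (Git for Windows only)
git -C "$repo" config gc.auto 256             # gc sooner, so fewer loose objects to stat

git -C "$repo" config core.preloadindex       # prints: true
```

These don't make git subprocess spawning itself faster (the fundamental Windows cost Magit hits), but they cut down the filesystem stat traffic that makes each invocation slow.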
I'm still stuck with Cmder/ConEmu, but it does something worse for me than breaking copy/paste: breaking arrow keys in insert mode in vi. I'm only a very light vi user, but it's still very bothersome.
This might help you:
https://github.com/cmderdev/cmder/issues/1832
Thanks, but it's not quite my issue: arrows work everywhere else, just not in vi. Also, cmd.exe doesn't have that same problem. So far no .vimrc setting has helped me. I've resigned myself to just waiting for Microsoft's new cmd.exe.
Are you opening vi with `vi` or `vim`? And does it behave the same whichever you use?
Both the same. Pressing the left arrow in insert mode shows "E388: Couldn't find definition", but only in Cmder. The same WSL with cmd.exe works fine in vi.
I am also pretty happy with WSL, but I'm even happier that I was able to get a mac at work lol.
Although I admit, sometimes the BSD toolchain poses some rather hilarious speedbumps. At that point WSL can be smoother? Weird.
I switched from Mac to WSL last year and yes, in some ways it is smoother. Having for example apt-get on top of the NT kernel is better than Homebrew on Darwin. It's worse in other ways that are slowly getting fixed, such as the sad state of terminal emulators on Windows. At least one thing will probably always be better on Mac: Settings/Defaults system with Plist files vs. Windows settings and the Registry. But to me that doesn't justify the lack of good laptop hardware choices anymore - PC laptops are so much better nowadays.
> Having for example apt-get on top of the NT kernel is better than Homebrew on Darwin
Can you go into more detail about why that's the case — package selection, versioning policies, etc.? The main thing I've typically found is testing version-matched deployments and Docker has made me care about that a lot less.
Sure, we're heavily docker based too, but for development toolchain it's really valuable to have such a well supported package manager as apt. It's not just about package availability but community support - you have the same tooling locally that is widely used server side, so lots more documentation exists for it. Same goes for Linux userland vs. BSD.
So, why don't people just install native Linux in VMware or something instead of keep fighting with the host OS?
It's not easy for most... In the past I'd run a headless Linux in a VM, set up a network mount to my host environment, edit locally, and run in the VM over SSH. It works, but the setup is a pain.
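At least the SSH half of that workflow is easy to make repeatable with a config entry. A sketch with a hypothetical host name and address (written to a temp file here; in practice it goes in ~/.ssh/config):

```shell
# Hypothetical entry for a headless dev VM.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
Host devvm
    HostName 192.168.56.10
    User dev
    ForwardAgent yes
EOF

# With this in ~/.ssh/config, plain "ssh devvm" drops you into the VM,
# and ForwardAgent lets git inside the VM use your local keys.
grep -q "HostName 192.168.56.10" "$cfg" && echo ok
```

The network-mount side is the part that stays fiddly; SSH config at least means nobody has to remember IPs and flags.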
Do you mean as a personal pain when you're not used to setting up a Linux environment?
If the IT department prepared a pre-configured image, all you need to do is boot it up.
I mean setting up file sharing and getting the mounts working properly inside/outside the container, so you can use a GUI editor on the actual desktop while running programs in the VM... it's not something easily set up ahead of time.
Just install coreutils, findutils, and gnu-sed from Homebrew. Your muscle memory will thank you.
Have you tried the mintty solution for WSL?
https://github.com/mintty/wsltty
I've found it to be the most consistent and performant. Still not the same as being in a native Linux environment, of course, but it's the closest I've been able to get in Windows.
My main issue with ConEmu was output performance. Paging through files in Vim over SSH was painfully slow. I recently switched the wsltty [0] and after changing some display prefences (Font: Consolas 9pt, Theme: dracula, Cursor: blinking block) I'm really satisfied with it as a terminal. Vim paging performance is greatly improved, and it's smaller and faster than Hyper.
Also, copy/paste works perfectly and I found that fewer of the default keybindings conflict with those in my shell, tmux and vim.
[0] https://github.com/mintty/wsltty
Try MobaXterm. It works great on its own, but will also hook into your WSL.
Starting a new terminal with WSL 1 takes so long? It takes less than 2 seconds on my machine. I guess if you have some file-intensive process, such as starting a ruby/python interpreter to run a script, that would slow things down due to the slow file access.
I use wsltty for the terminal, it's pretty minimal, but better than the windows terminal.
I'm curious, how is copy paste broken? I've been using Cmder for a while and haven't had any issues
The middle of the copy pasted string is sometimes missing. So if I copy `echo hi && cat /etc/hostname` it's gonna paste `echo hitc/hostname`
It has 4 different paste methods in the settings, and I encountered the bug on all of them at work. I've used it without hitting the bug before, though; I haven't figured out what triggers it.
Could you just install a full-fat Linux desktop VM on your Windows workstation and live in it?
I use Ubuntu inside VirtualBox at work, but it has caused almost everyone on my team issues at some point: the whole thing freezing and needing a restart, a very laggy GUI (scrolling in VS Code gets ugly), network connectivity issues when switching WiFi hotspots (it also doesn't work with Starbucks), running out of hard disk space if you didn't allocate enough to begin with, to name a few. Sure, there are always workarounds, but the time and productivity it has cost my team would have easily justified getting a Mac, which IT thinks is too expensive :shrug:.
First, VirtualBox's performance has never been great, you'd want to use something like VMware.
Second, why use a cross-platform app inside the VM? Using the VM only for a command line interface over SSH will get rid of any lag. You can share the host OS file system inside the VM, use GUI apps on the host OS against the native file system, and you won't have to worry about filling up the VM disk.
Connecting some $1k MacBook Air to an external monitor + keyboard would get you a decent desktop experience, and you can even take it with you. Not sure how that's too expensive.
VMware or Hyper-V should perform better... though, why not go headless? Mount your Linux filesystem in Windows (or vice versa) over a local network share, edit in Windows, and run in an SSH prompt. That's how I've done it in the past. Lately I've just gotten used to the Windows-isms in Git Bash and Docker Desktop, but I'm looking forward to the WSL 2 based release.
That's basically what VSCode does, except without all the setup hassle [0].
[0] https://code.visualstudio.com/docs/remote/remote-overview
In the meantime, have you checked out MSYS2? It's the best option I've found for a *nix toolchain until WSL2 arrives. :)
WSL2 is dependent on Hyper-V, though.
We work with a lot of independent contractors, many of whom get by fine with Windows Home, so this is great news:
> ...because WSL 2 works on Windows 10 Home edition, so will Docker Desktop
Glad to see this too:
> bind mounts from WSL will support inotify events and have nearly identical I/O performance as on a native Linux machine
I didn't think Hyper-V was available on Windows 10 Home Edition; has this changed or was it always available?
It's not. It's possible Microsoft is making an exception for WSL 2, but I highly doubt the user will be able to run arbitrary VMs on Hyper-V without a Pro edition of Windows.
According to others in this thread, if Hyper-V is enabled then the Windows host actually partially runs on Hyper-V (which is part of the reason you can't use other hypervisors with Hyper-V turned on).
The running theory is that they want Hyper-V on for all installs (as in, turned on by default).
The Virtual Machine Platform component is on Home; Hyper-V itself is not.
Boy I'm happy Microsoft is making Windows 10 into a great development platform for the web!
One of the rare things to feel upbeat about is that we have great choice when it comes to OSes.
Competition at the OS level is good. Windows getting better will only make others try harder.
As a former Windows user, then macOS convert, I look at what's happening on Windows with envy. macOS is getting prettier and perhaps more consistent, but buggier. It's very frustrating. I've had so many problems with security updates, with both the latest and other supported versions. Apple support is terrible these days. I filed a bug report with them and weeks later they give me absolutely lame replies.
The only thing that gives me pause, still, about Windows 10: it updates when it wants to and that is difficult to turn off (you can set your network to "metered" and I hear that will do it). Also the telemetry. I feel like it spies on its users more than Apple's product does.
The updates, btw, got a lot better even for Windows 10 Home users. It can be argued whether letting users turn off updates is an overall good idea, but at least you have more flexibility in postponing them with 1903.
I just wish they would take the advertising right out of Windows 10, as well as Bing in the start menu etc.
It really ruins it.
Many standard Windows apps are currently also quite buggy, including Store and Mail to name a few.
Updates were improved in 1903
Better late than never, but I feel sorry for people doing web dev on Windows, where the tooling experience has been way worse than on Mac for a decade now.
And some people can't even try to switch out of Windows.
I kind of feel the same way about using macOS. At this point in time if your hardware works with Linux you're missing out on a lot of things by using macOS. Windows/WSL2 removes the hardware compatibility part leaving macOS to catch up.
What is there to miss, though: Linux-exclusive tooling, or even Windows-exclusive tooling?
There are a lot of small things that, to me personally, add up to a much bigger annoyance when using macOS. The lack of an official hypervisor means I have to use 3rd-party stuff; the few times I used it I had kernel panics on wakeup. Old versions of all the command line utilities mean I need brew to install newer ones, and that has broken on me at least once. Updates are slow, both in availability and installation time. If I need new stuff, I'm forced to upgrade to an entirely new major release.
Compare that to a rolling release distro like Arch, a fast-moving one like Fedora, or slower-moving Ubuntu LTS with HWE stack upgrades, and life is really easy. You get a built-in hypervisor (KVM) that can be used by the Android emulator, your libvirt VMs, and Docker. Want to run dtrace/bpftool? No need to compromise security. Need the latest GCC? Easy enough to get. It goes on and on. Not to mention the package managers are pretty good on all Linux distros nowadays.
With Windows the downside was that WSL 1 was slow; macOS was faster than that, although still slower than Linux for FS operations etc. But now, with WSL 2 pretty much matching native Linux performance and letting you use Docker natively, it suddenly becomes a much better option: you can buy cheaper hardware that does what you want without relying on Apple to give you the right keyboard, number of ports, etc.
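The built-in KVM point is easy to sanity-check on any Linux box, by the way. A small sketch using the standard kernel interfaces (the CPU flags in /proc/cpuinfo and the /dev/kvm device node):

```shell
# vmx = Intel VT-x, svm = AMD-V; /dev/kvm appears when the kvm module is loaded.
if grep -qE 'vmx|svm' /proc/cpuinfo && [ -e /dev/kvm ]; then
    echo "KVM ready: CPU virtualization flags present and kvm module loaded"
else
    echo "no KVM: missing CPU flags, or kvm/kvm_intel/kvm_amd not loaded"
fi
```

If the second branch fires on hardware you know supports VT-x/AMD-V, the usual culprit is virtualization being disabled in firmware settings.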
Yea I'm really not looking forward to switching to macOS from Linux at a new job I'm starting. It's all these minor little things that add up to a decent-sized headache.
I am optimistic for WSL 2 because, while Linux was an option I was given, it came with a very clear "do so at your own peril; we have a lot of homegrown tools that are not tested outside Windows and macOS". In a year or so maybe I'll switch to Windows with WSL 2, or just back to Linux, but there was no good reason to rock that boat at this time.
I’ve been following this longstanding Docker issue: https://github.com/docker/docker.github.io/issues/6910
The comments make it pretty clear how the GitHub crew feels about it, but I'm curious about the response over here.
It seems unrelated to this article?
I usually just report promo emails received after this kind of forced sign-up as spam.
This is great news. No more CIFS bind mounts required for volume sharing with Windows drives. This is a particular issue on corporate/enterprise Windows boxes.
It was also pretty unstable. Low memory conditions inside the Docker VM => SMB times out => I/O terminated => file corruption. At least it did support mmap, which was an issue with the original VirtualBox-backed Docker for Mac...
Now, fingers crossed for transparent inotify...
From what I understand, this will only affect Linux containers running on Windows. Is that correct? What about Windows containers? I know they're not nearly as popular, but we did evaluate them at one time for running a legacy Windows application. I doubt we're alone in that use case.
So... will WSL 2 need to be installed the same way as WSL? Because WSL will not install on my work machine, as it is locked down by the firm. Will I lose local Docker?
I'm very skeptical of this new technology. I used VirtualBox for years and I will continue to do so. I just don't see any reason to switch. Mounts? VirtualBox can do that. Fast boot? Well, VirtualBox boots in seconds; that's more than enough for me. Dynamic memory allocation? VirtualBox has memory ballooning; if I ever ran into memory limits, I could always use that. And zero problems with all those Hyper-V conflicts: Android emulators, snapshot support, VM import/export, actually any OS rather than just Linux, etc.
I've used VMware since 2007, and Hyper-V and VirtualBox for more than 5 years, and I don't have such strong opinions; I use whichever is available and does the job. Because I had fewer problems with VMware and Hyper-V, I almost gave up on VirtualBox. YMMV.
It's sad that all this needs Windows 10 Pro, which leaves learner developers unable to use Docker properly unless they upgrade.
Amazes me how far people go to avoid simply running Linux. Why isn't that the preferred option here?
Because they want a laptop with decent power management that doesn't freak out when connecting or disconnecting additional displays, but they don't necessarily want a Dell XPS or a ThinkPad. They might also appreciate the ability to quickly nuke their Linux environment and create a new one if they really mess up their configs somehow.
None of the issues you speak of are specific to running Linux on a laptop instead of Windows. Linux has decent power management, and I haven't had glitchy issues with displays in years.
The idea of nuking the Linux environment gets easier when you run one natively, as you start to understand the system better.
I understand the idea that everyone wants the same ecosystem to run things on (Docker), but the best part of Linux was that it wasn't the same as Windows.
This convergence of the operating systems is concerning.
I dual boot Linux and Windows 10 on my personal Lenovo X1, and my Linux setup definitely has its moments with multiple displays... It mostly works, until it doesn't, and then it's a headache.
Power management seems OK, but I don't know how that relates to the fan controller, because in Linux my laptop sounds like a jet engine 90% of the time, while in Windows it rarely spins up.
Probably could use some configuration on fan control or CPU throttling :)
I have more issues with Windows than I do with Linux, but then I have been using Linux on the desktop for ~15 years or more now (I would say 19, but there was a BSD period in there).
For me, personally, I just play way too many games for running Linux as a daily driver to be a viable option. I've tried multiple times to move (Manjaro with Lutris was the closest I've gotten, though to be fair I haven't tried Steam Proton yet), but it's never stuck. Yea, I could dual boot and just keep Windows around for gaming, but I'm lazy and flip-flop between coding and gaming too often. For my uses, I've been able to use Windows as a decent dev environment with no major issues, but WSL 2 would make things even better.
Note: this is all for personal projects on my home desktop, my work laptop is Windows and I have no problems with it for my current position.
Because I'm too lazy and too cheap to have two computers.
Just reading about video driver issues and sleep/hibernation configuration puts me off the Linux desktop. Windows is a great and modern user-friendly OS. Office 365 is awesome too. WSL is the last missing piece.
So WSL2 would use a lightweight VM running a Linux kernel to avoid the slow windows I/O.
Will I have full access to the GPU?
Based on this FAQ, no.
https://docs.microsoft.com/en-us/windows/wsl/wsl2-faq#can-i-...
Note that during the talk they said they heard the cries for GPU access loud and clear (they mentioned AI). It will hopefully come later.
Since it runs on Hyper-V I would hope so, but it's hard to find any official sources confirming that either way.
Hyper-V machines do not have access to the GPU; the previous implementation is legacy and can no longer be selected. No one knows when the new implementation is going live.
Not true. Hyper-V supports Discrete Device Assignment [1] and therefore GPUs, USB hosts, etc.
Where you'll run into issues is on the NVIDIA side. They deliberately cripple their consumer drivers to prevent use in virtual environments to extract licensing dollars.
(Haven't tried AMD but last I remember, their cards don't support being pushed around by the IOMMU. Could be wrong.)
[1] https://docs.microsoft.com/en-us/windows-server/virtualizati...
So it sounds like maybe-yes? My use case is machine learning. I have an AMD GPU and tensorflow is supported through ROCm, but ROCm only runs under ubuntu. Currently I dual boot Windows/Ubuntu but would love to be able to stay in Windows.
Direct GPU access requires way more than just a virtual machine.
You would need PCIe passthrough so the VM can control the GPU. For that, you would need either a separate GPU or a GPU that supports SR-IOV, plus a board that supports the IOMMU.
ROCm specifically is not only userland but also a kernel driver (amdkfd/amdgpu). To use it, you are going to be dual booting into native Linux for a while.
Ah bummer. Well hopefully someday soon the Windows software catches up
This is a big step toward the full merger of Windows and Linux on the desktop, which is the most logical way forward for both platforms. Kudos to Docker for taking a primary role and following Microsoft in this direction.
Can you explain how this is the most logical way forward for Linux?
For me, it feels it could be the opposite with Windows potentially impacting how Linux moves forward by having more devs using WSL instead of actual Linux and not actually optimizing for native Linux support.
For an example, Valve's Proton vs. native linux games, if I was a game dev, I wouldn't bother porting any Windows games to Linux, just rely on Proton instead.
>For an example, Valve's Proton vs. native linux games, if I was a game dev, I wouldn't bother porting any Windows games to Linux, just rely on Proton instead
I see what you mean, but on the other hand, most game development studios already didn't bother to port anyway.
I hope they will pick Linux as the kernel and Plasma as the desktop environment in this full merger, and ls as the listing directory command.
> ls as the listing directory command
ls works in PowerShell (though I believe it's just an alias)
`ls` and many other traditional unix command names are aliased to PowerShell equivalents that don't take the same arguments and often behave very very differently. In `ls`'s case, it's `Get-ChildItem`. It is my opinion that this was one of the worst decisions made by PowerShell and everyone would be better off if they'd never done that.