Zambyte 3 days ago

I was skeptical of the claim that it's faster than traditional SSH, but the README specifies that it is faster at establishing a connection, and that active connections are the same speed. That makes a lot of sense and seems like a reasonable claim to make.

  • s-macke 3 days ago

    It is not faster in this sense. However, an SSH connection can have multiple substreams, especially for port forwarding. Over a single classical connection, this can lead to head-of-line blocking, where an issue in one stream slows everything down. The QUIC/HTTP3 protocol can solve this.

    • thayne 3 days ago

      Does this implementation do that too, or does it just use a single h3 stream?

      • s-macke 3 days ago

        The answer is yes according to code and documentation [0]:

        > The stream multiplexing capabilities of QUIC allow reducing the head-of-line blocking that SSHv2 encounters when multiplexing several SSH channels over the same TCP connection

        ....

        > Each channel runs over a bidirectional HTTP/3 stream and is attached to a single remote terminal session

        [0] https://www.ietf.org/archive/id/draft-michel-remote-terminal...

    • inetknght 3 days ago

      Fun fact: SSH also supports multiple streams. It's called multiplexing.

      • lxgr 2 days ago

        Multiple streams at the application level, which can be head-of-line blocked due to all being multiplexed on the same transport layer connection.

        The former kind of multiplexing addresses functionality, the latter performance.

      • Aachen 2 days ago

        Doesn't it run over a single TCP connection in all cases, unless you manually launch multiple and manually load-balance your clients across them? As in, it won't/can't open a new TCP connection when you open a new connection in the SOCKS proxy or port forward. They'll all share one head-of-line and block each other.

        Not that I've ever noticed this being an issue (no matter how much we complain, internet here is pretty decent)

        Edit: seeing as someone downvoted your hour-old comment just as I was adding this first reply, I guess maybe they 'voted to disagree'... Would be nice if the person would comment. It wasn't me anyway

  • notepad0x90 3 days ago

    Although, dollars-to-donuts my bet is that this tool/protocol is much faster than SSH over high-latency links, simply by virtue of using UDP. Not waiting for ACKs before sending more data might be a significant boost for things like scp'ing large files from one part of the world to another.

    • nh2 3 days ago

      SSH has low throughput on high latency links, but not because it uses TCP. It is because SSH hardcodes a too-small maximum window size in its protocol, in addition to TCP's own.

      This SSH window size limit is per ssh "stream", so it could be overcome by many parallel streams, but most programs do not make use of that (scp, rsync, piping data through the ssh command), so they are much slower than plain TCP as measured e.g. by iperf3.
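
      An easy way to see the gap on a high-latency link is to compare raw TCP against an scp of comparable size (host name is a placeholder):

        # raw TCP throughput (assumes iperf3 -s is running on the remote)
        iperf3 -c remote.example.com -t 10
        # typically far slower at high RTT, over the same path
        scp bigfile remote.example.com:/tmp/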

      I think it's silly that this exists. They should just let TCP handle this.

      • jlokier 3 days ago

        > I think it's silly that this exists. They should just let TCP handle this.

        No, unfortunately it's necessary so that the SSH protocol can multiplex streams independently over a single established connection.

        If one of the multiplexed streams stalls because its receiver is blocked or slow, and the receive buffer (for that stream) fills up, then without window-based flow control, that causes head-of-line blocking of all the other streams.

        That's fine if you don't mind streams blocking each other, but it's a problem if they should flow independently. It's pretty much a requirement for opportunistic connection sharing by independent processes, as SSH does.

        In some situations, this type of multiplexed stream blocking can even result in a deadlock, depending on what's sent over the streams.

        Solutions to the problem are to either use window-based flow control, separate from TCP's, or to require all stream receive buffers to expand without limit, which is normally unacceptable.

        HTTP/2 does something like this.

        I once designed a protocol without this, thinking multiplexing was enough by itself, and found out the hard way when processes got stuck for no apparent reason.

        • nh2 2 days ago

          Then:

          * Give users a config option so I can adjust it to my use case, like I can for TCP. Don't just hardcode some 2 MB (which was even raised to this in the past, showing how futile it is to hardcode it, because it clearly needs adjusting to people's networks and ever-increasing speeds). It is extremely silly that within my own networks, controlling both endpoints, I cannot achieve TCP speeds over SSH, but I can with nc and symmetric encryption piped in (see the sketch below). It is silly that any TCP/HTTP transfer is reliably faster than SSH.

          * Implement data dropping and retransmissions to handle blocking -- like TCP does. It seems obviously asking for trouble to want to implement multiplexing, but then only implement half of the features needed to make it work well.

          When one designs a network protocol, shouldn't one of the first sanity checks be "if my connection becomes 1000x faster, does it scale"?
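
          For reference, the nc-plus-symmetric-encryption pipe mentioned above can be as simple as this sketch (host, port, and key file are placeholders; nc flag syntax varies between netcat variants):

            # on the receiver
            nc -l 9000 | openssl enc -d -aes-256-ctr -pbkdf2 -pass file:secret.key > bigfile

            # on the sender
            openssl enc -aes-256-ctr -pbkdf2 -pass file:secret.key < bigfile | nc receiver.example.com 9000

          There is no application-layer window here, so it runs at whatever TCP itself can deliver.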

      • lxgr 2 days ago

        You're mixing application layer multiplexing and transport layer multiplexing.

        If you use the former without the latter, you'll inevitably have head-of-line blocking issues if your connection is bandwidth or receiver limited.

        Of course not every SSH user uses protocol multiplexing, but many do, as it avoids repeated and relatively expensive (in terms of CPU, performance, and logging volume) handshakes.

      • Operyl 3 days ago

        Off the top of your head, do you know of any file transfer tools that do utilize multiple streams?

    • lxgr 2 days ago

      That's not really a common TCP problem. Only when there's something severely weird going on in the return path (e.g. an extremely asymmetric and/or congested return path connection dropping ACKs while the forward path has enough capacity) does the ACK mechanism limit TCP.

      Also, HTTP/3 must obviously also be using some kind of acknowledgements, since for fairness reasons alone it must be implementing some congestion control mechanism, and I can't think of one that gets by entirely without positive acknowledgements.

      It could well be more efficient than TCP's default "ack every other segment", though. (This helps in the type of connection mentioned above; as far as I know, some DOCSIS modems do this via a mechanism called "ack compression", since TCP is generally tolerant of losing some ACKs.)

      In a sense, the win of QUIC/HTTP/3 in this sense isn’t that it’s not TCP (it actually provides all the components of TCP per stream!); it’s rather that the application layer can “provide its own TCP”, which might well be more modern than the operating system’s.

    • fanf2 3 days ago

      Yeah, there’s a replacement for scp that uses ssh for setup and QUIC for bulk data transfer, which is much faster over high-latency paths.

      https://github.com/crazyscot/qcp

    • bcrl 3 days ago

      That's why mosh exists, as it is purpose built for terminals over high latency / high packet loss links.

      • eichin 3 days ago

        But mosh doesn't actually do any of what ssh does, let alone do it faster - it wins by changing the problem, to the vastly narrower one of "getting characters in front of human eyeballs". (Which is amazing if that's what you were trying to do - but that has nothing to do with multiple data streams...)

      • espadrine 2 days ago

        mosh is hard to get into. There are many subtle bugs; a random sample that I ran into is that it fails to connect when the LC_ALL variables diverge between the client and the server [0]. On top of that, development seems abandoned. Finally, when running a terminal multiplexer, the predictive system breaks the panes, which is distracting.

        [0]: https://github.com/mobile-shell/mosh/issues/98

    • xorcist 3 days ago

      Of course it has ACKs. There are protocols without ACKs but they are exotic and HTTP3 is not one of them.

      • IshKebab 3 days ago

        He said not waiting for ACKs.

        • xorcist 3 days ago

          That makes even less sense; unless we are talking about XMODEM, every protocol uses windowing to avoid getting stuck waiting for ACKs.

          Of course you need to wait for ACKs at some point though, otherwise they would be useless. That's how we detect, and potentially recover from, broken links. They are a feature. And HTTP3 has that feature.

          Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen. The use case of SSH (long lived connections with shorter lived channels) is vastly different from the short lived bursts of many connections that QUIC was intended for. My best guess is that it could go both ways, depending on the actual implementation. The devil is in the details, and there are many details here.

          Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called a "long fat network" in networking lingo, and is not what TCP was built for. Look at pages like this Linux Tuning for High Latency networks guide: https://fasterdata.es.net/host-tuning/linux/
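
          The usual starting point from those guides is raising the kernel's TCP buffer ceilings, along these lines (values are illustrative, not recommendations):

            sysctl -w net.core.rmem_max=67108864
            sysctl -w net.core.wmem_max=67108864
            sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
            sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"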

          There is also the HPN-SSH project, which increases the buffers of SSH even more than what is standard. It is seldom needed anymore since both Linux and OpenSSH have improved, but it can still be useful.

          • formerly_proven 3 days ago

            > Is it better implemented than the various TCP algorithms we use underneath regular SSH? Perhaps. That remains to be seen.

            SSH multiplexes multiple channels on the same TCP connection, which results in head-of-line blocking issues.

            > Should you find yourself limited by the default buffering of SSH (10+Gbit intercontinental links), that's called "long fat links" in network lingo, and is not what TCP was built for.

            Not really, no. OpenSSH has a 2 MB window size (in the 2000s, 64K), even with just ~gigabit speeds it only takes around 10-20 ms of latency to start being limited by the BDP.
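
            The arithmetic is simple: a window-limited stream moves at most window/RTT, so

              2 MB / 16 ms  ≈ 125 MB/s ≈ 1 Gbit/s
              2 MB / 100 ms = 20 MB/s  = 160 Mbit/s

            i.e. past roughly 16 ms of RTT a 2 MB window can no longer fill a gigabit pipe, and on a ~100 ms intercontinental path a single SSH channel tops out around 160 Mbit/s regardless of link capacity.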

          • IOT_Apprentice 3 days ago

            Well, you could peruse the code. Then see what it does and explain it.

    • finaard 3 days ago

      Not really that relevant - anybody regularly using SSH over high latency links is using SSH+mosh already anyway.

      • oefrha 3 days ago

        The huge downside of mosh is it handles its own rendering and destroys the scrollback buffer. (Yes I know I can add tmux for a middle ground.)

        But it's still irrelevant here; specifically called out in README:

        > The keystroke latency in a running session is unchanged.

        • esseph 3 days ago

          "huge downside" (completely mitigated by using tmux)

          The YouTube and social media eras made everyone so damn dramatic. :/

          Mosh solves a problem. tmux provides a "solution" for some that resolves a design decision that can impact some user workflows.

          I guess what I'm saying here is, if you NEED mosh, then running tmux is not even a hard ask.

          • oefrha 3 days ago

            No, it’s not completely mitigated by tmux. mosh has two main use cases (that I know of):

            1. High latency, maybe even packet-dropping connections;

            2. You’re roaming and don’t want to get disconnected all the time.

            For 2, sure, tmux is mostly okay; it’s not as versatile as the native buffer if you use a good terminal emulator, but whatever. For 1, using tmux in mosh gives you an awful, high latency scrollback buffer compared to the local one you get with regular ssh. And you were specifically talking about 1.

            For read-heavy, reconnectable workloads over high latency connections I definitely choose ssh over mosh or mosh+tmux and live with the keystroke latency. So saying it’s a huge downside is not an exaggeration at all.

            • esseph 2 days ago

              I believe this depends on the intent of your connection! The first sentence of your last paragraph: "For read-heavy, reconnectable workloads" - a-ha!

              From my stance, and where I've used mosh has been in performing quick actions on routers and servers that may have bad connections to them, or may be under DDoS, etc. "Read" is extremely limited.

              So from that perspective and use case, the "huge downside" has never been a problem.

          • jeffhuys 2 days ago

            Honestly, it feels like the one being dramatic here is you. Because the one you’re replying to added “huge”, you added a whole sentence calling everyone “so damn dramatic”. But oh well.

            • esseph 2 days ago

              You know what has a "huge downside"? Radiation therapy.

              Not a scroll back buffer workflow issue.

      • lxgr 2 days ago

        If you believe that, you clearly haven't had to work with mosh in a heavily firewalled environment.

        Filtering inbound UDP on one side is usually enough to break mosh, in my experience. Maybe they use better NAT traversal strategies since I last checked, but there's usually no workaround if at least one network admin involved actively blocks it.

  • zielmicha 2 days ago

    SSH is actually really slow on high latency high bandwidth links (this is what the HPN-SSH patches fix: https://www.psc.edu/hpn-ssh-home/hpn-ssh-faq). It's very apparent if you try running rsync between two datacenters on different continents.

    HTTP/3 (and hopefully this project) does not have this problem.

    • Aachen 2 days ago

      Sounds like a complex change to fix a security protocol but, reading the page, it seems to just increase the send buffer, which indeed makes sense for high-latency links

  • wolrah 3 days ago

    It also tracks with HTTP/3 and QUIC as a whole, as one of the main "selling points" has always been reduced round trips leading to faster connection setup.

  • gchamonlive 3 days ago

    If being faster at making a connection reduced latency, even a little, it would mean a really big improvement for other protocols built on top of it, like rsync. But if rsync reuses an active connection to stream the files and calculate changes, then the impact might be negligible.

  • sim7c00 a day ago

    openssh is generally praised not for its speed but for its security track record. i hope this thing doesn't sacrifice that for a little more speed in something that generally doesn't require more speed..

  • nine_k 3 days ago

    Should be genuinely faster over many VPNs, because it avoids the "TCP inside TCP" tar pit.

  • malux85 3 days ago

    I read this and thought “who cares”?

    I use ssh everywhere, maybe establish 200+ SSH sessions a day for my entire career of 20 years and never once have I thought “I wish establishing this connection was faster”

    • efitz 3 days ago

      Good for you.

      There are a lot of automation use cases for SSH where connection setup time is a significant impediment; if you’re making dozens or hundreds of connections to hundreds or thousands of hosts, those seconds add up.

temp0826 3 days ago

I don't know why it makes me a little sad that every application layer protocol is being absorbed into http.

  • xg15 3 days ago

    If this were really the case, it would indeed be sad, as the standard HTTP request/response model is both too restrictive and too overengineered for many usecases.

    But both HTTP/2 and QUIC (the "transport layer" of HTTP/3) are so general-purpose that I'm not sure the HTTP part really has a lot of meaning anymore. At least QUIC is relatively openly promoted as an alternative to TCP, with HTTP its primary usecase.

    • singpolyma3 3 days ago

      Indeed. "Using quic with a handshake that smells like http3" is hardly "using http" imo

  • zenmac 3 days ago

    Yeah, we have those good old network people, or their corporate overlords (who don't know much about tech), to thank for that.

    If you've ever used wifi in an airport, or even some hotel work suites around the world, you will notice that Apple Mail can't send or receive emails. It is probably some company-wide policy to first block port 25 (that is even the case with some hosting providers), all in the name of fighting SPAM. Pretty soon, 143, 587, 993, 995... are all blocked. I guess 80 and 443 are the only ones that can go through any firewalls nowadays. It is a shame really. Hopefully v6 will do better.

    So there you go. And now the EU wants to do ChatControl!!!! Please stop this nonsense and listen to the people who actually know tech.

    • Telemakhos 3 days ago

      Port 25 is insecure and unencrypted; the EU doesn't even need ChatControl to hoover up that data, and you'd better believe anything going through an airport wifi router unencrypted is being hoovered by someone, no matter what jurisdiction you're in. Apple Mail prefers 587 for secure SMTP and 993 for secure IMAP.

      People were (wisely) blocking port 25 twenty years ago.

      • blueflow 2 days ago

        Port 25, which you call insecure and unencrypted, is using the same protocol as port 587, which you call secure - SMTP with STARTTLS.

      • lxgr 2 days ago

        The main problem with port 25 isn't that it's unencrypted, but rather that it's mixing two concerns: (often unauthenticated) server-to-server mail forwarding, and (hopefully always authenticated, these days) client-to-server mail submission.

        A network admin can reasonably want to have the users of their network not run mail servers on it (as that gets IPs flagged very quickly if they end up sending or forwarding spam), while still allowing mail submission to their servers.

      • Fnoord 3 days ago

        > People were (wisely) blocking port 25 twenty years ago.

        20 years ago (2005) STARTTLS was still widely in use. Clients can be configured to call it when STARTTLS isn't available. But clients can also be served bogus or snake oil TLS certs. Certificate pinning wasn't widely in use for SMTP in 2005.

        Seems STARTTLS has been deprecated since 2018 [1]:

        Quote: For email in particular, in January 2018 RFC 8314 was released, which explicitly recommends that "Implicit TLS" be used in preference to the STARTTLS mechanism for IMAP, POP3, and SMTP submissions.

        [1] https://serverfault.com/questions/523804/is-starttls-less-sa...

      • zenmac 3 days ago

        Ah, thanks for the correction. Just changed my post above to 587. What I mean is: why block all the ports? Just keep them open and let the user decide if they want to use them. And Linux people can always use ufw on their side to be safe. Back in the dot-com days, there were people also using telnet, but that got changed to ssh.

        Is it because it is hard to detect what type of request is being sent? Stream vs. non-stream, etc.?

    • codedokode 3 days ago

      Having all protocols look the same makes traffic shaping harder. If you develop a new protocol, do not make your protocol stand out; you won't win anything from it. Ideally all protocols should look like a stream of random bytes without any distinctive features.

    • lxgr 2 days ago

      Blocking outbound port 25 is completely reasonable, just like blocking inbound port 80 or 443 would be (if inbound connections even were an option, which they aren't in most networks, at least for IPv4).

      Blocking ports 587, 993, 995 etc. is indeed silly.

  • MrDarcy 3 days ago

    It’s a necessary evil resulting from misguided corporate security teams blocking and intercepting everything else.

    Looking at you, teams who run Zscaler with tls man in the middle attack mode enabled.

  • chrisfosterelli 3 days ago

    It feels a little like a kludge as long as we keep calling it http. The premise makes sense -- best practices for connection initialization have become very complex and a lot of protocols need the same building blocks, so its beneficial to piggyback on the approach taken by one of the most battle tested protocols -- but it's not really hypertext we're using it to transfer anymore so it feels funny.

    • xg15 3 days ago

      Yeah, building it on top of QUIC is reasonable, but trying to shoehorn SSH into HTTP semantics feels silly.

      • conradludgate 3 days ago

        It's on top of HTTP CONNECT, which is intended for converting an existing request (QUIC stream) into a transparent byte stream. This removes the need to deal with request/response semantics.
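
        Schematically, the client's Extended CONNECT request looks something like this (RFC 9220 semantics; the exact :protocol token and path shown here are assumptions, whatever the draft and server agree on):

          :method = CONNECT
          :protocol = ssh3
          :scheme = https
          :authority = server.example.com:443
          :path = /ssh3-endpoint

        Once the server answers 200, the underlying QUIC stream is handed over as a plain bidirectional byte pipe.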

        The reasons stated for using HTTP/3 and not QUIC directly make sense, with little downside - you can run it behind any standard HTTP/3 reverse proxy, under some subdomain or path of your choosing, without standing out to port scanners. While security through obscurity is not security, there's no doubt that it reduces the CPU overhead that scanners would cause if they discovered your SSH server and tried a bunch of login attempts.

        Running over HTTP3 has an additional benefit. It becomes harder to block. If your ssh traffic just looks like you're on some website with lots of network traffic, eg google meet, then it becomes a lot harder to block it without blocking all web traffic over http3. Even if you do that, you could likely still get a working but suboptimal emulation over http1 CONNECT

        • thayne 3 days ago

          > you can run it behind any standard http3 reverse proxy

          As long as said proxy supports upgrading an HTTP CONNECT into a bidirectional connection. Most that I know of do, but it may require additional configuration.

          Another advantage of using http/3 is it makes it easier to authenticate using something like oauth 2, oidc, saml, etc. since it can use the normal http flow instead of needing to copy a token from the http flow to a different flow.

        • MrDarcy 3 days ago

          Google Cloud’s identity aware proxy underpinning the gcloud compute ssh command works the same way, as an http CONNECT upgrade.

          • hxtk 3 days ago

            It also gives you two authenticated protocol layers, which helps them because most standard protocols don’t support multiple authenticated identities. Their zero trust model uses it to authenticate each time you make a connection that your machine has authorization to connect to that endpoint via a client certificate, and then the next protocol layer authenticates the user.

  • codedokode 3 days ago

    This is actually good because every protocol ideally must look the same to make traffic shaping and censorship harder. Either random stream of bytes or HTTP.

    If you are designing a protocol, unless you have a secret deal with telcos, I suggest you masquerade it as something like HTTP so that it is more difficult to slow down your traffic.

    • doublerabbit 3 days ago

      It's been known they throttle HTTP too.

      So your super speedy HTTP SSH connection then ends up being slower than if you just used ssh. Especially if your http traffic looks rogue.

      At least when it's its own protocol you can come up with strategies to work around the censorship.

      • codedokode 2 days ago

        No. If you masquerade as HTTPS you can set your SNI to trump.example.com or republicans.example.com and nobody would dare to slow down this traffic. If you have a custom, detectable protocol then you already lost the game.

        There is not only censorship, but traffic shaping when some apps are given a slow lane to speed up other apps. By making your protocol identifiable you gain nothing good.

  • oofbey 3 days ago

    I hear you that it feels like something is off. The lack of diversity feels like we're losing robustness in the ecosystem. But it can be a good thing too. A lot of security issues are concentrated into one stack that is very well maintained. So that means everything built on top of it shares the same attack surface. Which yes, means it can all come crashing down at once, but also that there are many eyes looking for vulnerabilities and they'll get fixed quickly. Similarly, perf optimizations are all shared, and when things get this popular they can even get pushed down into hardware.

    It's not like we see a lot of downsides from the world having collectively agreed on TCP/IP over IPX/SPX or DECnet or X.25. Or from the Linux kernel being everywhere.

    • temp0826 3 days ago

      Humbug. I feel an urge to implement token ring over fiber. Excuse me while I yell at clouds.

  • fulafel 3 days ago

    Is there some indication that this is going to be adopted? The linked IETF submission is an expired individual draft (which anyone can send in) and not from the SSH spec working group; it sounds like this is from some researchers that used SSH3 as an optimistic name.

  • m463 13 hours ago

    also, would someone need to have rights to ssh to call it ssh version 3?

    kind of like if a random person created an (unaffiliated) hacker news 2.0 website.

  • attentive 3 days ago

    quic is more layer 4, close to a tcp reimplementation. Far from http at layer 7.

kelnos 3 days ago

> Establishing a new session with SSHv2 can take 5 to 7 network round-trip times, which can easily be noticed by the user. SSH3 only needs 3 round-trip times. The keystroke latency in a running session is unchanged.

Bummer. From a user perspective, I don't see the appeal. Connection setup time has never been an annoyance for me.

SSH is battle-tested. This feels risky to trust, even whenever they end up declaring it production-ready.

  • esjeon 3 days ago

    I'm really puzzled by that statement.

    RFC 4253(SSH Transport Layer Protocol)[1] says:

       It is expected that in most environments, only 2 round-trips will be needed for full key exchange, server authentication, service request, and acceptance notification of service request.  The worst case is 3 round-trips.
    
    I've never experienced any issues w/ session initialization time. It should be affected by the configuration of both server and client.

    [1]: https://datatracker.ietf.org/doc/html/rfc4253

  • pancsta 3 days ago

    UDP tunnels are the main feature - way lighter than wireguard - plus OpenID auth.

    • rollcat 2 days ago

      Wireguard (and certainly every VPN protocol worth your attention) runs on UDP. TCP-over-TCP is a disaster, no sane person does that.

      And what's "lighter" than Wireguard? It's about as simple as it can get (certainly simpler than QUIC).

    • otabdeveloper4 3 days ago

      > also OpenID auth

      Wait, what? Does it actually work?

      If yes, this is a huge deal. This potentially solves the ungodly clusterfuck of SSH key/certificate management.

      (I don't know how OpenID is supposed to interact with private keys here.)

  • lxgr 2 days ago

    Yes, and those that have fought in these battles know its limitations. Head-of-line blocking when using multiplexing is definitely one of them. This is a very reasonable incremental improvement.

    Importantly, it does not seem to switch out any security mechanisms and is both an implementation and a specification draft, which means that OpenSSH could eventually pick it up too so that people don't have to trust a different implementing party.

    • rollcat 2 days ago

      > [...] which means that OpenSSH could eventually pick it up too [...]

      Remember OpenSSH = OpenBSD. They have an opinionated & conservative approach towards adopting certain technologies, especially if it involves a complex stack, like QUIC.

      "It has to be simple to understand, otherwise someone will get confused into doing the wrong thing."

  • thethimble 3 days ago

    Head-of-line blocking is likely fully addressed by ssh3, where multiplexing several ports/connections over a single physical ssh3 connection should be faster.

    • john01dav 3 days ago

      Calling anything here "physical" is strange and confusing to me. Surely you don't mean the physical layer?

      • zamadatix 3 days ago

        I've seen it a lot with communication protocols for some reason; I guess it's just relatively clear that it means "the non-virtualized one", even though it's clearly a misnomer. E.g. with VRRP a ton of people just say "the physical IP" when talking about the address that's not the VIP, even though the RFC refers to it as "the primary" IP. Arguably "primary IP" is more confusing as to which is being referred to, even though it's more technically accurate.

        Of course, maybe there's a perfectly obvious word which can apply to all of those kinds of situations just as clearly without being a misnomer I've just never thought to mention in reply :D.

  • Levitating 3 days ago

    > Connection setup time has never been an annoyance for me.

    It has always bothered me somewhat. I sometimes use ssh to directly execute a command on a remote host.

    • E39M5S62 3 days ago

      If you're doing repeated connections to the same host to run one-off commands, SSH multiplexing would be helpful for you. SSH in and it'll open up a local unix domain socket. Point additional client connections to the UDS and they'll just go over the existing connection with out requiring round trips or remote authentication. The socket can be configured to keep itself alive for a while and then close after inactivity. Huge huge speed boost over repeated fresh TCP connections.

      • oezi 2 days ago

          Why isn't using this UDS the default behavior?

          How do you enable it?

        • E39M5S62 a day ago

          Look for documentation on the ControlMaster / ControlPath / ControlPersist options for OpenSSH.
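
          A minimal ~/.ssh/config sketch of what that looks like (host pattern and timeout are just examples):

            Host *.example.com
              # the first connection creates the master socket
              ControlMaster auto
              # one socket per user/host/port
              ControlPath ~/.ssh/cm-%r@%h:%p
              # keep the master alive 10 minutes after last use
              ControlPersist 10m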

        • oarsinsync 2 days ago

          > Why isn't this the default behavior to use this UDS?

          Because it’s insecure to use on multiuser systems, as it presents opportunistic access to remote systems to root users on your local system: root can read and write into your UDS too.

          As a user, you have to explicitly opt into this scenario if you deem it acceptable.

          • immibis a day ago

            I don't think that's the reason. root can theoretically do everything and not much is protected from root. root can su to your account and make a new SSH connection. root can replace the ssh command with one that copies their public key before opening a shell.

  • athrowaway3z 3 days ago

    If you are looking for a smoother UX: https://mosh.org/

    • gdevenyi 3 days ago

      Sadly this project looks dead.

      • fsiefken 3 days ago

        still works great though; there's a lot of great software I use that hasn't had an update in years or even decades

      • voxadam 3 days ago

        Is it dead or just mature?

        • zamadatix 3 days ago

          A mature project should still be fixing bugs, which something like mosh is bound to keep running into. From that perspective, it doesn't seem like it's just mature. There doesn't seem to be a clear all-in-one successor fork taking the reins either; see e.g. https://github.com/mobile-shell/mosh/issues/1339 as a random sample.

          • fsckboy 2 days ago

            mosh is still included in the Fedora repository (and probably others, I didn't check)

            major distros are maintained, and they wouldn't be shipping it if it had bugs and/or was being used as an exploit

            • zamadatix 2 days ago

              Each distro package maintainer is always welcome to maintain patches in their forks for as long as they like, but the quality and life of each will vary per distro, as these efforts are no longer coordinated with an upstream.

              • fsckboy 2 days ago

                i was pointing out that saying the package is unmaintained is likely to be false. to add my comment to your comment, i would imagine that distros are not keeping important patches like security to themselves.

                i.e. the idea that this package is somehow abandoned and therefore should not be trusted is likely to be false

                • zamadatix 2 days ago

                  The above has all been in reference to the mosh project, not any individual distro packaging. E.g. if you "brew install mosh" on macOS right now you will indeed get an official but 3-year-old release, without any patches Fedora (or others) may have applied since: https://formulae.brew.sh/api/formula/mosh.json. The same is true if one goes to the project's GitHub to download it manually.

                  > i would imagine that distros are not keeping important patches like security to themselves.

                  I'm not 100% sure what "keeping to themselves" means in context of GPL 3 code, but one can verify with the mosh GitHub link to see the upstream project has not had a single commit on any branch for the last 2.5 years.

                  The project is dead, it's up to your trust+verification of any specific downstream packaging as to how much of a problem that is for the binary you may be using. Some maintainers may not have noticed/cared enough yet, some maintainers may only carry security fixes of known CVEs, some maintainers may be managing a full fork. The average reader probably wants to note that for their specific binary rather than note Fedora still packages a downstream version (which may be completely different).

psanford 3 days ago

I do hate the name ssh3. I was glad to see this at the top of the repo:

> SSH3 is probably going to change its name. It is still the SSH Connection Protocol (RFC4254) running on top of HTTP/3 Extended connect, but the required changes are heavy and too distant from the philosophy of popular SSH implementations to be considered for integration. The specification draft has already been renamed ("Remote Terminals over HTTP/3"), but we need some time to come up with a nice permanent name.

  • zdw 3 days ago

    Same - this feels equivalent of some rando making a repo called "Windows 12" or "Linux 7".

    • bravetraveler 3 days ago

      LDAP2 or nextVFS... but point awarded. Feels that way because it is. Though my examples aren't great. These things just are; not really versioned. I don't know if major differences would call for ++

      A better 'working name' would be something like sshttp3, lol. Obviously not the successor to SSH2

    • asveikau 3 days ago

      You mean like cryptocurrency bros naming something "web 3.0"?

    • teddyh 3 days ago

      Cf. “JSON5”.

      • Dylan16807 3 days ago

        Eh. JSON forfeited version numbers, and if this analogy ran all the way through then we'd be looking at a scenario where SSH is based on HTTP 1 or 2. In that situation calling the HTTP/3 version SSH3 would make a lot of sense.

  • cozzyd 3 days ago

    Secure Hypertext Interactive TTY

    • _joel 3 days ago

      That sounds a bit crap

      • pixl97 3 days ago

        HITTY then.

      • BobbyTables2 3 days ago

        But this SHIT is really fast!

        You’ll see when the logs drop!

  • theandrewbailey 3 days ago

    Maybe SSH/3 instead (SSH + HTTP/3)?

    • throwaway127482 3 days ago

      Doesn't /3 mean v3? I mean, for HTTP itself, doesn't HTTP/3 == HTTPv3? If so, I don't see how this is any better than SSH3 - both SSH3 and SSH/3 read to me like "SSH v3".

      • theandrewbailey 3 days ago

        Yes, but HTTP is about the only thing that versions with a slash. By writing it SSH/3, it would emphasize its relationship with HTTP/3, instead of it being the third version of SSH.

      • Dylan16807 3 days ago

        > Doesn't /3 mean v3?

        I've seen very little do that. Probably just HTTP, and it's using a slash specifically to emphasize a big change.

    • techscruggs 3 days ago

      I like this idea!

      Having SSH in the name helps developers quickly understand the problem domain it improves upon.

  • nine_k 3 days ago

    /* This is one proper bikeshedding thread if I ever saw one. */

    • dpflan 3 days ago

      sshhh ... don't sidetrack the productive comment generation. (also, SSHHH as a possible name)...

  • formerly_proven 3 days ago

    Easy: hhs instead of ssh (since the even more obvious shh is essentially impossible to google). Stands for, idk, HTTP/3 Hardened Shell or something ("host shell"? sounds like windows)

    • catlifeonmars 3 days ago

      hss? Http/3 Secure Shell?

      • treve 3 days ago

        Or h3ss, pronounced hess

  • pdmccormick 3 days ago

    HTTPSSH.

    Why not just SSH/QUIC, what does the HTTP/3 layer add that QUIC doesn’t already have?

    • arka2147483647 3 days ago

      QuickShell - it should be called

      • bscphil 3 days ago

        QSH?

        • BobbyTables2 3 days ago

          At least that isn’t an existing ham radio Q-code!

      • cpuguy83 3 days ago

        That's already a project (library for building a desktop environment).

    • gclawes 3 days ago

      The ability to use HTTP authentication methods, HTTP headers, etc?

  • noman-land 3 days ago

    SSHTTP

  • fsckboy 2 days ago

    my autism plays out also in the world of words, i.e. names of things, and my comment here is more a reply to all my surrounding comments than to yours:

    ssh is not a shell and ssh is not a terminal, so please everybody stop suggesting name improvements that more deeply embed that confusion.

    back in the day, we had actual terminals, and running inside was our shell which was sh. then there was also csh. then there was the idea of "remote" so rsh from your $SHELL would give you a remote $SHELL on another machine. rsh was not a shell, and it was not a terminal. There were a whole bunch of r- prefixed commands, it was a family, and nobody was confused, these tools were not the thing after the r-, these tools were just the r- part.

    then it was realized that open protocols were too insecure so all of the r- remote tools became s- secure remote tools.

    http is a network protocol that enables other things and gets updated from time to time, and it is not html or css, or javascript; so is ssh a network protocol, and as I said, not a shell and not a terminal.

    just try to keep it in mind when thinking of new names for new variants.

    and if somebody wants to reply that tcp/ip is actually the network protocol, that's great, more clarification is always good, just don't lose sight of the ball.

  • thayne 3 days ago

    qrs for Quic Remote Shell?

    Or h3s for HTTP 3 Shell?

    H3rs for http3 remote shell?

    • ape4 3 days ago

      How about Tortoise Shell - a little joke because its so fast

  • unclet 3 days ago

    Why not HSH, for HTTPS Shell?

  • 0x6c6f6c 3 days ago

    SSH over QUIC

    so, maybe SSHoQ or SoQ

    soq reads better for the CLI I suppose.

    • antod 3 days ago

      HTTP under SSH, or hussh for short.

      • BlaDeKke 3 days ago

        Yeah, this one. hussh is a clear winner.

  • manwe150 3 days ago

    How about ush then? The predecessor was rsh, and the next letter tsh is already taken

    • BobbyTables2 3 days ago

      ush — “You shell” — Brilliant!

  • KronisLV 3 days ago

    SSH/HTTP/3

    That way, when you need to use sed for editing text containing it, your pattern can be more interesting:

      sed 's/SSH\/HTTP\/3/SSH over HTTP\/3/g'
    • Wicher 3 days ago

      try:

        sed 's:SSH/HTTP/3:SSH over HTTP/3:g'
      
      
      At least with GNU sed, you can use different separators to dodge the need for escaping. | works as well.
  • CharlesW 3 days ago

    h3sh | hush3 | qs | qsh | shh | shh3

    • NewJazz 3 days ago

      Anything with a 3 in it is a nightmare to type quickly. shh looks like you typo'd ssh.

      qsh might be taken by QShell

      https://en.m.wikipedia.org/wiki/Qshell

      There's a whole GitHub issue where the name was bikeshedded to death.

    • mnsc 3 days ago

      SSQ

  • piannucci 3 days ago

    How about rthym or some variation?

  • e12e a day ago

    Quickshell/qsh?

  • nine_k 3 days ago

    SSH2/3, maybe?

    It's still largely SSH2, but runs on top of HTTP/3.

  • ok123456 3 days ago

    SSH over 3: SO(3). Like the rotation group.

  • BobbyTables2 3 days ago

    RTH3EC is certainly a mouthful…

  • moralestapia 3 days ago

    Don't use it! Create your own thing and name it however you want.

    Non-doers are the bottom rung of the ladder, don't ever forget that :).

    • literalAardvark 3 days ago

      No... They're one rung up from evil and dumb doers.

staplung 3 days ago

It's cool that SSH is getting some love, but I'm a little sad they're not being a little more ambitious with regard to new features, considering it seems like they're more or less creating a new thing. Looks like they're going to support connection migration, but it would be cool (to me anyway) if they supported some of the roaming/intermittent connectivity of Mosh [1].

1: https://mosh.org/

  • d3Xt3r 3 days ago

    One of the things I really like about Mosh is the responsiveness - there's no lag when typing text, it feels like you're really working on a local shell.

    I'm guessing SSH3 doesn't do anything to improve that aspect? (although I guess QUIC will help a bit, but isn't quite the same as Mosh is it?)

  • eqvinox 3 days ago

    AIUI connection migration (as well as multipath handling) is a QUIC feature. And how would that roaming feature differ from "built-in tmux"? I'm not sure the built-in part there would really be an advantage…

    • namibj 3 days ago

      Mosh connections don't drop from the wifi merely flipping around; you get replies back to the address and port the last uplink packet came from. You can just continue typing, and a switch between Wi-Fi and mobile data (for example on a phone while sitting on public transit) shows as merely a lag spike, during which typed characters will be predictively echoed (underlined) after an initial delay that serves to avoid flickering from rapidly retracted/changed predictions during low-latency steady-state.

      Mosh is like vnc or rdp for terminal contents: natively variable frame rate and somewhat adaptive predictive local echo for reducing latency perception; think client-side cursor handling with vnc, and with rdp I'd even assume there might be capability for client-side text echo rendering.

      If you haven't tried mosh in situations with a mobile device that have you experience connection changes during usage, you don't know just how much better it is than "mere tmux over ssh".

      I honestly don't know of a more resilient protocol than mosh that's in regular usage, other than possibly link-layer 802.11n, aka "the Wi-Fi that got these 150 Mbit and those 300 Mbit and some 450 Mbit speed claims advertised onto the market", where link-layer retransmissions and adaptive negotiation of coding parameters and actively-multipath-exploiting MIMO-OFDM (and AES crypto from WPA2) combine for a setup that hides radio interference from higher level protocols, beyond the unavoidable jitter of the retransmissions and varying throughput from varying radio conditions.

      Oh, and looking at computers rather than at the congestion control schemes adjusting individual connection speeds, there'd also be BitTorrent with DHT and PEX, which only needs an infohash: with 160 bits of hash, a client seeded into the (mainline) DHT swarm can go and retrieve a (folder of) files from an infohash-specific swarm that's at least partially connected to the DHT (PEX takes care of broadening the connectivity among those that care about the specific infohash).

      In the realm of digital coding schemes that are widely used but aren't of the "transmission" variety, there's also Red Book CD audio, which starts off easy with lossless error correction, followed by perceptually effective lossy interpolation to cover severe scratches to the disc's surface.

      • eqvinox 3 days ago

        I'm not sure why you're explaining mosh (I know what it is and have used it before); I was asking what there is other than migration (= handled by QUIC) and resumption (= tmux).

        Local line editing, I guess. Forgot about that.

miduil 3 days ago

I wonder what the current plans are with the project; it's been over a year since the last release, let alone commits or other activity on GitHub. As they started the project with a paper, I guess they might be continuing to work on other associated aspects?

  • jhatemyjob 3 days ago

    Thanks for pointing that out. I'm gonna assume it's a dead project. It has only 239 commits, basically a proof of concept. Nothing to take seriously. OpenBSD on the other hand is extremely active, there's no way OpenSSH will be dethroned anytime soon.

    https://github.com/openbsd/src/commits/master/

eqvinox 3 days ago

I feel like this should really be SSH over QUIC, without the HTTP authorization mechanisms. Apart from the latter not really being used at all for users (only for API calls, Bearer auth), shell logins have a whole truckload of their own semantics. e.g. you'd be in a rather large amount of pain trying to wire PAM TOTP (or even just password+OTP) into HTTP auth…

  • 9dev 3 days ago

    I view it orthogonally: making it easier to use the single company identity we already use for every other service for SSH as well would make it so much easier to handle authorization and RBAC properly for Linux server management. Right now, we have to juggle SSH keys; I always wanted to move to SSH certificates instead, but there's not a lot of software around that yet (anyone interested in building some? Contact me).

    So having the ease of mind that when I block someone in Entra ID, they will also be locked out of all servers immediately—that would be great actually.
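
    For what it's worth, stock OpenSSH already ships the certificate primitives; it's the IdP-driven issuance workflow that's missing. A rough sketch with plain ssh-keygen (names and validity are examples):

      # on the CA: sign alice's public key, valid for one week
      ssh-keygen -s ca_key -I alice@example.com -n alice -V +1w id_ed25519.pub

      # on every server: trust the CA (in sshd_config)
      TrustedUserCAKeys /etc/ssh/user_ca.pub

    Short-lived certificates are also what would make the "block in Entra ID, locked out everywhere" property work: nothing to revoke, the cert just expires.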

    > PAM TOTP (or even just password+OTP) into HTTP auth

    But why would you? Before initiating a session, users will have to authenticate with the IdP, which probably includes MFA or Passkeys anyway. No need for PAM anymore at all.

    • eqvinox 3 days ago

      > use our single company identity we use for every single service for SSH as well

      How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?

      And let me remind you, HTTP authentication isn't a login form. It's the browser built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used are API bearer tokens and NTLM/Kerberos SSO.

      > Before initiating a session, users will have to authorise to the IdP, which probably includes MFA or Passkeys anyway. No need for PAM anymore at all.

      Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it. And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.

      Coincidentally, SSH's mechanisms are also an incredibly bad fit; password authentication is in there as a "hard" feature; it's not an interactive dialog and you can't do password+TOTP there either. For that you need keyboard-interactive auth, which (I'm not sure, but it feels like) was bolted on afterwards to fix this. Going with HTTP auth would probably repeat history quite exactly here, with something else getting bolted on the side at some point...

      • Denvercoder9 3 days ago

        > Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal?

        You start the ssh client in the terminal, it opens a browser to authenticate, and once you're logged in you go back to the terminal. The usual trick to exfiltrate the authentication token from the browser is that the ssh client runs an HTTP server on localhost to which you get redirected after authenticating.

        • 9dev 3 days ago

          That, or the SSH client opens a separate connection to the authorization server and polls for the session state until the user has completed the process; that would be the device code grant, which would solve this scenario just fine.
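
          Sketched with curl against a hypothetical IdP (per RFC 8628; the endpoint URLs and client_id are placeholders):

            # 1. the ssh client asks the IdP for a device code
            curl -d client_id=ssh-client https://idp.example.com/oauth/device
            #    -> returns device_code, a short user_code, and a verification_uri

            # 2. the user opens verification_uri in any browser and enters user_code,
            #    while the client polls the token endpoint until it gets a token
            curl -d grant_type=urn:ietf:params:oauth:grant-type:device_code \
                 -d device_code=$DEVICE_CODE -d client_id=ssh-client \
                 https://idp.example.com/oauth/token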

      • 9dev 3 days ago

        > How would that even work? Do you open your browser, log in, and then somehow transfer the session into your ssh client in a terminal? Does the browser assimilate the terminal?

        That's pretty well covered in RFC8628 and doesn't even require a browser on the same device where the SSH client is running.

        > And let me remind you, HTTP authentication isn't a login form. It's the browser built-in "HTTP username + password" form and its cousins. We're talking HTTP 401. The only places this is widely used is API bearer tokens and NTLM/Kerberos SSO.

        That depends entirely on the implementation. It could also be a redirect response which the client chooses to delegate to the user's web browser for external authentication. It's just the protocol. How the client interprets responses is entirely up to the implementation.

        > Unfortunately I need to pop your bubble, PAM also does session setup, you'd still need it.

        I don't see why, really. It might just as well be an opaque part of a newer system to reconcile remote authorization with local identity, without any interaction with PAM itself necessary at all.

        > And the other thing here is — you're solving your problem. Hard-relying on HTTP auth for this SSH successor needs to solve everyone's problem. And it's an incredibly bad fit for a whole bunch of things.

        But isn't that the nice part about HTTP auth, that it's so extensible it can solve everyone's problems just fine? At least it does so on the web, daily, for billions of users.

        • eqvinox 3 days ago

          Everything you've said is true for web authentication, and almost nothing of what you said is true for HTTP authentication.

          This is HTTP authentication: https://httpd.apache.org/docs/2.4/mod/mod_auth_basic.html

          https://github.com/francoismichel/ssh3/blob/5b4b242db02a5cfb...

          https://www.iana.org/assignments/http-authschemes/http-auths...

          Note the OAuth listed there is OAuth 1.0. Support for "native" HTTP authentication was removed in OAuth 2.0.

          This discussion is about using HTTP authentication. I specifically said HTTP authentication in the root post. If you want to do SSH + web authentication, that's a different thread.

          Rule of thumb: if you need HTML in any step of it (and that includes as part of generating a token), it's web auth, not HTTP.

          • 9dev 2 days ago

            No, that isn’t true. All parts of the OAuth dance are just means to end up with a Bearer token in the Authorization header, and I don’t see why the process of obtaining this token couldn’t involve a web browser?

            Plus, HTTP auth isn’t limited to Basic, Digest, and Bearer schemes. There’s nothing stopping an implementation from adding a new scheme if necessary and adding it to the IANA registry.

            • eqvinox a day ago

              It's quite clear that we're using the same words with different definitions. I don't have an 'official' reference/definition for them. Unless you do, we'll have to call it a day here and accept the fact that other people use the same names for different things.

              • 9dev a day ago

                Fair enough. I do think however that we both care about standards, protocols, and quality engineering, albeit with different opinions. That's got to be worth something.

    • frumplestlatz 3 days ago

      I hate that web-enshittification of SSH is considered the solution to this problem, and many other modern application-level problems.

      It's done because the web stack exists and is understood by the web/infrastructure folks, not because it represents any kind of local design optima in the non-web space.

      Using the web stack draws in a huge number of dependencies on protocols and standards that are not just very complex, but far more complex than necessary for a non-web environment, because they were designed around the constraints and priorities of the web stack. Complicated, lax, text-based formats easily parsed by javascript and safe to encode in headers/json/query parameters/etc, but a pain to implement anywhere else.

      Work-arounds (origin checks, CORS, etc) for the security issues inherent in untrusted browsers/javascript being able to make network connections/etc.

      We've been using Kerberos and/or fetching SSH keys out of an LDAP directory to solve this problem for literal decades, and it worked fine, but if that won't cut it, solving the SSH certificate tooling problem would be a MUCH lighter-weight solution here than adopting OAuth and having to tie your ssh(1) client implementation to a goddamn web browser.

      • 9dev 3 days ago

        I see your point, but I think you're missing the broader picture here. Web protocols are not just used because they are there, but because the stack is very elegantly layered and extensible, well understood and tested, and offer strong security guarantees. It's not like encryption hasn't been tacked onto HTTP retroactively, but at least that happened using proper staples instead of a bunch of duct tape and hope as with other protocols.

        All of that isn't really important, though. What makes a major point for using HTTP w/ TLS as a transport layer is the ecosystem and tooling around it. You'll get authorization protocols like OIDC, client certificate authentication, connection resumption and migration, caching, metadata fields, and much more, out of the box.

        • eqvinox 3 days ago

          > the stack is very elegantly layered and extensible

          I have to disagree pretty strongly on this one. Case in point: WebSockets. That protocol switch is "nifty" but breaks fundamental assumptions about HTTP and to this day causes headaches in some types of server deployments.

  • axiolite 3 days ago

    That has been around for years:

    https://github.com/moul/quicssh

    • eqvinox 3 days ago

      I guess it didn't get traction… whether that happens honestly feels like a fickle, random thing.

      To be fair, a go project as sole implementation (I assume it is that?) is a no-go; for example, we couldn't even deploy it on all our systems, since last I checked Go doesn't support ppc64 (BE, not ppc64le).

      I also don't see a protocol specification in there.

      [edit] actually, no, this is not SSH over QUIC. This is SSH over single bidi stream transport over QUIC, it's just a ProxyCommand. That's not how SSH over QUIC should behave, it needs to be natively QUIC so it can take advantage of the multi-stream features. And the built-in TLS.

Arch-TK 3 days ago

SSH is slow, but in my experience the primary cause of slowdown is session setup.

Be it PAM, or whatever OpenBSD is doing, the session setup kills performance, whether you're re-using the SSH connection or not, every time you start something within that connection.
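
It's easy to put numbers on: time a no-op command over a fresh connection versus a reused one (host name is a placeholder):

  # fresh connection every time
  time ssh host true

  # reusing a master connection is faster, but each run still pays
  # for a new session inside the connection
  time ssh -o ControlMaster=auto -o ControlPath=~/.ssh/cm-%h -o ControlPersist=1m host true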

Now obviously for long running stuff, that doesn't matter as much as the total overhead. But if you're doing long running ssh, you're probably using SSH for its remote terminal purposes and you don't care if it takes 0.5 seconds or 1 second before you can do anything. And if you want file transfer, we already had an HTTP/3 version of that - it's called HTTP/3.

Ansible, for example, performs really poorly in my experience precisely because of this overhead.

Which is why I ended up writing my own mini-ansible, which instead runs a remote command executor that can be used to run commands remotely without the session cost.

  • rollcat 2 days ago

    > Which is why I ended up writing my own mini-ansible which instead runs a remote command executor which can be used to run commands remotely without the session cost.

    HMU on my email. I've been working on/with this since 2016, and I'd love to discuss: <https://github.com/rollcat/judo>

  • nasretdinov 2 days ago

    To speed up Ansible it's sufficient to enable ControlMaster with a short timeout tbh

    • Arch-TK 2 days ago

      I don't believe ControlMaster solves the problem: as long as Ansible is configured to create a new session within the long-running SSH connection, it will still have the session setup overhead. I tested this myself when prototyping the replacement.

      However, it looks like pipelining (and obviously forking) could do a lot to help.
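
      For anyone else hitting this, those knobs live in ansible.cfg; a sketch (values are examples):

        [ssh_connection]
        # reuse one SSH connection per host
        ssh_args = -o ControlMaster=auto -o ControlPersist=60s
        # run each task in a single SSH operation instead of several
        pipelining = True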

      That being said, there were _many_ reasons for me to drop Ansible. Including poor non-linux host support, Yaml, the weird hoops you have to jump through to make a module, and difficulty achieving certain results given the abstraction choices.

      I think Ansible is great, it solves a problem, but my problem was very specific, Ansible was a poor fit for it, and performance was just one of many nails in the coffin for me.

vladdoster 3 days ago

Given that the last commit was 1+ year ago, does anyone know what the status of the project is?

she46BiOmUerPVj 3 days ago

So with HTTP requests you can see the domain name in the header and forward the request to the correct host. That was never something you could do with SSH; does this allow that to work?

  • projektfu 3 days ago

    "Proxy jump

    "It is often the case that some SSH hosts can only be accessed through a gateway. SSH3 allows you to perform a Proxy Jump similarly to what is proposed by OpenSSH. You can connect from A to C using B as a gateway/proxy. B and C must both be running a valid SSH3 server. This works by establishing UDP port forwarding on B to forward QUIC packets from A to C. The connection from A to C is therefore fully end-to-end and B cannot decrypt or alter the SSH3 traffic between A and C."

    More or less, maybe, but not automatically like you suggest, I think. I don't see why you couldn't configure a generic proxy to set it up, though.

  • finaard 3 days ago

    But that wasn't really an issue with SSH.

      Host *.internal.example.com
        ProxyCommand ssh -q -W %h:%p hop.internal.example.com

    in the SSH client config would make everything in that domain hop over that hop server. It's one extra connection, but with everything correctly configured that should be barely noticeable. Auth is also proxied through.

    • kbolino 3 days ago

      Is there a way to configure the jump (hop) server to reroute the request based on the value of %h and/or %p? Otherwise, it's going to be quite difficult to configure something like HTTP virtual hosts.

      EDIT: Looking at the relevant RFC [1] and the OpenSSH sshd_config manual [2], it looks like the answer is that the protocol supports having the jump server decide what to do with the host/port information, but the OpenSSH server software doesn't present any relevant configuration knobs.

      [1]: https://www.rfc-editor.org/rfc/rfc4254.html#section-7.2

      [2]: https://man7.org/linux/man-pages/man5/sshd_config.5.html

      • t-3 3 days ago

        Yes, but it's not in the sshd config, it's in the ssh config. See ssh_config(5), search for Remote to find the most relevant sections.

        • kbolino 2 days ago

          I don't follow. If it's in ssh_config, then it's client-side. Either that's the client initiating the request, in which case it's not server-controlled like HTTP virtual hosts, or else it's the "client" involved in the hop through the jump server, in which case it's going to be specific to a single username. Also the Remote* options have to do with remote port forwarding, which is in the wrong direction.

          What am I missing?

    • doubled112 3 days ago

      If you don't need to do anything complicated, ProxyJump is easier to remember.

          Host *.internal.example.com
            ProxyJump hop.internal.example.com
      • chupasaurus 3 days ago

        ProxyJump was implemented a decade ago to replace that specific string.

    • she46BiOmUerPVj 3 days ago

      I'm aware of ProxyJump and other client-side config, but I'd rather not require every single client to do this configuration.

    • unsnap_biceps 3 days ago

      Newer versions of ssh support ProxyJump

        ssh -J hop.internal.example.com foo.internal.example.com
  • billfor 3 days ago

    You can forward SSH traffic based on the domain name with SNI redirection. You can also use that with, let's say, the nginx stream module to run an SSH server and an HTTP server on the same port.
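
    A rough sketch with nginx's ssl_preread stream module (names, ports and addresses are examples; for the SNI routing to work, the client has to wrap its SSH connection in TLS, e.g. via a ProxyCommand):

      stream {
          # route on the TLS SNI when there is one; a plain SSH client never
          # sends a ClientHello, so $ssl_preread_server_name stays empty
          map $ssl_preread_server_name $backend {
              ssh.example.com  127.0.0.1:22;    # TLS-wrapped SSH, picked by SNI
              ""               127.0.0.1:22;    # no TLS at all: assume plain SSH
              default          127.0.0.1:8443;  # everything else: the web server
          }

          server {
              listen 443;
              ssl_preread on;
              proxy_pass $backend;
          }
      }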

dev_l1x_be 3 days ago

This thread is a classic example of what we care about: names. Is there technical merit to doing SSH over HTTP/3? Who cares? Who knows?

alsosprachben 3 days ago

I haven’t seen anyone comment yet on the design constraint that SSH uses a lot of separation in its multiplexing for the purpose of sandboxing/isolation. You'd want the transport to be as straightforward as possible. SSH needs to be the reliable, secure tunnel that you can use to manage your high-performance gateways. It has a lot of ways to turn things off to avoid attack surface. HTTP protocols have a lot of base requirements. Different problems.

computersuck 3 days ago

The added bonus that no one is talking about is that this is written in Go, making it by default more memory-safe and modern.

rootnod3 3 days ago

Don't get me wrong, this might likely be a fantastic tool. But something as essential as a secure connection would definitely need a good pair of eyes for audit before I'd use that for anything in production.

But it's a good start. Props for exploring a space that needs improvement but is difficult to get a foothold in.

ricardobeat 3 days ago

> the keystroke latency during a session remains unchanged

That’s a shame. Lowered latency (and persistent sessions, so you don’t pay the connection cost each time) are the best things about Mosh (https://mosh.org/).

  • nine_k 3 days ago

    Lowered perceived latency.

    • ricardobeat 3 days ago

      Mosh uses UDP in addition to optimistic updates, so there is an actual latency improvement.

      • nine_k 3 days ago

        HTTP/3 also rides on UDP, so the effect should be comparable.

        • ricardobeat 2 days ago

          > SSH3 only needs 3 round-trip times. The keystroke latency in a running session is unchanged.

          If implemented with latency in mind, yes. After a quick look at the code, it seems they are buffering data on both sides with hardcoded buffer sizes of 1500 bytes (a typical Ethernet MTU) or 30 KB, which could be negating any latency improvements.

supjeff 3 days ago

> SSH3 is a complete revisit of the SSH protocol

so, new undiscovered vulnerabilities

superkuh 3 days ago

So does this mean that you can't self-sign anything and have to involve corporate CAs for your SSH now? Because QUIC can't do anything without CA-approved TLS being involved.

  • rollcat 2 days ago

    That is my main objection as well, but perhaps it's time to also revisit TOFU.

    Remember when GitHub had to rotate its host keys? It was hitting the news far and wide, and likely broke pretty close to every single CI pipeline out there. There was little heads-up, because it's the friggin host key: you have to act now.

    It's also pretty annoying when you have to deal with that in your own infra. Even if you have a pretty good network/service map, you'll probably have silent breakage somewhere.

    I'm not saying CAs should be the future of SSH, but TOFU is certainly a problem at scale.

    • superkuh 2 days ago

      Some day very soon, everyone is going to get some uncomfortable lived experience showing just how dangerous and damaging it is to put all of our communications eggs in a handful of easily controlled corporate baskets. It's calvinball out there now, and distributed, not centralized, solutions are going to be required to route around the damage. The people who lived through it last time made the internet. And now that they're mostly retired or dead, we're removing all the robustness they built in, just to better align with employers' profit-motive use cases.

      But we don't have to do that. Not on our own time. Don't use QUIC unless you're getting paid to do it.

      • rollcat a day ago

        I agree that we do need more decentralization, but for decentralized infrastructure to scale, we need better building blocks. The internet is a much, much bigger place now. TOFU doesn't scale.

Avamander 3 days ago

I like the idea, especially if it can be proxied by a regular H3 proxy.

If it also solved connection multipath/migration and fixed the TCP-related head-of-line blocking issues, that'd already be amazing.

FergusArgyll 2 days ago

This sounds cool. Is there a way to do this with ssh?

  Similarly to your secret Google Drive documents, your SSH3 server can be 
  hidden behind a secret link and only answer to authentication attempts that
  made an HTTP request to this specific link, like the following:

  ssh3-server -bind 192.0.2.0:443 -url-path <my-long-secret>
  • ema 2 days ago

    One thing you can do is listen on a non-standard port.
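
    In sshd_config that's a one-line change (the port number is just an example):

      # /etc/ssh/sshd_config
      Port 49152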

    • FergusArgyll 2 days ago

      yeah, I'm not worried either way. Just wondering if anyone has a hack for that

burnt-resistor 2 days ago

SSH Communications Security, Inc. owns the trademark to SSH, so this rebranding/co-opting may implode suddenly with a C&D.

https://uspto.report/TM/76431998

  • switknee 18 hours ago

    Do you seriously believe that hasn't been genericized? Computer people talk about SSH all the time and usually mean OpenSSH. When they're not talking about OpenSSH, the next most likely subject is Dropbear on embedded hardware. Nobody ever talks about SSH Communications Security except in the context of the trademark registered in the previous century. It's cool that they were the first to develop the protocol, but they're not relevant today and have no right to harass newer projects that people actually use.

Meneth 3 days ago

An alternative way to hide your SSH server from portscanners is to put it inside a WireGuard VPN.
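
Something like this on the server (keys and addresses are placeholders), with sshd bound only to the tunnel address so nothing shows up on the public interface; WireGuard itself stays silent to unauthenticated probes:

  # /etc/wireguard/wg0.conf
  [Interface]
  Address = 10.8.0.1/24
  ListenPort = 51820
  PrivateKey = <server-private-key>

  [Peer]
  PublicKey = <client-public-key>
  AllowedIPs = 10.8.0.2/32

  # and in /etc/ssh/sshd_config:
  #   ListenAddress 10.8.0.1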

  • encom 3 days ago

    TFA says "port scanning attacks", but in my opinion it's not an attack. It's barely jiggling the door knob. Securing SSH isn't hard to do properly, and port scans or connection attempts aren't something anyone needs to be concerned about whatsoever.

    I am concerned, however, about the tedious trend of cramming absolutely everything into HTTP. DNS-over-HTTPS is already very dumb, and I'm quite sure SSH-over-HTTP is not something I'm going to be interested in at all.

mcny 3 days ago

I sincerely don't understand this obsession with short names and aliases. I absolutely dislike it. Names should be long and descriptive. I understand that in the past we needed short names because every character cost space and space was precious but it isn't the case anymore.

Please don't give things short, abbreviated names. Use full names for commands. Teach full names. When you present something, show full names. If this project used a full name like `remote-terminals-over-http3`, we would not be having this debate about ssh3.

Of course, end users and system administrators and even package managers/distributions are free to add abbreviations but we should be teaching people to use full names.

Prefer things like Set-Location over cd. Prefer npm install --global over npm i -g. Prefer remote-terminals-over-http3 over ssh3.

kdmtctl 2 days ago

It can actually be hidden only when paired with an existing HTTPS server, since new Certificate Transparency (CT) log entries are a pretty strong indicator that a host deserves to be "visited".

nick_travels 3 days ago

Built-in OIDC authentication - YES, love it!

nwmcsween 3 days ago

I recently looked into this and it looks like the IETF(?) RFC draft for SSH3 was abandoned? It's great this exists, but I think the standard needs to be finished as well.

  • mathfailure 2 days ago

    The project is abandoned as well.

exabrial 2 days ago

I’d sort of rather have the opposite: HTTP over SSH. There'd be key pinning, multiple streams, and many more cool things.

  • burnt-resistor 2 days ago

    Ah, you can already do this with tunneling. It just doesn't make any sense.
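
    For example (host names are placeholders):

      # forward a local port to a web server reachable from the SSH host
      ssh -L 8080:127.0.0.1:80 user@gateway

      # or run a local SOCKS proxy and point the browser at it
      ssh -D 1080 user@gateway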

littlecranky67 3 days ago

What is the use case for using OAuth or "my GitHub account" to log in to a Linux/Unix machine?

  • thethimble 3 days ago

    This is actually a common/desirable feature to permit a group of people to access an ephemeral machine (e.g. engineers accessing a k8s node, etc.). Authorizing "engineers" via OAuth is much more ergonomic and safe vs traditional unix auth which is designed more for non-transient or non-shared users.

thayne 3 days ago

It looks like the RFC draft has expired.

esseph 3 days ago

The ssh3 name feels like clout-chasing.

Velocifyer 3 days ago

Does this still support standard SSH encryption and authentication (on both client and server)?

Imustaskforhelp 3 days ago

Yes, Yes, Yes.

Firstly, I love the satirical name of tempaccount420. I was also just watching memes, and this post is literally me (Ryan Gosling).

I was thinking about this exact thing literally yesterday, being a bit delusional and hoping to create a better SSH using HTTP/3 or some other minor improvement, after I made a comment about Tor routing and linked it to things like serveo. I was thinking of enhancing that idea or something, lol.

Actually, it seems I had already starred this project but forgotten about it. This is primarily why I star GitHub projects, and it might be where I got the inspiration for HTTP/3 with SSH in the first place.

Seems like a really great project (I think).

Now, one question I have: could SSH be made modular, in the sense of splitting the transport layer apart from SSH as this project does, without too many worries?

Like, say I want to create an SSH-ish something with iroh as the transport layer: are there any libraries or resources that can do something like that? (I won't do it for iroh, but I always like mixing and matching, and I'm thinking of different ideas like SSH over Matrix/XMPP/Signal too; the possibilities could be limitless!)
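
One partial answer I do know of: OpenSSH already lets you swap the transport out via ProxyCommand, since it will run its protocol over any program that shuttles bytes on stdin/stdout, and I believe Go's golang.org/x/crypto/ssh similarly runs over anything implementing net.Conn. A sketch ("iroh-pipe" is purely hypothetical):

  # ~/.ssh/config
  Host over-iroh
    # any stdio byte pipe works here; substitute your transport of choice
    ProxyCommand iroh-pipe --connect %h:%p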

deknos 2 days ago

How complex is this for auditors to understand? I fear the ever-increasing complexity of security-relevant protocols...

aemonfly 3 days ago

HSSH (HTTP secure shell), very similar to SSH.

pvsnp 3 days ago

Shouldn’t this be called SSH over HTTP/3?

ur-whale 3 days ago

Knee-jerk reaction: if it ain't broke ...

  • Too 3 days ago

    I thought the same until I read the page and realized that ssh is quite broken if you think about it.

    With ssh everybody does TOFU or copies host fingerprints around, vs https, where setting up letsencrypt is a no-brainer and you're a weirdo if you even think about self-signed certs. Now you can do the same with ssh, but do you?

    For authentication, ssh relies on long lived keys rather than short lived tokens. Yes, I know about ssh certificates but again, it’s a hassle to set up compared to using any of a million IdP with oauth2 support. This enables central place to manage access and mandate MFA.

    Finally, you better hope your corporate IT has not blocked the SSH port as a security threat.

  • axiolite 3 days ago

    Telnet, FTP, and rlogin weren't broke either. They had their own encrypted variants before SSH came along.

    Listing all the deficiencies of something, and putting together a thing that fixes all of them, is the kind of "designed by committee" project that everyone hates. Real progress requires someone to put together a quick project, with new features they think are useful, and letting the public decide if it is useful or not.

Fnoord 3 days ago

Written in Go. Terrible name, as already discussed in various other comments, and the author acknowledges it.

The secret path (otherwise giving a 404) would need brute-force protection (at the HTTPd level?). I think it is easier to run SSH on a non-standard port on IPv6, but it remains true that anyone with network read access between the endpoints can figure it out.

What isn't explained is why one would care about 100 ms of latency during auth. I'd rather have mosh, which has resume support and works on high latency (though IIRC it won't work over Tor?). But even then, with LTE and NG, my connections over mobile have become very stable here in NL (YMMV).

  • fsiefken 3 days ago

    Yes, but in NL I am sometimes just on the edge of wifi coverage, and then mosh can be handy. It's an edge case though!

Dwedit 3 days ago

Can it tunnel arbitrary TCP ports?

frumplestlatz 3 days ago

> It also supports new authentication methods such as OAuth 2.0 and allows logging in to your servers using your Google/Microsoft/Github accounts.

Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.

zaps 3 days ago

idk if you should call it that

CommanderData 3 days ago

X.509 certificates & PKI....

Hopefully it provides a way to pin certs or at least pin certificate authorities && has PFS.

My conspiracy hat doesn't trust all the cert auths out there.

fijiaarone 3 days ago

If it doesn’t fully implement SOAP, what’s the point?

Feels like a spinning hammer meant to drive screws because somebody has never seen a drill before.

odie5533 3 days ago

Faster SSH in Rust when?

hulitu 3 days ago

> SSH3: Faster and rich secure shell using HTTP/3

Maybe they should teach project naming in CS.

Why not Windows 12 ? /s

AshamedCaptain 3 days ago

Sure, someone paranoid about their SSH server being continuously probed by bots is going to excitedly jump to a new HTTP-SSH server that will be continuously probed by even more bots for HTTP exploits (easily an order of magnitude more traffic), AND whatever newfangled "HTTP-SSH" exploits appear.