nickcw 3 days ago

I wrote both the WebDAV client (backend) for rclone and the WebDAV server. This means you can sync to and from WebDAV servers or mount them just fine. You can also expose your filesystem as a WebDAV server (or your S3 bucket or Google Drive etc).
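As a sketch of that last point, assuming rclone's `serve webdav` command (the exact flags may differ between rclone versions, and the remote name `s3:` is a placeholder for whatever you configured via `rclone config`):

```
# Expose a local directory as a WebDAV server
rclone serve webdav /srv/files --addr :8080 --user alice --pass secret

# Expose an S3 bucket (or Google Drive etc.) the same way
rclone serve webdav s3:my-bucket --addr :8081
```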

The RFCs for WebDAV are better than those for FTP, but there is still an awful lot of under-specified behaviour that servers and clients implement differently, which leads to lots of workarounds.

The protocol doesn't let you set modification times by default, which is important for a sync tool, but popular implementations like ownCloud and Nextcloud do. Likewise with hashes.
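For reference, the extension in question is (to the best of my knowledge) an extra request header on PUT carrying the epoch mtime, which ownCloud/Nextcloud-compatible servers honour - a sketch with placeholder host, path and values:

```
PUT /remote.php/dav/files/alice/notes.txt HTTP/1.1
Host: cloud.example.com
X-OC-Mtime: 1718000000
Content-Type: text/plain
Content-Length: 14

Hello, WebDAV!
```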

However the protocol is very fast, much faster than SFTP with its homebrew packetisation, as it's based on well-optimised web tech: HTTP, TLS, etc.
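Because it's all plain HTTP, you can poke at the whole protocol with stdlib tooling. A minimal Python sketch (the canned 207 Multi-Status response stands in for a real WebDAV server; the href and property values are made up):

```python
# WebDAV is plain HTTP: listing a collection is a PROPFIND request that
# returns a 207 Multi-Status XML body. A canned in-process server stands
# in for a real endpoint.
import http.client
import http.server
import threading
import xml.etree.ElementTree as ET

MULTISTATUS = b"""<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:">
  <d:response>
    <d:href>/dav/report.pdf</d:href>
    <d:propstat>
      <d:prop><d:getcontentlength>1048576</d:getcontentlength></d:prop>
      <d:status>HTTP/1.1 200 OK</d:status>
    </d:propstat>
  </d:response>
</d:multistatus>"""

class FakeDav(http.server.BaseHTTPRequestHandler):
    def do_PROPFIND(self):                 # handlers dispatch on method name
        self.send_response(207)            # Multi-Status
        self.send_header("Content-Type", "application/xml")
        self.send_header("Content-Length", str(len(MULTISTATUS)))
        self.end_headers()
        self.wfile.write(MULTISTATUS)

    def log_message(self, *args):          # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), FakeDav)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("PROPFIND", "/dav/", headers={"Depth": "1"})
resp = conn.getresponse()
hrefs = [e.text for e in ET.fromstring(resp.read()).iter("{DAV:}href")]
print(resp.status, hrefs)                  # → 207 ['/dav/report.pdf']
server.shutdown()
```

MKCOL, COPY, MOVE and LOCK follow the same request/response pattern, which is why standard web infrastructure (TLS, load balancers, caches) applies directly.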

  • apitman 2 days ago

    Thank you for rclone.

    In your opinion, is WebDAV good enough to be the protocol for exposing file systems over HTTP, or is there room for something better? I was bullish on Solid but they don't seem to be making much progress.

  • m463 3 days ago

    I wonder how you would compare it to nfs (which I believe can be TCP based, and probably encrypted)

    Not that it is a good comparison. NFS isn't super popular; macOS can do it, and I don't think Windows can. But both Windows and macOS can do WebDAV.

    • devttyeu 3 days ago

      NFS is much slower, unless perhaps you deploy it with RDMA. I believe even 4.2 doesn't really support asynchronous calls, or has some significant limitations around them - I've commonly seen a single large write of a few gigs starve all other operations, including lstat, for minutes.

      Also it's borderline impossible to tune NFS to go above 30 Gbps or so consistently, while with WebDAV it's a matter of adding a bunch more streams and you're past 200 Gbps pretty easily.

    • Saris 2 days ago

      My experience with NFS is that it's not very fast compared to SMB or WebDAV.

ctippett 3 days ago

> In fact, you're already using WebDAV and you just don't realize it.

Tailscale's drive share feature is implemented as a WebDAV share (connect to http://100.100.100.100:8080). You can also connect to Fastmail's file storage over WebDAV.

WebDAV is neat.

  • rpdillon 3 days ago

    I use it all the time to mount my CopyParty instance. Works great!

    • geek_at 2 days ago

      Copyparty is really great. Using it to share files with my clients as well as for my remote media gallery.

rapnie 2 days ago

> I should have titled this post "I hate S3".

Use it where it makes sense. And S3 does not necessarily equate to using Amazon. I like the Garage S3 project that is interesting for smaller scale uses and self-hosted systems. The project is funded with EU Horizon grants via NLnet.

https://garagehq.deuxfleurs.fr/

  • donatj 2 days ago

    I should write a related article: "I hate that the AWS S3 SDK has become a de facto web protocol"

    • OJFord 2 days ago

      You hate that there is a standard, or aspects of this one? (Or that it's a de facto standard, not clearly specified for example what's required and what just happens to be in AWS' implementation?)

      • paulddraper 2 days ago

        The S3 protocol (particularly the authentication) is unnecessarily complex and could have used existing simpler choices.

  • bcye 2 days ago

    Off-topic, but it really is awesome how many OSS projects Horizon/NLnet/NextGen Europe pops up in.

donatj 2 days ago

I wish 9p would be more generally available.

Both Windows and Mac have 9p support built in, and both have it locked away from the end user. Windows has it exclusively for communication with WSL. macOS has 9p but it's exclusively for communication with its virtualization system. It would be amazing if I could just mount 9p from the UI.

cricalix 3 days ago

"FTP is dead" - shared web hosting would like a word. Quite a few web hosts still talk about using FTP to upload websites to the hosting server. Yes, these days you can upload SSH keys and possibly use SFTP, but the docs still talk about tools like FileZilla and basic FTP.

Exhibit A: https://help.ovhcloud.com/csm/en-ie-web-hosting-ftp-storage-...

  • SoftTalker 3 days ago

    I haven't used old school FTP in probably 15 years. Surely we're not talking about using that unencrypted protocol in 2025?

    From that link:

        2. SSH connection
    
        You will need advanced knowledge and an OVHcloud web hosting plan Pro or Performance to use this access type.
    
    Well, maybe we are. I'd cross that provider off my list right there.
    • sltkr 3 days ago

      They mention that the "FTP" service includes SFTP, which is file transfer over SSH (not actually related to classic FTP), is perfectly secure, and is supported by most FTP clients, like FileZilla.

      The premium "SSH connection" you mentioned seems to refer to shell access via SSH, which is a separate thing.

      • cricalix 3 days ago

        They also support FTP without the SSH transport, and it's not FTPS either. Various IP cameras still support FTP as a way to write files out periodically; I use this to provide a "stream" from a camera (8 seconds per frame because reasons) to the world. Actual streaming via RTSP is also available, but I could never get a stable stream to a video host (like YT or Twitch) from the camera (partially because of a poor quality network connection that can't be upgraded easily). So, FTP + credentials -> walled off directory that's not under the web root -> PHP script in web root -> web browser.

    • carlosjobim 3 days ago

      FTP still works great and encryption is a non-priority for 100% of users.

      • SXX 3 days ago

        It should be priority for hosting companies though since leaked credentials and websites hosting malware is a problem.

        • Nextgrid 2 days ago

          Shared hosting companies are still exposing cPanel/WHMCS to the outside world. You don't need FTP passwords to pwn this kind of crap.

      • creatonez 3 days ago

        Transport encryption should be a huge priority for everyone. It's completely unacceptable to continue using unencrypted protocols over the public internet.

        Especially for the use case of transferring files to and from the backend of a web host. Not using it in that scenario is freely handing over control over your backend to everything in between you and the host, putting everyone at risk in the process.

        • carlosjobim 2 days ago

          I've used FTP for static sites for decades by this point. Credentials have never been leaked, transfers have never been interfered with.

          • ndsipa_pomu 2 days ago

            How would you know if the transfers were interfered with? Do you take checksums of the files you upload and then check that the files apparently uploaded are the same?

            Also, how do you know that there isn't someone performing a MITM (man in the middle) attack? FTP has no mechanism that I know of to verify that you're connecting to the server that you think you are.

            It may well be that you're not a sizeable target and that no-one is interested in hacking your site, but that's just luck and not an endorsement of unencrypted FTP.

            • carlosjobim 2 days ago

              How would you know that your neighbours aren't secretly spying together on you and interfering with your life in ways you don't notice?

              We have to put a limit to paranoia. If things work correctly for decades and there are no signs of foul play after endless real world usage, it's safe to say nobody is hacking our FTP.

              It's different if you're a bank or the KGB or the CIA.

              > It may well be that you're not a sizeable target and that no-one is interested in hacking your site, but that's just luck and not an endorsement of unencrypted FTP.

              Do you drive an armored car?

              • DANmode a day ago

                Do you drive a doorless car?

                A frame-less one?

                • carlosjobim 9 hours ago

                  Yes, and it only has two wheels.

                  • DANmode 2 hours ago

                    Don't complain when you get run over.

                    I don't even know if I'm talking about your servers or your bike at this point, ha

              • ndsipa_pomu a day ago

                Needing an armored car or protection from neighbours is specifically to guard against proximity-based exploits, and those are very unlikely threats for most people. FTP interception can easily be performed from anywhere in the world with a little bit of DNS poisoning followed by a MITM attack (or even just by altering the data in transit from a malicious wifi hotspot).

                It costs approximately zero to use encryption and protect against the FTP exploits, so why continue to use FTP? There's literally no advantage and several possible disadvantages. Just relying on not being hacked before seems a foolish stance to me.

                • carlosjobim a day ago

                  If it's so easily done, then most FTP websites would be hacked every week. But hundreds of millions of people have FTP websites and never get hacked in decades.

                  I challenge you to select any FTP website of your choosing and make a tiny change to prove that you've hacked it and let me know here.

        • otabdeveloper4 3 days ago

          Not true. Your hosting provider already has physical access to the computer you're connecting to.

          Whether or not the connection you're using is encrypted doesn't really matter because the ISP and hosting provider are legally obligated to prevent unauthorized access.

          (It's different if you're the NSA or some other state-level actor, but you're not.)

          • creatonez a day ago

            ISPs very frequently do not give a shit about the law. There are so many instances of major ISPs intercepting and modifying traffic, injecting ads, redirecting people to gambling websites, etc. It's not some freak incident involving the NSA targeting you, it happens all the time. All it takes is one bribe.

            And what happens if your ISP is compromised without their knowledge? What happens when it's a consumer device such as a router? Don't forget that nearly every TP-Link router has an active malware infection.

            It's not just one ISP that you have to trust, it's every single intermediate piece of equipment.

            Intercepting traffic is a trivial & common form of compromise, and the problem multiplies by how many different parties you are handing your data to. It is wildly irresponsible to not attempt to protect against this.

            • otabdeveloper4 3 hours ago

              Nuance is needed here.

              "You <-> ISP <-> Bank webpage" is an entirely different security threat model than "You <-> Server you rent from an ISP".

              Also, unsanctioned wiretapping is an entirely different criminal offense than stealing leaked credentials.

              You can't make blanket statements like that without understanding ISP peering agreements and how data is stored and where.

              Let's not pretend like slapping cryptography over L3 is the entirety of being secure. Often (most of the time?) cryptography doesn't even matter much for security.

              P.S. Security (prevent stealing sensitive data) and verification (making sure nothing extra is added during transfer) are different problems.

        • bigstrat2003 3 days ago

          > It's completely unacceptable to continue using unencrypted protocols over the public internet.

          That is nonsense. The reality is that most data simply is not sensitive, and there is no valid reason to encrypt it. I wouldn't use insecure FTP because credentials, but there's no good reason to encrypt your blog or something.

          • immibis 2 days ago

            Didn't we already go through this 10 years ago and then Firesheep got created and thoroughly debunked it?

          • ndsipa_pomu 2 days ago

            You're missing the opposite issue - people might not care about your data, but you might well care if their data (e.g. porn sites) was uploaded to your blog.

            It's not so much about the data, but protecting your credentials for the server.

          • lavela 3 days ago

            I'd argue that most people like knowing that what they receive is what the original server sent (and vice versa), but maybe you enjoy ads enough to prefer having your ISP put more of them on the websites you use?

            Jokes aside, HTTPS is as much about privacy as it is about reducing the chance you receive data that has been tampered with. You shouldn't avoid FTP only because of credentials, but also because of embedded malware you didn't put there yourself.

            • rixed 3 days ago

              I, for one, would like to see an ISP dedicated enough and technically able to inject ads in my FTP stream. :)

            • SoftTalker 3 days ago

              Agree but also wonder if ISPs bother with this anymore, now that almost all websites are https.

          • creatonez a day ago

            This is the usual horseshit people say about this topic when they don't understand it. It's not just about encryption, but authentication (tamper-resistance). Your blog might not contain sensitive information, but if the entire website is intercepted and becomes malware, you're in trouble.

            The bad news with FTP in particular is that only one request has to be intercepted and recorded for persistent compromise, because the credentials are just a username and password transmitted in the clear.

  • jasongill 3 days ago

    Shared hosting is dying, but not yet dead; FTP is dying with it - it's really the last big use case for FTP now that software distribution and academia have moved away from FTP. As shared hosting continues to decline in popularity, FTP is going along with it.

    Like you, I will miss the glory days of FTP :'(

    • theshackleford 3 days ago

      Shared hosting is in decline in much the same way as it was in 2015. Aka everyone involved is still making money hand over fist despite continued reports of its death right around the corner.

      • tredre3 3 days ago

        The number of shared hosting providers has drastically declined since the 2000s. I would posit that things like Squarespace/hosted WordPress took the lion's share, with the advent of $5-10 VPSes filling the remaining niches.

        The remaining hosting companies certainly still make a lot of money, a shared hosting business is basically on autopilot once set up (I used to own one, hence why I still track the market) and they can be overcommitted like crazy.

        • theshackleford 2 days ago

          > The number of shared hosting providers has drastically declined since the 2000s

          Yeah, there’s definitely been some wild consolidation. I’ve actually been involved in quite a few acquisitions myself over the last decade in one form or another.

          > (I used to own one, hence why I still track the market)

          I’m still in the industry, though in a very different segment now. I do still keep a small handful of legacy customers, folks I’ve known for years, on shared setups, but it’s more of a “you scratch my back, I’ll scratch yours” kind of thing now. It’s not really a profit play, more a mix of nostalgia and habit.

        • immibis 2 days ago

          Source on the number of providers declining?

          • nativeit a day ago

            Probably worth noting also that declining number of providers does not equal a declining number of customers. I know every company I engaged with ~15-years ago has been acquired at least once.

      • jasongill 3 days ago

        No, not at all the case. There has been continued consolidation of the shared hosting space, plus consumer interest in "a website" has declined sharply now that small businesses just feel they need an Instagram to get started. Combine that with site builders eating at shared hosting's market share, and it's not looking good for the future of the "old school" shared hosting industry that you are thinking of.

        • SoftTalker 3 days ago

          Seems short sighted, a lot of older people and privacy conscious people of all ages avoid social media. But I guess if they are sustaining a business on only Instagram, good for them.

        • theshackleford 2 days ago

          > There has been continued consolidation of the shared hosting space

          That’s been happening, at least from my own memory, since at least the mid-2000s.

          > plus consumer interest in "a website" has declined sharply now that small businesses just feel that they need an instagram to get started.

          Ah yes, the 2020s version of “just start a Facebook page.” The more things change, the more they stay the same I suppose.

          > Combine that with site builders eating at shared hosting's market share

          I remember hearing that for the first time in I wanna say...2006? It sure did cause a panic for at least a little while.

          > and it's not looking good for the future of the "old school" shared hosting industry that you are thinking of.

          Yes, I've heard this one more times than I can count too.

          The funny thing is, I’ve been hearing this same “shared hosting is dying” narrative for nearly two decades now. Yet, in that time, I’ve seen multiple companies launch, thrive, and sell for multi-million dollar exits.

          But sure, this time it’s definitely the death knell. Meanwhile, I assure you, the bigger players in the space are still making money hand over fist.

          https://www.mordorintelligence.com/industry-reports/web-host...

          > By hosting type, shared hosting led with 37.5% of the web hosting market share in 2024

          • jasongill 2 days ago

            I was in the space from the late '90s, acquired ~30 brands and was the largest private consolidator of shared hosting, and sold to a Fortune 500 in 2015. Sounds like you had a similar experience to mine. There's no way you can deny that the glory days of shared hosting are over - while there is still a little money to be made by setting up a VPS with cPanel, and money to be made if you are WebPros or Newfold, the market is contracting and has been for years due to the factors I listed. The Cheval list used to be the hottest marketplace on the planet and now is just a shell of its former self, unfortunately.

    • valiant55 9 hours ago

      I think everyone is underestimating how much B2B file exchange happens over SFTP/FTPS. I'm in healthcare and my system moves thousands of files up and down from over 100 unique hosts daily.

    • bawolff 3 days ago

      I think the true death of FTP was Amazon S3 deciding to use its own protocol instead of FTP, as S3 fills basically the same niche.

      • HumanOstrich 2 days ago

        FTP does not even come close to supporting the use cases of S3, especially now.

        • bawolff 2 days ago

          Yeah, but the average S3 user doesn't care about most of those most of the time.

          Just like how there are use cases FTP supports that S3 doesn't.

  • waste_monk 2 days ago

    Also worth noting that FTPS (FTP over TLS) exists and obviates the fuss around SSH TOFU and key management etc. Especially given we're in the era of free certificates via Let's Encrypt, this is a great option.

    The main downside is that people will sometimes assume you mean SFTP (not having heard of FTPS, or not realising they are different), and then get upset when it doesn't work as they expect. However, good tooling, e.g. FileZilla, will support both.

mid1221213 3 days ago

On the same topic, and because I too believe that WebDAV is not dead, far from it, I recently published a WIP, part of a broader project: an nginx module that acts as a WebDAV file server and is compatible with Nextcloud sync clients, desktop & Android. It can be used with GNOME Online Accounts too, as well as with Nautilus (and probably others), as a WebDAV server.

Have a look there: https://codeberg.org/lunae/dav-next

/!\ it's a WIP, thus not packaged anywhere yet, no binary release, etc… but all feedback welcome

  • apitman 2 days ago

    If you have to call out compatibility with specific clients doesn't that indicate pretty serious issues with the spec?

1123581321 3 days ago

I built a simple WebDAV server with Sabre to sync Devonthink databases. WebDAV was the only option that synced between users of multiple iCloud accounts, worked anywhere in the world and didn’t require a Dropbox subscription. It’s a faster sync than CloudKit. I don’t have other WebDAV use cases but I expect this one to run without much maintenance or cost for years. Useful protocol.

  • walterbell 3 days ago

    DevonThink's WebDAV sync on iOS has been reliable, fast, maintained and non-subscription, and the app includes a web scraper. Good for saving LLM chatbot markdown.

sunaookami 3 days ago

Recently set up WebDAV for my Paperless-NGX instance so my scanner can directly upload scans to Paperless. I wish Caddy would support WebDAV out of the box, had to use this extension: https://github.com/mholt/caddy-webdav
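For anyone else setting this up, a minimal sketch of what that looks like (hypothetical hostname and paths; the `webdav` directive comes from the mholt/caddy-webdav plugin, so you need a Caddy build that includes it, e.g. via xcaddy, and the directive ordering line is needed because it's not a standard directive):

```
{
    order webdav last
}

dav.example.com {
    basic_auth {
        scanner <bcrypt-hash-from-caddy-hash-password>
    }
    webdav {
        root /srv/scans
    }
}
```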

  • xattt 3 days ago

    Which scanner, if you don’t mind me asking? I’ve got a decade+ old ix500 that had cloud support but not local SMB.

    • sunaookami 2 days ago

      EPSON WorkForce ES-580W. Got it from eBay with "damaged packaging" (not really) from the Epson Outlet Store in my country. With a discount code I only paid 324 €. There is also an official promotion by Epson (in Europe only, maybe?) where you get 75 € cashback for this scanner, so effectively 249 €, which is a VERY good price. It also supports SMB, but I'm running Paperless on my VPS, hence I used WebDAV (if you do this: the scanner will do a GET request to the WebDAV URL first, which must be answered with a 200 OK or it will never try WebDAV).

      I debated between this scanner and the Brother ADS-1800W, but the Brother has a slow UI and no thingy where the paper lands when it's done scanning (not sure what it's called in English).

      • xattt 2 days ago

        Thank you!

sylens 3 days ago

Author seems to conflate S3 API with S3 itself. Most vendors are now including S3 API compatibility into their product because people are so used to using that as a model

  • dangus 3 days ago

    I was about to make a very similar comment.

    There really is nothing wrong with the S3 API and the complaints about Minio and S3 are basically irrelevant. It’s an API that dozens of solutions implement.

  • notpushkin 3 days ago

    They do mention S3-compatible servers later in the post. It really seems to be about the protocol itself.

  • PunchyHamster 3 days ago

    More like attempt at S3 API compatibility...

dabinat 2 days ago

I feel like WebDAV will have staying power for a simple reason: it’s easy to understand and implement. My company has a cloud platform for people to share files and I am working on a feature to allow it to work as a drive through WebDAV. We may support other protocols later on but WebDAV made the most sense to start off with because we already have all the infrastructure we need to deliver files over HTTP. The amount of additional complexity to support WebDAV was near-zero and the amount to support other protocols would be a lot more.

  • chrisandchris 2 days ago

    Fully agree. It's boring technology (tm), and that's usually the way to go (instead of relying on the next big thing). Also, it's an open standard.

netsharc 3 days ago

One interesting use of WebDAV is SysInternals (which is a collection of tools for Windows), it's accessible from Windows Explorer via WebDAV by going to \\live.sysinternals.com\Tools

  • gruez 3 days ago

    Isn't that SMB, not webdav?

    • netsharc 3 days ago

      I guess the "\\$HOSTNAME\$DIR" URL syntax in Windows Explorer also works for WebDAV. Is it safe to have SMB over WAN?

      I just tried https://live.sysinternals.com/Tools in Windows Explorer, and it also lists the files, identical to how it would show the contents of any directory.

      Even running "dir \\live.sysinternals.com\Tools", or starting a program from the command prompt like "\\live.sysinternals.com\Tools\tcpview64" works.

    • MrDrMcCoy 3 days ago

      IIRC, Windows for a while had native WebDAV support in Explorer, but setting it up was very non-obvious. Not sure if it still does, since I've moved fully to Linux.

cyberpunk 3 days ago

I use WebDAV for serving media over Tailscale to Infuse when I'm on the move. SMB did not play nicely at all and NFS is not supported.

The Go stdlib (golang.org/x/net/webdav) has quite a good one that just works with only a small bit of wrapping in a main() etc.

Although I've since written one in Elixir that seems to handle my traffic better.

(you can also mount them on macos and browse with finder / shell etc which is pretty nice)

  • moritz 2 days ago

    Do you happen to have the source code open somewhere? I was just looking into WebDAV via Elixir.

Tractor8626 4 days ago

If you need SFTP independent of Unix auth - there is SFTPGo.

SFTPGo also supports WebDAV, but for the use cases in the article SFTP is just better.

williamjackson 3 days ago

I was surprised, then not really surprised, when I found out this week that Tailscale's native file sharing feature, Taildrive, is implemented as a WebDAV server in the network.

https://tailscale.com/kb/1369/taildrive

  • nine_k 3 days ago

    What else would you expect, just out of curiosity? SMB? NFS? SSHFS?

    • worik 3 days ago

      A proprietary binary patented protocol...

      • PunchyHamster 3 days ago

        and do what, implement virtual filesystem driver for every OS ?

        • HumanOstrich 2 days ago

          Only if adding that complexity locks in more subscribers for premium features and support.

warabe 3 days ago

Just like the author, I use WebDAV for Joplin, also Zotero. Just love them so much.

We need to keep using open protocols such as WebDAV instead of depending on proprietary APIs like the S3 API.

ycui1986 3 days ago

The Windows built-in WebDAV client in Explorer is embarrassingly slow. Pretty much unusable for anything serious.

  • EvanAnderson 3 days ago

    For sure. I tried to set up a collaboration environment for a Customer years ago using WebDAV over SSL in lieu of Dropbox. Everything worked great (authenticating to Active Directory, NTFS ACLs, IP address restrictions in IIS policy where necessary, auditing access in the Windows security log and IIS logs, no client to install), but the Windows client experience was hideously slow. People hated it for that and it got no traction.

  • nine_k 3 days ago

    OTOH, gio-based WebDAV access built into Nautilus and Thunar is something I use daily, and it works quite well for a FUSE-based filesystem.

    Unlike NFS or SMB, WebDAV mounts do not get stuck for a minute when the connection becomes unstable.

  • Tepix 2 days ago

    In my experience, WebDAV has always been slow, no matter which platform.

    Can WebDAV be made fast?

throwaway87502 3 days ago

> While writing this article I came across an interesting project under development, Altmount. This would allow you to "mount" published content on Usenet and access it directly without downloading it... super interesting considering I can get multi-gigabit access to Usenet pretty easily.

There is also NzbDav for this too, https://github.com/nzbdav-dev/nzbdav

CyberDildonics 2 days ago

This seems like another article where they never define the acronym they use and expect everyone to have seen it already.

WebDAV (Web Distributed Authoring and Versioning) is a set of extensions to the Hypertext Transfer Protocol (HTTP), which allows user agents to collaboratively author contents directly in an HTTP web server by providing facilities for concurrency control and namespace operations, thus allowing the Web to be viewed as a writeable, collaborative medium and not just a read-only medium.[1] WebDAV is defined in RFC 4918 by a working group of the Internet Engineering Task Force (IETF).

https://en.wikipedia.org/wiki/WebDAV

indigodaddy 4 days ago

Copyparty has webdav and smb support (among others), which makes it a good candidate to combine with a Kodi client perhaps?

93n 2 days ago

Native WebDAV mount support in Android would be handy. I use davx5 (https://github.com/bitfireAT/davx5-ose), but accessing files is a bit clunky.

I like WebDAV because it 'just works' with the mTLS infra I had already set up on my homelab for access from the outside world.

I use sftpgo (https://sftpgo.com/) on the server side.

p0w3n3d a day ago

My experience with WebDAV is that it's painfully slow with multiple files. I wonder if HTTP/3 would help

apitman 2 days ago

IMO the achilles heel of "Web"DAV is that there doesn't seem to be an easy way to add it to web apps, which precludes using it as a remote file system a la Google Drive. I'm assuming this is because browsers won't let you make PROPFIND et al requests, but I haven't actually tested that.

garganzol 2 days ago

WebDAV is not bad, but could be better. One of its dark corners is file upload. By the standard, it uses a good old single-request upload (a plain HTTP PUT) where the whole file goes up as one blob. If the file is large enough, good luck uploading it to an HTTP server that caps request sizes.

A standard way of doing progressive chunked uploads would be a solid improvement. However, older protocols like this suffer from a lack of stewardship, which is a shame.
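To make the asymmetry concrete: HTTP already standardises resumable reads (a Range GET), but the write side is one PUT. A Python sketch against a toy in-process server (not a real WebDAV implementation, just enough to show the two requests):

```python
# Toy server: stores PUT bodies in memory and honours Range on GET.
# Illustrates that reads can be partial/resumable while the standard
# upload is one all-or-nothing PUT of the whole blob.
import http.client
import http.server
import threading

STORE = {}

class TinyDav(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        # The entire file arrives as a single request body.
        length = int(self.headers["Content-Length"])
        STORE[self.path] = self.rfile.read(length)
        self.send_response(201)                      # Created
        self.send_header("Content-Length", "0")
        self.end_headers()

    def do_GET(self):
        data = STORE[self.path]
        rng = self.headers.get("Range")              # e.g. "bytes=3-6"
        if rng:
            lo, hi = map(int, rng.split("=")[1].split("-"))
            body, status = data[lo:hi + 1], 206      # Partial Content
        else:
            body, status = data, 200
        self.send_response(status)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                    # silence request logging
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), TinyDav)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("PUT", "/file.bin", body=b"0123456789") # one shot, no resume
put_resp = conn.getresponse()
put_resp.read()

conn.request("GET", "/file.bin", headers={"Range": "bytes=3-6"})
get_resp = conn.getresponse()
chunk = get_resp.read()
print(put_resp.status, get_resp.status, chunk)       # → 201 206 b'3456'
server.shutdown()
```

If the PUT dies halfway, you start over from byte zero; there is no standard write-side counterpart to Range, which is exactly the gap above.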

rubatuga 4 days ago

No random writes is the nail in the coffin for me

aborsy 3 days ago

A lot of apps support WebDAV. It seems to be better supported than SFTP?

You can run a WebDAV server using caddy easily.

cyberax 3 days ago

I'm using WebDAV to sync files from my phone to my NAS. There weren't any good alternatives, really. SMB is a non-starter on the public Internet (SMB-over-QUIC might change that eventually), SFTP is even crustier, rsync requires SSH to work.

What else?

  • MrDrMcCoy 3 days ago

    Syncthing is pretty nice for that sort of thing.

    • PunchyHamster 3 days ago

      Syncthing is great, but it does file sync, not file sharing, so it's not ideal when you, say, want to share a big media library with your laptop but don't necessarily want to load everything onto it.

      • MrDrMcCoy 3 days ago

        That moves the goalpost. The user I was replying to wanted sync and didn't seem to be using other functionality like that.

    • cyberax 3 days ago

      I have just tried to run their unofficial apps, but I couldn't make them work.

tealpod 3 days ago

FTP is not dead. A huge percentage of wind turbines use FTP for data transfer.

mastax 3 days ago

Relatedly, is there a good way to expose a directory of files via the S3 API? I could only find alpha quality things like rclone serve s3 and things like garage which have their own on disk format rather than regular files.

  • elitepleb 3 days ago

    consider versitygw or s3proxy

latchkey 3 days ago

It has been 16 years since I started this webdav client for Java:

https://github.com/lookfirst/sardine

Still going.

  • elric 2 days ago

    Sardine is great. I recently used it to automate some backups from a webdav share. No complaints whatsoever :-)

    • latchkey 2 days ago

      Thanks! API design is a bit out of date, but at least it still works.

adriatp 3 days ago

I feel the pain when you refer to MinIO. I ended up using a pre-15 version in order to keep all the previous features, but that sucks. I will try this.

jFriedensreich 2 days ago

If operating systems had just put a bit more time into the clients and not stopped all work around 2010, WebDAV could have been much more, covering many use cases of FUSE. Unfortunately, the macOS WebDAV client and Finder's outdated architecture in particular make this just too painful.

jeroenhd 3 days ago

I wonder how much better WebDAV must have gotten with newer versions of the HTTP stack. I only used it briefly in HTTP mode but found the clients to all be rather slow, barely using tricks like pipelining to make requests go a little faster.

It's a shame the protocol never found much use in commercial services. There would be little need for official clients running in compatibility layers, like you see with tools like Google Drive and OneDrive on Linux. Frankly, except for the lack of standardised random writes, the protocol is still one of the better solutions in this space.

I have no idea how S3 managed to win as the "standard" API for so many file storage solutions. WebDAV has always been right there.

PunchyHamster 3 days ago

> FTP is dead (yay),

Hahahaha, haha, ha, no. And probably (still) more used than WebDAV.

pls send help

  • spragl 3 days ago

    Yeah, that must have been wishful thinking.

    FTP is such a clunky protocol, it is peculiar it has had such staying power.

warpspin 2 days ago

> Lots of tools support it: [...] Windows Explorer (Map Network Drive, Connect to a Web site...)

Not sure he ever tried supporting that. We once did and it was a nightmare. People couldn't handle it at all even with screenshotted manuals.

My personal experience says that even the dumbest user is able to use FileZilla successfully, and therefore SFTP, while people just don't get the built-in WebDAV support of the OSes.

I also vaguely recall that WebDAV in Windows had quite a bit of randomly appearing problems and performance issues. But this was all a while ago, might have improved since then.

Velocifyer 3 days ago

JMAP will eventually replace WebDAV.

  • ezst 2 days ago

    That's some wishful thinking. I understand the case for JMAP over IMAP, and I understand how "it makes sense" to NIH the rest of Cal/CardDAV, but I'm not sure what the sales pitch for file transfer is, especially when the ecosystem is pretty much nonexistent.

panny 3 days ago

>It's broadly available as you can see

And yet, I can never seem to find a decent Java lib for WebDAV/CalDAV/CardDAV. Every time I look for one, I end up wanting to write my own instead. Then it just seems like the juice isn't worth the squeeze.

ksk23 2 days ago

Beautiful.

sublinear 4 days ago

This blog post didn't convince me. I must assume the default for most web devs in 2025 is hosting on a Linux VM and/or mounting the static files into a Docker container. SFTP is already there and Apache is too.

The last time I had to deal with WebDAV was for a crusty old CMS nobody liked, many years ago. The support on dev machines running Windows and Mac was a bit sketchy and would randomly skip files during bulk uploads. Linux support was a little better with davfs2, but then VSCode would sometimes refuse to recognize the mount without restarting.

None of that workflow made sense. It was hard to know what version of a file was uploaded and doing any manual file management just seemed silly. The project later moved to GitLab. A CI job now simply SFTPs files upon merge into the main branch. This is a much more familiar workflow to most web devs today and there's no weird jank.