antoniomika 6 months ago

This has been replaced with a permissions feature that still provides both delete and overwrite protections. The difference is that the underlying store needs to implement it, rather than running a server that understands the permission differences. You can read more about this change here: https://github.com/borgbackup/borg/issues/8823#issuecomment-...

  • bayindirh 6 months ago

    This comment needs to be pinned, alongside what the developers say [0], since the change is widely misunderstood.

    > The "no-delete" permission disallows deleting objects as well as overwriting existing objects.

    [0]: https://github.com/borgbackup/borg/pull/8798#issuecomment-29...

    • zargon 6 months ago

      Isn't this "no-delete permission" just a made-up mode for testing the borg storage layer while simulating a lack of permissions for deleting and overwriting? In actual deployment, whatever backing store is used must have the access control primitives to implement such a restriction. I don't know how to do this on a posix filesystem, for example. Gemini gave me a convoluted solution that requires the client to change permissions after creating the files.

      • ThomasWaldmann 6 months ago

        At first it was implemented to easily test permission-restricted storages (we can't easily test on all sorts of cloud storage).

        It was implemented for "file:" (which is also used for "ssh://" repos), and there are automated tests for how borg behaves on such permission-restricted repos.

        After the last beta I also added CLI flags to "borg serve", so it can now also be used via .ssh/authorized_keys more easily.

        So it can now be used for practical applications, not just for testing.

        Not for production yet though; borg2 is still in beta.

        Help with testing is very welcome!

      • antoniomika 6 months ago

        Currently, you can either provide the `BORG_REPO_PERMISSIONS` env var to borg [0] or `--permissions` flag to `borg serve` [1]. You can then enforce this as part of your `authorized_keys` command, for example.

        [0] https://github.com/borgbackup/borg/blob/3cf8d7cf2f36246ded75...

        [1] https://github.com/borgbackup/borg/blob/3cf8d7cf2f36246ded75...
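        For example (a hypothetical sketch: the repo path and key material are placeholders, and the `--permissions` flag currently only exists in the master branch / future borg2 beta), a restricted entry in `~/.ssh/authorized_keys` might look like:

```
# Force every connection from this key into a no-delete "borg serve".
# The key and comment are placeholders.
restrict,command="borg serve --permissions=no-delete" ssh-ed25519 AAAA... client@backup
```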

        • zargon 6 months ago

          Ah, I was searching borgstore for no-delete, but it gets exploded into itemized permissions in borg. Documentation seems to be non-existent, as the only mention seems to be the changelog where it suggests this only exists for testing. But I suppose it's not released yet.

  • formerly_proven 6 months ago

    The old append-only mode was a hack that wasn't very useful in practice, because there were no tools to dissect changes in a repository, and the data structures wouldn't have supported that anyway.

    Making e.g. snapshots on the backing storage was always the better approach.

  • jaegerma 6 months ago

    Thanks for that link. That issue somehow didn't come up when I researched the removal of append-only. The only hint I had was the vague "remove remainders of append-only and quota support" in the change log without any further information.

homebrewer 6 months ago

For anyone looking to migrate off borg because of this, append-only is available in restic, but only with the rest-server backend:

https://github.com/restic/restic

https://github.com/restic/rest-server

which has to be started with --append-only. I use this systemd unit:

  [Unit]
  After=network-online.target

  [Install]
  WantedBy=multi-user.target

  [Service]
  ExecStart=/usr/local/bin/rest-server --path /mnt/backups --append-only --private-repos
  WorkingDirectory=/mnt/backups
  User=restic
  Restart=on-failure
  ProtectSystem=strict
  ReadWritePaths=/mnt/backups

I also use nginx with HTTPS + HTTP authentication in front of it, with a separate username/password combination for each server. This makes rest-server completely inaccessible to the rest of the internet, and you don't have to trust it to be properly protected against being hammered by malicious traffic.
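As a sketch, the nginx side might look something like this (the server name, certificate paths, and htpasswd files are placeholders; rest-server listens on 127.0.0.1:8000 by default):

```nginx
server {
    listen 443 ssl;
    server_name backup.example.com;              # placeholder
    ssl_certificate     /etc/nginx/tls/fullchain.pem;
    ssl_certificate_key /etc/nginx/tls/privkey.pem;

    # One location block and htpasswd file per client machine,
    # so each server gets its own username/password pair.
    location /server1/ {
        auth_basic           "restic backups";
        auth_basic_user_file /etc/nginx/htpasswd-server1;
        proxy_pass           http://127.0.0.1:8000/server1/;
    }
}
```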

Been using this for about five years, it saved my bacon a few times, no problems so far.

  • rsync 6 months ago

    You can achieve append-only without exposing a rest server provided that 'rclone' can be called on the remote end:

      rclone serve restic --stdio
    
    You add something like this to ~/.ssh/authorized_keys:

      restrict,command="rclone serve restic --stdio --append-only backups/my-restic-repo" ssh-rsa ...
    
    ... and then run a command like this:

      ssh user@rsync.net rclone serve restic --stdio ...
    
    We just started deploying this on rsync.net servers - which is to say, we maintain an arguments allowlist for every binary you can execute here and we never allowed 'rclone serve' ... but now we do, IFF it is accompanied by --stdio.
    • zacwest 6 months ago

      You then use `restic` telling it to use rclone like...

          restic ... --option=rclone.program="ssh -i <identity> user@host" --repo=rclone:
      
      which has it use the rclone backend over ssh.

      I've been doing this on rsync.net since at least February; works great!

  • snickerdoodle12 6 months ago

    I use restic+rclone+b2 with an api key that can't hard delete files. This gives me dirt-cheap effectively append-only object storage with automatic deletion of soft deleted backups after X days.

    • fl0id 6 months ago

      Which is exactly what the borg developers suggest in their issue.

      • snickerdoodle12 6 months ago

        It's a good strategy and much cheaper than having a VM that can run the server

  • nine_k 6 months ago

    While at it, what do you think about Kopia [1]? It seems to use architectural decisions similar to Restic and Borg, but appears to be much faster in certain cases by exploiting parallel access. It's v0.20 though.

    [1]: https://kopia.io/docs/

    • shelled 6 months ago

      By now I must have tried Kopia (via KopiaUI) at least a dozen times, and every single time I've been unable to figure out at a glance how it works. The rough idea I have is that you pick a folder and pick where to back it up / snapshot it to. But there is no easy, intuitive way to set up a local backup of a set of folders, with exclusions, inclusions, and a config where you can decide the frequency, retention, etc. I tried hard to find one, but nothing. I think it's a deliberate design choice on their part, and that's fine; but at least from a usability perspective it's nothing like backup tools such as restic/backrest or borg/vorta.

      • nine_k 6 months ago

        Having dug into Kopia config today, I realized that much of this is controlled by a policy, or several policies. The fun part is that the policies live in the repository, there's no obvious way to have it as local files. It makes sense, I suppose, if you have a bunch of similar machines (e.g. workstations) that must follow the same policy, and put their backups to the same repository.

  • JeremyNT 6 months ago

    I'm curious if there is any reason to use Borg these days.

    I had the impression that in the beginning Borg started as a fork of Restic to add missing features, but Restic was the more mature project.

    Is there still anything Borg has that Restic lacks?

    • lutoma 6 months ago

      Borg is a fork of Attic, not restic. Restic is also written in Go while Attic/Borg is in Python.

      For me the reason to use Borg over Restic has always been that it was _much_ faster due to using a server-side daemon that could filter/compress things. The downside being you can’t use something like S3 as storage (but services like Borgbase or Hetzner Storage Boxes support Borg).

      That’s probably changed with the server backend, but with the same downside.

      • KingOfCoders 6 months ago

        We used borg with the very nice people at rsync.net in two startups.

    • remram 6 months ago

      My number one problem with Restic is the memory usage. On some of my workloads, Restic consumes dozens of gigabytes of memory during backup.

      I am very much in the market for a replacement (looking at Rustic for example).

      • nadir_ishiguro 6 months ago

        That's very interesting; I've never noticed anything like that. What kind of workloads are you seeing this with?

  • twhb 6 months ago

    restic’s rest-server append-only mode unfortunately doesn’t prevent data deletion under normal usage. More here: https://restic.readthedocs.io/en/stable/060_forget.html#secu.... Their workaround is pretty weak, in my opinion: a compromised client can still delete all your historic backups, and you’re on a tight timeline to notice and fix it before they can delete the rest of your backups, too.
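    To illustrate the concern (a toy sketch, not restic's actual prune logic): with a simple "keep the last 7 snapshots" policy, an append-only attacker only has to upload enough garbage snapshots before the next scheduled prune runs:

```python
from datetime import datetime, timedelta

def keep_last(snapshots, n):
    """Toy 'keep the most recent n snapshots' retention policy."""
    return sorted(snapshots)[-n:]

# A week of legitimate daily backups...
legit = [datetime(2025, 6, d) for d in range(1, 8)]
# ...then a compromised append-only client uploads seven garbage snapshots.
garbage = [datetime(2025, 6, 8) + timedelta(hours=h) for h in range(7)]

kept = keep_last(legit + garbage, n=7)
# The next prune by the (non-compromised) admin drops every real backup.
assert kept == sorted(garbage)
```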

    • unsnap_biceps 6 months ago

      That article says that a compromised client cannot delete your historic backups; however, a compromised client could create enough garbage backups that an automatic job run by a non-compromised administration account would delete them due to retention policies.

      I'm not sure what exactly you expect that would be different?

  • cvalka 6 months ago

    Use rustic instead of restic!

gausswho 6 months ago

My current approach is restic, but I'd prefer to have asymmetric passwords, essentially the backup machine only having write access (while maintaining deduplication). This way if the backup machine were compromised, and therefore the password it needs to write, the backup repo itself would still be secure since it would use a different password for reading.

Is this what append-only achieved for Borg?

dblitt 6 months ago

It seems the suggested solution is to use server credentials that lack delete permissions (and use credentials that have delete for compacting the repo), but does that protect against a compromised client simply overwriting files without deleting them?

  • ThomasWaldmann 6 months ago

    no-delete disallows any kind of information deletion; that includes object deletion and object overwriting.

  • throwaway984393 6 months ago

    No. Delete and overwrite are different. You need overwrite protection in addition to delete protection. The solution will vary depending on the storage system and the use case. (The comment in the PR is not an exhaustive description of potential solutions)

  • qeternity 6 months ago

    Append-only would imply yes. There is no overwriting in append-only. There is only truncate and append.

    • mosselman 6 months ago

      You have misread I think.

      There used to be append-only, they've removed it and suggest using a credential that has no 'delete' permission. The question asked here is whether this would protect against data being overwritten instead of deleted.

ThomasWaldmann 6 months ago

borgbackup developer here:

TL;DR: don't panic, all is good. :-)

Longer version:

- borg 1.x style “append-only” was removed, because it heavily depended on how the 1.x storage worked (it was a transactional log, always only appending PUT/DEL/COMMIT entries to segment files - except when compacting segments [then it also deleted segment files after appending their non-deleted entries to new segments])

- borg 2 storage (based on borgstore) does not work like that anymore (for good reasons), there is no “appending”. thus “--append-only” would be a misnomer.

- master branch (future borg 2 beta) has “borg serve --permissions=…” (and BORG_PERMISSIONS env var) so one can restrict permissions: “all”, “no-delete”, “write-only”, “read-only” offer more functionality than “append only” ever had. “no-delete” disallows data deleting as well as data overwriting.

- restricting permissions in a store on a server requires server/store side enforced permission control. “borg serve” implements that (using the borgstore posixfs backend), but it could be also implemented by configuring a different kind of store accordingly (like some cloud storage). it’s hard to test that with all sorts of cloud storage providers though, so implementing it in the much easier to automatically test posixfs was also a motivation to add the permissions code.
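To make the semantics concrete, here is a toy sketch (not borgstore's actual API) of why "no-delete" has to reject overwrites as well as deletes: overwriting an object silently destroys its previous contents, which is equivalent to deleting it.

```python
class PermissionDenied(Exception):
    pass

class RestrictedStore:
    """Toy key/value store with borg2-style permission modes.
    Purely illustrative; borgstore's real interface differs."""

    def __init__(self, permissions="all"):
        self.permissions = permissions
        self.objects = {}

    def put(self, key, value):
        if self.permissions == "read-only":
            raise PermissionDenied("store is read-only")
        if self.permissions == "no-delete" and key in self.objects:
            # An overwrite would destroy the old object, so no-delete
            # must forbid it just like an explicit delete.
            raise PermissionDenied("no-delete forbids overwriting")
        self.objects[key] = value

    def delete(self, key):
        if self.permissions in ("no-delete", "read-only"):
            raise PermissionDenied("deleting is not permitted")
        del self.objects[key]

store = RestrictedStore(permissions="no-delete")
store.put("archive/1", b"good data")   # first write succeeds
try:
    store.put("archive/1", b"ransomware garbage")
except PermissionDenied:
    pass                               # overwrite rejected
assert store.objects["archive/1"] == b"good data"
```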

Links:

- docs: https://github.com/borgbackup/borg/pull/8906/files

- code: https://github.com/borgbackup/borg/pull/8893/files

- code: https://github.com/borgbackup/borg/pull/8844/files

- code: https://github.com/borgbackup/borg/pull/8837/files

Please upvote, so people don't get confused.

mrtesthah 6 months ago

FYI for those using restic, you can use rest-server to achieve a server-side-enforced append-only setup. The purpose is to protect against ransomware and other malicious client-side operations.

aborsy 6 months ago

Borg2 has been in beta testing for a very long time.

Does anyone know when it will come out of beta?

  • ThomasWaldmann 6 months ago

    The usual answer: "when it is ready".

    For low-latency storage (like file: and maybe ssh:) it already works quite nicely, but there might be a lot to do still for high-latency storage (like cloud stuff).

    • dawnerd 6 months ago

      It’s a shame, because the current version has bugs that v2 supposedly fixed a while ago.

      • ThomasWaldmann 6 months ago

        Bugs?

        I don't know about any show-stoppers in borg 1.x.

        Design limitations?

        Yes, there are some, that's why borg2 will be quite different. But these are no easy or small changes.

        Also, borg2 will be a breaking release (offering borg transfer to copy existing archives from borg 1.x repos). It takes long because we try to put all breaking changes into borg2, so you won't have to transfer again too soon after borg2 release.

        • dawnerd 6 months ago

          Nothing really breaking, just annoying; but I suppose you wouldn't even consider it a bug then.

          For example, the file changed warning. That one ended up breaking my backup cron. I wasn't being notified (my fault there) and the rsync wasn't running. Caught it months later when I tried to run a test restore.

          I'm fine with it returning a warning but people were asking for an option to suppress it and you said in 2022 it would be fixed in v2.

          That's why it's disappointing. Appreciate the work though! Borg has definitely been a life saver and has already recovered a failed production server.

3036e4 6 months ago

I use rsync.net for borg backups. They create daily ZFS snapshots that are read-only to the user, specifically for ransomware protection.

But this was a good reminder I should probably figure out some good way to monitor my borg repo for unintended changes. Having snapshots to roll back to is only useful if a problem is detected in time.

neilv 6 months ago

I used to have a BorgBackup server at home that used append-only and restricted-SSH.

It wasn't perfect, but it did protect against some scenarios in which a device could be majorly messed up, yet the server was more resistant to losing the data.

For work, the backup schemes include separate additional protection of the data server or media, so append-only added to that would be nice, as redundant protection, but not as necessary.

radarsat1 6 months ago

I've been using btrbk with a local Linux machine I use as a file server. It works well for incremental snapshot backups; there's no need to "thaw" an image, I can directly fetch files from a previous snapshot. The only thing I haven't figured out with btrfs is how to efficiently handle incremental backups to S3. I guess there's not much choice other than image diffs via btrfs send, since you don't have hard/ref links on object storage. But I don't like this, because to retrieve a file from some version I'd need an extra 30 TB free to restore the base image and then progressively apply all the diffs up to the point I want, which seems a lot harder. To make this reasonable I'd have to make periodic non-incremental base images, and it starts getting complicated.

TheFreim 6 months ago

I've been using Borg for a while, and I've been thinking about looking at the backup utility space again to see what is out there. What backup utilities do you all use and recommend?

  • singhrac 6 months ago

    I spent too long looking into this and settled on restic. I'm satisfied with the performance for our large repo and datasets, though we'll probably supplement it with filesystem-based backups at some point.

    Borg has the issue that it is in limbo, i.e. all the new features (including object storage support) are in Borg2, but there's no clear date when that will be stable. I also did not like that it was written in Python, because backups are not always IO blocked (we have some very large directories, etc.).

    I really liked borgmatic on Borg, but we found resticprofile, which is pretty much the same thing (it is underdiscussed). After some testing, a tip: I think it is important to set the GOGC and read-concurrency parameters. All the GUIs are very ugly, but we're fine with a CLI.

    If rustic matures enough and is worth a switch we might consider it.

  • muppetman 6 months ago

    restic

    Single binary, well supported, dedup, compression, excellent snapshots, can mount a backup to restore a single file easily etc etc.

    It's made my backups go from being a chore to being a joy.

    • rsync 6 months ago

      ... also you can point restic at any old SFTP server ...

  • Saris 6 months ago

    Restic is nice. Backrest if you like a webUI.

  • TiredOfLife 6 months ago

    Kopia

    • conception 6 months ago

      Kopia is surprisingly good. I use it with a B2 backend; it has percentage-based restore verification for regulatory items and is super fast. The only downside is the lack of enterprise features/centralized management.

  • actuallyalys 6 months ago

    I still use borg for local backups but use restic for all my offsite backups. Off-hand I don’t think restic lacks any feature borg has (although there’s probably at least one) after they added compression a few years ago.

nathants 6 months ago

Do something simpler. Backups shouldn’t be complex.

This should be simpler still:

https://github.com/nathants/backup

  • ajb 6 months ago

    Cool, but looks like it's going to miss capabilities, so not suitable for a full OS backup (see https://github.com/python/cpython/issues/113293)

    • nathants 6 months ago

      Interesting. I'm not trying to restore bootable systems, just data. Still, probably worthwhile to rebuild in Go soon.

  • Too 6 months ago

    Index of files stored in git pointing to a remote storage. That sounds exactly like git LFS. Is there any significant difference? In particular in terms of backups.

    • nathants 6 months ago

      Definitely similar.

      Git LFS is 50k loc, this is 891 loc. There are other differences, but that is the main one.

      I don't want a sophisticated backup system. I want one so simple that it disappears into the background.

      I want to never fear data loss or my ability to restore with broken tools and a new computer while floating on a raft down a river during a thunder storm. This is what we train for.

  • orsorna 6 months ago

    Is this a joke?

    I don't see what value this provides that rsync, tar and `aws s3 cp` (or AWS SDK equivalent) provides.

    • nathants 6 months ago

      How do you version your rsync backups?

      • somat 6 months ago

        I use rsyncs --link-dest

        abridged example:

            rsync --archive --link-dest 2025-06-06 backup_role@backup_host:backup_path/ 2025-06-07/
        
        
        Actual invocation is this huge hairy furball of an rsync command that appears to use every single feature of rsync as I worked on my backup script over the years.

            rsync_cmd = [
              '/usr/bin/rsync',
              '--archive',
              '--numeric-ids',
              '--owner',
              '--delete',
              '--delete-excluded',
              '--no-specials',
              '--no-devices',
              '--filter=merge backup/{backup_host}/filter.composed'.format(**rsync_params),
              '--link-dest={cwd}/backup/{backup_host}/current/{backup_path}'.format(**rsync_params),
              '--rsh=ssh -i {ssh_ident}'.format(**rsync_params),
              '--rsync-path={rsync_path}'.format(**rsync_params),
              '--log-file={cwd}/log/{backup_id}'.format(**rsync_params),
              '{remote_role}@{backup_host}:/{backup_path}'.format(**rsync_params),
              'backup/{backup_host}/work/{backup_path}'.format(**rsync_params) ]
        • nathants 6 months ago

          This is cool. Do you always --link-dest to the last directory, and that traverses links all the way back as far as needed?

          • somat 6 months ago

            Yes. This adds a couple of nice features: it is easy to go back to any version using only normal filesystem access, and because they are hard links it only uses space for changed files, and you can cull old versions without worrying about losing the backing store for the diff.

            I think it sort of works like Apple's Time Machine, but I have never used that product so... (shrugs)

            Note that it is not, in the strictest sense, a very good "backup", mainly because it is too "online". To solve that, I have a set of removable drives that I rotate through, so with three drives, each ends up with every third day.
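            The hard-link behavior described here (unchanged files stored once; culling one version never breaks another) is plain filesystem semantics, e.g.:

```python
import os
import tempfile

# Two "snapshots" of an unchanged file, hard-linked the way
# rsync --link-dest would do it.
d = tempfile.mkdtemp()
old = os.path.join(d, "2025-06-06.txt")
new = os.path.join(d, "2025-06-07.txt")

with open(old, "w") as f:
    f.write("unchanged file contents")
os.link(old, new)

assert os.stat(old).st_ino == os.stat(new).st_ino  # same inode: stored once
assert os.stat(new).st_nlink == 2

os.remove(old)                       # cull the older snapshot
with open(new) as f:                 # newer snapshot is still intact
    assert f.read() == "unchanged file contents"
```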

  • yread 6 months ago

    Uh, who has the money to store backups in AWS?!

    • seized 6 months ago

      Glacier Deep Archive is the cheapest cloud backup option at $1USD/month/TB.

      Google Cloud Store Archive Tier is a tiny bit more.

      • ikiris 6 months ago

        To quote the old mongodb video: If you don't care about restores, /dev/null is even cheaper, and its webscale.

      • mananaysiempre 6 months ago

        Both would be pretty expensive to actually restore from, though, IIRC.

        • fc417fc802 6 months ago

          Quite expensive, but it should only ever be a last resort after your local backups have all failed in some way or another. For $1/mo/TB you purchase the opportunity to pay an exorbitant amount to recover from an otherwise catastrophic situation.

          • Ferret7446 6 months ago

            If you don't test your backups, they don't exist.

            • seized 6 months ago

              There is a free tier that accounts for testing: the first 100 GB of transfer out of AWS per month is free.

        • seized 6 months ago

          Yes, about $90 USD per TB.

          But I weigh that against data recovery from failed disks, or the loss of the data I put in Glacier (family photos, etc.). Then it's dirt cheap.

    • nathants 6 months ago

      Depends how big they are. My high value backups go into S3, R2, and a local x3 disk mirror[1].

      My low value backups go into a cheap usb hdd from Best Buy.

      1. https://github.com/nathants/mirror

    • PunchyHamster 6 months ago

      Support for S3 means you can just have a MinIO server somewhere acting as backup storage (and MinIO is pretty easy to replicate). I have local S3 on my NAS replicated to a cheapo OVH server for backup.

puffybuf 6 months ago

I've been using device mapper + encryption to back up my files to an encrypted filesystem on regular files (cryptsetup on Linux, vnconfig+bioctl on OpenBSD). Is there a reason for me to use borgbackup? Maybe to save space?

I even wrote Python scripts to automatically clean up and unmount if something goes wrong (not enough space, etc.). On OpenBSD I can even double-encrypt with Blowfish (vnconfig -K) and then a different algorithm for bioctl.

  • anyfoo 6 months ago

    Does your solution do incremental backups at all? I have backups going back years, because with incremental backups each delta is not very large.

    Every once in a while things get thinned out, so that for example I have daily backups for the recent past, but only monthly and then even yearly backups further back.
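    The thinning described above can be sketched like this (a toy version of prune rules such as borg's --keep-daily/--keep-monthly, not any tool's actual algorithm):

```python
from datetime import date, timedelta

def thin(snapshots, today, keep_daily=7, keep_monthly=6):
    """Keep every snapshot from the last keep_daily days, then the
    newest snapshot per month until keep_monthly months are covered,
    then the newest snapshot per remaining year."""
    keep, months, years = [], set(), set()
    for snap in sorted(snapshots, reverse=True):        # newest first
        if (today - snap).days < keep_daily:
            keep.append(snap)
        elif len(months) < keep_monthly:
            if (snap.year, snap.month) not in months:
                months.add((snap.year, snap.month))
                keep.append(snap)
        elif snap.year not in years:
            years.add(snap.year)
            keep.append(snap)
    return keep

# 400 consecutive daily backups collapse to 14 kept snapshots:
# 7 daily + 6 monthly + 1 yearly.
today = date(2025, 6, 1)
daily = [today - timedelta(days=i) for i in range(400)]
assert len(thin(daily, today)) == 14
```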

    • hcartiaux 6 months ago

      I maintain my incremental backups and handle the rotation with a shell script (bontmia) based on rsync with `--link-dest` (it creates hard links for unchanged files from the last backup). I've been using this on top of cryptsetup/luks/ext4 or xfs for > 10 years.

      Bonus: the backups are readable without any specific tools, you don't have to be able to reinstall a backup software to restore files, which may or may not be difficult in 10 years.

      This is the tool I use: https://github.com/hcartiaux/bontmia

      It's forked from an old project which is not online anymore, I've fixed a few bugs and cleaned the code over the years.

LeoPanthera 6 months ago

Is that a big deal? You should probably be doing this with zfs immutable snapshots anyway. Or equivalent feature for your filesystem.

  • philsnow 6 months ago

    The purpose of the append-only feature of borgbackup is to prevent an attacker from being able to overwrite your existing backups if they compromise the device being backed up.

    Are you talking about using ZFS snapshots on the remote backup target? Trying to solve the same problem with local snapshots wouldn't work because the attack presumes that the device that's sending the backups is compromised.

    • LeoPanthera 6 months ago

      > Are you talking about using ZFS snapshots on the remote backup target?

      Yes.

  • homebrewer 6 months ago

    There's not much sense in using these advanced backup tools if you're already on ZFS, even if it's just on the backup server; I would stick with something simpler. Their whole point is reliable checksums, incremental backups, deduplication, and snapshotting on top of a 'simple' classical filesystem. Sound familiar to any ZFS user?

    • nijave 6 months ago

      Dedupe is efficient in Borg. The target needs almost no RAM

    • globular-toast 6 months ago

      Are there any good options for an off-site zfs backup server besides a colo?

      Would be interested to know what others have set up, as I'm not really happy with how I do it. I have ZFS on my NAS running locally. I back up to that from my PC via rsync, triggered by anacron daily. From my NAS I use rclone to send encrypted backups to Backblaze.

      I'd be happier with something more frequent from PC to NAS. Syncthing maybe? Then just do zfs send/receive to some off-site ZFS server.

      • aeadio 6 months ago

        Aside from rsync.net which was mentioned in a sibling comment, there’s also https://zfs.rent, or any VPS with Linux or FreeBSD installed.

        • globular-toast 6 months ago

          zfs.rent is in the wrong location and I can't see anything about zfs send/receive support on rsync.net. What kind of VPS product has multiple redundant disks attached? Aren't they usually provided with virtual storage?

      • gaadd33 6 months ago

        I think Rsync.net supports zfs send/receive

    • PunchyHamster 6 months ago

      well, till lightning fries your server. Or you fat-finger a command and fuck something up.

  • topato 6 months ago

    I'm also completely confused why this was at the top of my Hacki; it seems completely innocuous.

    • ajb 6 months ago

      Ideally a backup system should be implementable in such a way that no credential on the machines being backed up enables the deletion or modification of existing backups. That way, if your machines are hacked: a) the backups can't be deleted or encrypted in a ransomware attack, and b) if you can figure out when the first compromise occurred, you know the backup data from before that date is not compromised.

      I guess some people might have been relying on this feature of borgbackup to implement that requirement.

jbverschoor 6 months ago

Moved to duplicacy. Works great for me

  • jbverschoor 6 months ago

    Not to be confused with duplicati or duplicity

seymon 6 months ago

Borg vs Restic vs Kopia ?

They are so similar in features. How do they compare? Which to choose?

  • aborsy 6 months ago

    Restic is the winner. It talks directly to many backends, is a static binary (so you can drop the executable into operating systems that don’t allow package installation, like a NAS OS), and has a clean CLI. Kopia is a bit newer and less tested.

    All three have a lot of commands to work with repositories. Each one of them is much better than closed source proprietary backup software that I have dealt with, like Synology hyperbackup nonsense.

    If you want a better solution, the next level is ZFS.

    • PunchyHamster 6 months ago

      Kopia is VERY similar to Restic; the main differences are Kopia having a half-decent UI vs Restic being a bit more friendly for scripting.

      > If you want a better solution, the next level is ZFS.

      Not a backup. Not a bad choice for storage for backup server tho

      • omnimus 6 months ago

        But aren't ZFS snapshot replicas a backup? It seems that systems like TrueNAS went this way, and then you don't need another solution.

      • jopsen 6 months ago

        IMO the UI is a killer feature.

        I don't need to configure and monitor cron jobs.

    • seymon 6 months ago

      I am already using zfs on my NAS where I want my backups to be. But I didn't consider it for backups till now

      • aeadio 6 months ago

        You can consider something like syncthing to get the important files onto your NAS, and then use ZFS snapshots and replication via syncoid/sanoid to do the actual backing up.

        • aborsy 6 months ago

          Or install ZFS on end devices too, and do ZFS replication to the NAS, which is what I do. I have ZFS on my laptop, snapshot data every 30 minutes, and replicate the snapshots. They are very useful, as sometimes I accidentally delete data.

          With ZFS, the whole filesystem is replicated. The backup will be consistent, which is not the case with file-level backup; with the latter, you also have to worry about lock files, permissions, etc. The restore is also more natural and quick with ZFS.

          • fc417fc802 6 months ago

            I can't speak to zfs but I don't find btrfs snapshots to be a viable replacement for borgbackup. To your filesystem consistency point I snapshot, back the snapshot up with borg, and then delete the snapshot. I never run borg against a writable subvolume.

  • noAnswer 6 months ago

    I've been using Borg for eight years and it has never let me down, including a full 8 TB disaster restore. It's super resilient to crashes.

    When I tested Restic (eight years ago) it was super slow.

    No opinion about Kopia, never heard of it.

    • liotier 6 months ago

      Same here: my selection boiled down to Borg vs. Restic. I started with Restic because my friends used it and, while it was perfectly satisfactory functionally, I found it unbearably slow with large backups. Changed to Borg and I've been happy ever after!

      • GCUMstlyHarmls 6 months ago

        What is a "large" backup? Slow to backup locally or slow to backup over a network? (obviously you are not saying its slow without understanding the network is inherently slow, but more along the lines of maybe its network protocol is slow.)

        • liotier 6 months ago

          Those were only about 10 TB - home scale, and over SSH across 2 to 10 ms. I was coming from rdiff-backup, which satisfyingly saturated disk writes, whereas I didn't even understand what bottleneck restic was hitting.

  • the_angry_angel 6 months ago

    Kopia is awesome, with the exception of its retention policies, which work like no other backup software I've experienced to date. I don't know if it's just my stupidity, being stuck in 20-year-old thinking, or just the fact that it's different. But for me, it feels like a footgun.

    The fact that Kopia has a UI is awesome for non-technical users.

    I migrated off restic to Kopia due to memory usage. I am currently debating switching back to restic purely because of how retention works.

    • zargon 6 months ago

      I’m confused. Is Kopia awesome or is it a footgun? (Or are words missing?)

  • herewulf 6 months ago

    I don't know about the other two but restic seems to have a very good author/maintainer. That is to say that he is very active in fixing problems, etc..

  • spiffytech 6 months ago

    I picked Kopia when I needed something that worked on Windows and came with a GUI.

    I was setting up PCs for unsophisticated users who needed to be able to do their own restores. Most OSS choices are only appropriate for technical users, and some like Borg are *nix-only.