I know that for data storage the best bet is a NAS and RAID1 or something in that vein, but what about everything else: the Docker containers you're running, the carefully configured services on your RPi, the *arr services installed on your PC, and so on?

Do you have a simple way to automate backups and re-installs of these as well, or are you just resigned to eventually reconfiguring them all when the SD card fails, the OS needs a reinstall, or the disk dies?

  • rentar42@kbin.social · 5 points · 11 months ago

    There are lots of very good approaches in the comments.

    But I’d like to play the devil’s advocate: how many of you have actually recovered from a disaster that way? Ideally as a test, of course.

    A backup system that has never performed a restore operation must be assumed to be broken. Similar logic should be applied to disaster recovery.

    And no: I use a combined Ansible/Docker approach that I'm reasonably sure could quite easily recover most stuff, but I haven't fully rebuilt from just that yet.

    • Human Crayon@sh.itjust.works · 3 points · 11 months ago

      I have (more than I’d like to admit) recovered entirely from backups.

      I run Proxmox, with everything else in VMs. All VMs get backed up to three different places once a week, and backups are tested monthly on a rando Proxmox box to make sure they still work. I do like the backup system built into it; it serves my needs well.

      Proxmox itself could die and it wouldn't make much of a difference: I reinstall Proxmox, restore the VMs, and I'm good to go again.
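
      A minimal sketch of that flow with the built-in tools (VM ID, storage name, and dump path are just examples):

      ```bash
      # What the scheduled backup jobs run under the hood:
      vzdump 100 --storage backupnas --mode snapshot --compress zstd

      # On the freshly reinstalled node, restore the VM from the dump:
      qmrestore /mnt/pve/backupnas/dump/vzdump-qemu-100-2023_11_18-03_00_00.vma.zst 100
      ```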

    • Kaldo@kbin.social (OP) · 1 point · 11 months ago

      I'm not sure yet what Ansible does that a simple Docker Compose setup doesn't, but I will look into it more!

      My real backup test run will come soon, I think. For now I'm moving from Windows to Docker, but eventually I want to get an older laptop, put Linux on it, move everything to Docker there instead, and pretend it's a server. The less "critical" stuff I have on my main PC, the less I'm going to cry when I inevitably have to reinstall the OS or replace the drives.

      • rentar42@kbin.social · 1 point · 11 months ago

        I just use Ansible to prepare the OS, set up a dedicated user, and install/configure rootless Docker, then sync all the Docker Compose files from the same repo to the appropriate server and launch/update as necessary. I also use it to centrally administer any cron jobs, such as those for backups.

        Basically, if I didn't forget anything (which is always possible), I should be able to take a brand-new RPi with an SSD and replace one of mine with a single command.

        It also allows me to keep my entire setup “documented” and configured in a single git repository.
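
        A rough shell equivalent of what the playbook automates per host (user name, host name, and paths invented; the real logic lives in Ansible tasks):

        ```bash
        # On the target host: dedicated user plus rootless Docker
        # (the playbook also handles prerequisites such as uidmap)
        sudo useradd -m -s /bin/bash svc-docker
        sudo -iu svc-docker sh -c 'curl -fsSL https://get.docker.com/rootless | sh'

        # From the machine holding the git repo: sync compose files and (re)launch
        rsync -a compose/myhost/ svc-docker@myhost:~/compose/
        ssh svc-docker@myhost 'cd ~/compose && docker compose up -d'
        ```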

    • Dandroid@dandroid.app · 1 point · 11 months ago

      I restored from a backup when I swapped to a bigger SSD. Worked perfectly first try. I use rsnapshot for backups.
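
      A minimal rsnapshot sketch (retention and paths are examples, not necessarily my actual config):

      ```bash
      # /etc/rsnapshot.conf entries (fields must be TAB-separated):
      #   retain   daily    7
      #   retain   weekly   4
      #   backup   /home/   localhost/
      #   backup   /etc/    localhost/

      # Cron entries that rotate the snapshots:
      # 0 3 * * *   /usr/bin/rsnapshot daily
      # 0 4 * * 1   /usr/bin/rsnapshot weekly

      # Sanity-check the config before trusting it:
      rsnapshot configtest
      ```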

  • ikidd@lemmy.world · 2 points · edited · 10 months ago

    I run everything on a two-node Proxmox cluster with ZFS mirror volumes and replication of the VMs and CTs between them, run PBS with hourly snapshots, and sync that to multiple USB drives I swap off-site.

    The Docker VM can be ZFS-snapshotted before major updates so I can roll back.
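
    For example (dataset name is an example; find the right one with zfs list):

    ```bash
    # Snapshot the VM's disk before a major update:
    zfs snapshot rpool/data/vm-101-disk-0@pre-update

    # If the update goes sideways, stop the VM and roll back:
    zfs rollback rpool/data/vm-101-disk-0@pre-update
    ```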

    • twei@feddit.de · 1 point · 10 months ago

      You should get another node; otherwise, when node1 fails, node2 will reboot itself and then do nothing because it has no quorum.

        • twei@feddit.de · 2 points · 10 months ago

          I know, but every time I had to do that it felt like a janky solution. If you have a Raspberry Pi or something like that, you can also set it up as a QDevice.

          …and if you're completely fine with how it is, you can also just leave it as it is.

          • ikidd@lemmy.world · 3 points · 10 months ago

            So I started to write a reply saying basically that I was OK doing that manually, but then thought, "hell, I have a PBS box on the network that would do that fine." It took about 3 minutes to install the corosync-qdevice packages on all three boxes and enable it. Good to go.
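
            For anyone else doing this, it boils down to a few commands (the qnetd host can be any always-on box; the IP is made up):

            ```bash
            # On the external box (the PBS host in my case):
            apt install corosync-qnetd

            # On both cluster nodes:
            apt install corosync-qdevice

            # From one cluster node, register the qdevice:
            pvecm qdevice setup 192.168.1.50

            # Confirm the cluster now has three votes:
            pvecm status
            ```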

            Thanks for the kick in the ass.

          • ikidd@lemmy.world · 2 points · 10 months ago

            So since I now had a "quorate" cluster again, I thought I'd try out HA. I'd always been under the impression that unless you had a shared storage LUN, you couldn't HA anything. But I thought I'd trigger a replication and then down the second node, just as a test. And lo and behold, the first node brought up my OPNsense VM from the replicated image about 2 minutes after the second node lost contact, and the internet started working again.
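
            A sketch of the two pieces from the CLI (VM ID, job ID, and node name made up; the GUI does the same thing):

            ```bash
            # Replicate the VM's disks to the other node every 15 minutes:
            pvesr create-local-job 100-0 node2 --schedule "*/15"

            # Put the VM under HA management so the surviving node restarts
            # it from the replicated image after a failure:
            ha-manager add vm:100 --state started
            ```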

            I’m really excited about having that feature working now. This was a good night, thank you.

            • twei@feddit.de · 2 points · 10 months ago

              If you need another thing to do, you could try making your OPNsense setup HA so your internet never stops working while a node reboots. It's pretty simple to set up; you might finish it in 1-2 evenings. Happy clustering!

              • ikidd@lemmy.world · 2 points · 10 months ago

                I'll look into that. I did see the option in OPNsense once upon a time but never investigated it.

  • vividspecter@lemm.ee · 2 points · edited · 10 months ago

    I put all Docker data in one directory (or rather, a btrfs subvolume) and both snapshot it and back it up daily to multiple machines. The docker-compose files are kept in the same subvolume.
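
    Roughly, the daily job looks like this (paths and dates invented):

    ```bash
    # Read-only snapshot of the docker subvolume:
    btrfs subvolume snapshot -r /data/docker "/data/.snapshots/docker-$(date +%F)"

    # Ship it to another machine; with -p only the delta against a parent
    # snapshot that exists on both sides is sent.
    btrfs send -p /data/.snapshots/docker-2023-11-17 \
        /data/.snapshots/docker-2023-11-18 | ssh backuphost btrfs receive /backups/docker
    ```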

    My latest server is NixOS, so I don't even bother backing up the root subvolume, since the actual config is tracked in git and replicated on multiple machines. If I want to reinstall, I can just install NixOS and deploy the config, then copy over the docker subvolume and rebuild the containers. Some of this could be automated further (nixos-anywhere and disko look promising for the actual OS install), but my systems don't typically break often enough for that to be a significant issue.
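
    The redeploy might look roughly like this (flake URI, host name, and snapshot names invented):

    ```bash
    # Fresh machine: install NixOS, then deploy the tracked config:
    sudo nixos-rebuild switch --flake github:example/nix-config#homeserver

    # Pull the docker subvolume back from a replica (received subvolumes are
    # read-only, so take a writable snapshot), then bring everything up:
    ssh backuphost 'btrfs send /backups/docker/docker-2023-11-18' | sudo btrfs receive /data
    sudo btrfs subvolume snapshot /data/docker-2023-11-18 /data/docker
    cd /data/docker && docker compose up -d
    ```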

    You can go even further and either just use Nix for the services, or use Nix to build the containers themselves, but I have a working setup already and it's good enough, and I can easily switch to another distribution if issues start occurring in NixOS.

  • atzanteol@sh.itjust.works · 2 points · edited · 11 months ago

    1. Most systems are provisioned in Proxmox with Terraform.
    2. Configuration and setup are handled via Ansible playbooks after the server is available.
       a. Do NOT make changes on the server without updating your Ansible scripts, except during troubleshooting.
       b. Once troubleshooting is done, delete and re-create the VM from scratch using only the scripts, to ensure they work.
    3. VM storage is considered ephemeral. All long-term data/config that can't be re-created with Ansible is either stored on an NFS server with a RAID5 drive configuration or backed up to that same file server using rsnapshot.
    4. The NFS server is backed up nightly to Backblaze using duplicacy (see the sketch after this list).
    5. Any other non-VM systems, like personal laptops, are backed up nightly to the file server using rsnapshot. Those snapshots are then backed up to Backblaze using duplicacy.
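
    The duplicacy step, roughly (snapshot ID and bucket name invented; Backblaze credentials are supplied via environment variables):

    ```bash
    # One-time init inside the directory tree to protect:
    cd /srv/nfs
    duplicacy init nfs-server b2://my-duplicacy-bucket

    # Nightly run from cron or a systemd timer:
    duplicacy backup -stats
    ```
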
  • emax_gomax@lemmy.world · 1 point · 10 months ago

    I use Docker, so I don't really have to worry about reproducibility of the services or configurations. Docker will fetch the right services and versions. I've documented the core configurations so I can set them back up relatively easily. Anything custom I haven't documented I'll just have to remember, or discover I need to set up again.
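
    Assuming image versions are pinned in the compose files, the recreate step on a fresh machine is roughly:

    ```bash
    docker compose pull     # fetch the recorded service versions
    docker compose up -d    # recreate and start the containers
    ```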

  • tetris11@lemmy.ml · +2/−1 · 11 months ago

    Radical suggestion:

    • Once a year you buy a hard drive that can handle all of your data.
    • rsync everything to it (a sketch is below).
    • Unplug it and put it back in cold storage.
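
    The rsync step might look like this (mount points invented):

    ```bash
    # -a preserves permissions/owners/times; -H/-A/-X keep hard links,
    # ACLs, and extended attributes; --delete makes an exact mirror.
    rsync -aHAX --delete /mnt/data/ /mnt/yearly-drive/
    ```
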
    • atzanteol@sh.itjust.works · +2/−1 · 11 months ago

      Once a… year? There's a lot that can change in a year. Cloud storage can be pretty cheap these days. Back up to something like Backblaze, S3, or Glacier nightly instead.

  • lemmyvore@feddit.nl · 1 point · 11 months ago

    • Install Debian stable with the SSH server included.
    • Keep a list of the packages that were installed afterwards (there aren't many, but still).
    • All Docker containers have their compose files and persistent user data on a RAID1 array.
    • Have a backup job that rsyncs /etc, /home/user, and /mnt/array1/docker once a day to daily/ on another RAID1; once a week, rsync from daily/ to weekly/; once a month, write a timestamped tarball from weekly/ to monthly/. Once a month I also bring out an HDD from the drawer and do a backup of monthly/ with Borg (rough sketch below).
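
    A rough sketch of that rotation (destination paths invented; the Borg repo is created once with borg init):

    ```bash
    rsync -a --delete /etc /home/user /mnt/array1/docker /mnt/backup2/daily/         # daily
    rsync -a --delete /mnt/backup2/daily/ /mnt/backup2/weekly/                       # weekly
    tar czf "/mnt/backup2/monthly/backup-$(date +%F).tar.gz" -C /mnt/backup2 weekly  # monthly

    # Monthly, onto the HDD from the drawer:
    borg create /mnt/offline-hdd/borg-repo::monthly-{now} /mnt/backup2/monthly
    ```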

    For recovery:

    • Reinstall Debian + extra packages.
    • Restore the docker compose and persistent files.
    • Run docker compose up to bring the containers back.

    Note that some data may need additional handling; for example, databases should be dumped, not rsynced.
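
    For example, a containerized Postgres (container, user, and database names invented):

    ```bash
    docker exec my-postgres pg_dump -U appuser appdb > /mnt/backup2/daily/appdb.sql
    ```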

  • CameronDev@programming.dev · 1 point · 11 months ago

    I rsync my root and everything under it to a NAS, which will hopefully save my data. I wrote some scripts manually to do that.
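
    The core of such a script might look like this (NAS path invented; pseudo-filesystems and mounts excluded so only real data is copied):

    ```bash
    rsync -aHAX --delete \
        --exclude={"/proc/*","/sys/*","/dev/*","/run/*","/tmp/*","/mnt/*","/media/*"} \
        / nas:/backups/myhost/
    ```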

    I think the next best thing to do is to doco your setup as much as possible, whether in typed-up notes or Ansible/Packer/whatever; any documentation is better than nothing if you have to rebuild.

  • desentizised@lemm.ee · 1 point · edited · 11 months ago

    For about 4 years I just relied on a RAIDZ2 (ZFS) pool (faulted drive replacements never gave any issues), but I recently expanded the array and reinstalled the OS, and only now am I starting to incorporate Docker containers into my workflows. The live data is in ~ and rsynced nightly onto the new, larger RAIDZ2 pool, but there is also data on that pool which I've thus far never stored anywhere else.

    So my answer to the question would be an off-site Unraid install, which is still in the works. It really will only be that: catastrophe insurance. I probably won't even rely on parity drives there, in order to maximize space, since I already have double parity on ZFS.

    As far as reinstallation goes, I don’t feel like restoring ~ and running docker compose for all the services again would be too much of a hassle.

  • dr_robot@kbin.social · 1 point · 11 months ago

    My configuration and deployment is managed entirely via an Ansible playbook repository. In case of absolute disaster, I just have to redeploy the playbook. I do run all my stuff on top of mirrored drives so a single failure isn’t disastrous if I replace the drive quickly enough.

    For when that's not enough, the data itself is backed up hourly (via ZFS snapshots) to a spare pair of drives and nightly to S3 buckets in the cloud (via restic). Everything is automated with systemd timers and some scripts. The configuration for these backups is part of the playbooks, of course. I test the backups every 6 months by trying to reproduce all the services in a test VM. This has identified issues with my restoration procedure (mostly due to potential UID mismatches).
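
    A sketch of those two jobs (dataset and bucket names invented; S3 credentials and RESTIC_PASSWORD come from the timer units' environment):

    ```bash
    # Hourly, from a systemd timer: recursive snapshot of the data pool
    zfs snapshot -r "tank/data@auto-$(date +%Y%m%d-%H%M)"

    # Nightly: push the live data off-site with restic
    restic -r s3:s3.amazonaws.com/my-backup-bucket backup /tank/data
    ```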

    And yes, I have once been forced to reinstall from scratch and I managed to do that rather quickly through a combination of playbooks and well tested backups.

  • ssdfsdf3488sd@lemmy.world · 1 point · 10 months ago

    Virtualize the machine with Proxmox and use Proxmox Backup Server; if you get a catastrophic failure on the machine currently running the VM, just load the VM on a new system.

  • Decronym@lemmy.decronym.xyz (bot) · +1/−1 · edited · 10 months ago

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Git:  popular version control system, primarily for code
    HA:   Home Assistant automation software; also High Availability
    LXC:  Linux Containers
    NAS:  Network-Attached Storage
    Plex: brand of media server package
    RAID: Redundant Array of Independent Disks for mass storage
    RPi:  Raspberry Pi brand of SBC
    SBC:  Single-Board Computer
    SSD:  Solid State Drive mass storage

    8 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.


  • simpleslipeagle@lemmynsfw.com · +0/−1 · 11 months ago

    My server has a RAID1 mdadm boot drive and an 8-drive RAID6 with ZFS. It's been running for 14 years now. The only thing I haven't replaced over its lifetime is the chassis. In fact, the proc let out the magic smoke a few weeks ago; after some new parts, it's still going strong.