• 2 Posts
  • 20 Comments
Joined 2 years ago
Cake day: June 11th, 2023


  • It’s great for single drive, RAID 0, and RAID 1 (RAID 10 is obviously fine too). Don’t use it for anything beyond that: BTRFS RAID 5/6 can still lose data, so it’s not acceptable for those.

    I’m not sure which tools Fedora includes to manage BTRFS, but these scripts are great: https://github.com/kdave/btrfsmaintenance. You use them to scrub and balance. Balance redistributes blocks, and scrub checks whether bits have unexpectedly changed due to bit rot (a hardware issue or a cosmic ray). Scrub weekly for essential photos, important docs, and the like; monthly for everything else. Balance monthly, or on demand if free drive space is tight and you want to claw some back.
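
    For reference, the raw commands those scripts wrap are simple to run by hand; a quick sketch, assuming your filesystem is mounted at /mnt/pool (substitute your own mount point):

      # start a scrub and check its progress later
      sudo btrfs scrub start /mnt/pool
      sudo btrfs scrub status /mnt/pool

      # balance, but only rewrite data chunks that are under 50% full
      # (much cheaper than a full balance; usually enough to reclaim space)
      sudo btrfs balance start -dusage=50 /mnt/pool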

    RAID 1 will give you bit rot detection via scrub and will self-heal whatever rot it detects (assuming both drives don’t mystically have the same bit flip, which is very unlikely). A single drive will just detect it.

    BTRFS snapshot then send/receive is excellent for a quick backup.
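
    A minimal sketch of that flow, assuming a subvolume named @data and a backup drive mounted at /mnt/backup (names are mine, not gospel):

      # send requires a read-only snapshot
      sudo btrfs subvolume snapshot -r /mnt/pool/@data /mnt/pool/@data-snap1

      # first backup: full send
      sudo btrfs send /mnt/pool/@data-snap1 | sudo btrfs receive /mnt/backup

      # later backups: incremental send against the previous snapshot
      sudo btrfs subvolume snapshot -r /mnt/pool/@data /mnt/pool/@data-snap2
      sudo btrfs send -p /mnt/pool/@data-snap1 /mnt/pool/@data-snap2 | sudo btrfs receive /mnt/backup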

    Remember that a BTRFS snapshot will keep all files in the snapshot, even if you delete them off the live drive. Delete 500 GB of stuff, but the space didn’t reduce? Probably a snapshot is remembering that 500 GB. Delete the snapshot and your space is back.

    You can make subvolumes inside a BTRFS volume, which are basically folders except you can snapshot just them. Useful for snapshotting your essential docs folder more often than everything else, or sending just it to a backup (scrub itself always runs over the whole filesystem).
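
    Creating one is a one-liner, and then you can snapshot it on its own schedule (paths assumed):

      # a subvolume just for the important stuff
      sudo btrfs subvolume create /mnt/pool/@docs

      # snapshot only that subvolume, leaving the rest of the pool alone
      sudo btrfs subvolume snapshot -r /mnt/pool/@docs /mnt/pool/@docs-$(date +%F)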

    Lastly, you can disable copy-on-write (CoW) for directories and the files under them. It reduces their safety but increases write speed; good for caches, and I’ve read VM drive images need it for performance.
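
    CoW is toggled with the C file attribute; note it only takes effect for files created after the flag is set (directory name is an example):

      # disable copy-on-write for a directory; new files inherit the flag
      sudo chattr +C /mnt/pool/vm-images
      lsattr -d /mnt/pool/vm-images   # should show the 'C' attribute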

    Overall, great. It’s built in, so there’s no need to muck with ZFS’s extra install steps, but you get the benefits ZFS has (as long as you’re OK being limited to RAID 1).






  • I trust the check: restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose. The --read-data-subset option does the structural integrity check while also reading back a chunk of the actual data. If I had more bandwidth, I’d check more.

    When I set up a new repo, I restore some stuff to make sure it’s actually there: restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'.

    You could automate that: regularly restore some essential-but-rarely-changing files and compare them against the live copies. I would do that if I weren’t lazy, I guess, just to make sure I’m not missing some key-but-slowly-changing files. Slowly/rarely changing because a diff will fail if the file changes hourly and you back up daily, etc.
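
    A rough sketch of that automation, assuming a canary folder of slow-changing files (all paths here are hypothetical):

      #!/bin/sh
      # restore a known folder from the latest snapshot...
      restic -r '/path/to/repo' --cache-dir '/path/to/cache' \
          --password-file '/path/to/passfile' restore latest \
          --target /tmp/restic-verify --include '/srv/canary'
      # ...and compare it byte-for-byte against the live copy
      if diff -r '/srv/canary' '/tmp/restic-verify/srv/canary'; then
          echo "canary files match"
      else
          echo "MISMATCH - investigate the backup!" >&2
      fi
      rm -rf /tmp/restic-verify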

    Or you could do as others have suggested and mount it locally, then just traverse it to make sure some key stuff works and is there: sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'.


  • I have my router (OPNsense) redirect all DNS requests to pihole/AdGuard Home. AdGuard Home is easier for this since you can have it redirect the wildcard *.local.domain, while pihole wants every single name individually (uptime.local.domain, dockage.local.domain). With that combo of the router not letting DNS out to upstream servers and my local DNS servers redirecting *.local.domain to the correct location(s), DNS requests inside my local network never get out to where an upstream DNS can tell you to kick rocks.
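
    You can sanity-check that setup from any LAN client with dig (IPs and names here are examples, not my real ones):

      # resolves via the local DNS server, as expected
      dig +short uptime.local.domain @192.168.1.10

      # should ALSO work: the router's redirect rule hijacks the query
      # to 1.1.1.1 and answers it locally instead of letting it leave
      dig +short uptime.local.domain @1.1.1.1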

    I combined the above with a (hella cheap for 10 yr) paid domain, got a wildcard certificate for it without any exposure to the WAN (a DNS challenge, so no IP gets recorded publicly, but devices accept the cert), and have all *.local.domain requests land on a single-server Caddy instance that does the final redirecting to specific services.
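
    The Caddy side is roughly this; a sketch only, assuming a Caddy build with a DNS-challenge plugin for your registrar (the cloudflare module, service names, and IPs are all placeholders):

      # /etc/caddy/Caddyfile
      *.local.domain {
          # wildcard cert via the DNS challenge, nothing exposed to the WAN
          tls {
              dns cloudflare {env.CF_API_TOKEN}
          }
          @uptime host uptime.local.domain
          handle @uptime {
              reverse_proxy 192.168.1.20:3001
          }
          handle {
              respond "unknown service" 404
          }
      }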

    I’m not fully sure what you’ve got cooking but I hope typing out what works for me can help you figure it out on your end! Basically the router doesn’t let anything DNS get by to be fucked with by the ISP.


  • I’m surprised no one’s mentioned Incus. It’s a hypervisor like Proxmox, but it’s designed to install onto plain Debian no problem. It does VMs and containers just like Proxmox, and snapshots too. The web UI is essential; you add a package repo to get it.
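
    On Debian that looks roughly like this; a sketch assuming the Zabbly package repo from the Incus README is already set up (the web UI ships as its own package there):

      sudo apt install incus incus-ui-canonical
      sudo incus admin init                             # interactive storage/network setup
      sudo incus config set core.https_address :8443   # expose the web UI on port 8443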

    Proxmox isn’t reliable if you’re not paying them; the free-repo people are the test people. A bit back they pushed a bad update that broke shit, and if I’d updated before they pulled it, I’d have been hosed.

    Basically you want a device whose updates you don’t have to worry about, because updating regularly is good for security. And Proxmox ain’t that.

    On top of their custom kernel and stuff, it just gets fewer eyes than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.

    So to summarize: Debian with Incus installed. BTRFS if you’re happy with 1 drive or 2 drives in RAID 1; BTRFS gets you scrubbing and bit rot detection (and protection with RAID 1). ZFS for more drives. Toss Cockpit on too.

    If you want less hands-on, go with OpenMediaVault. No room for Proxmox in my view, especially with no clustering involved.

    Also the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (especially if it’s rare and 1080p; it’ll do 4K no problem and multiple streams at once). The Nvidia card is just wasting power.



  • glizzyguzzler@lemmy.blahaj.zone to Selfhosted@lemmy.world: Advantages of rootless podman? (edited, 1 month ago)

    This answers all of your questions: https://github.com/containers/podman/discussions/13728 (link was edited, accidentally linked a redhat blog post that didn’t answer your Q directly but does make clear that specifying a user in rootless podman is important for security for the user running the rootless podman container if that user does more than just run the rootless podman container).

    So the best defense plus ease of use is root podman assigning non-root UIDs to the containers. You can do the same with Docker, but Docker with non-root UIDs assigned still carries the risk of the root-level Docker daemon being hacked and exploited. Podman has no daemon to hack and exploit, meaning root Podman with non-root UIDs assigned has no downsides!
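
    In practice that’s one flag; a minimal sketch (image and port are just examples) using rootful podman with --userns=auto, which carves each container its own unused UID/GID range from /etc/subuid:

      # rootful podman, but the container runs in a private UID range
      sudo podman run -d --name web --userns=auto -p 8080:80 docker.io/library/nginx

      # the processes' host UIDs are high, unprivileged numbers
      sudo podman top web huser hpid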





  • This was very insightful and I’d like to say I grokked 90% of it meaningfully!

    For an Incus container with its unique MAC interface, yes if I run a Docker container in that Incus container and leave the Docker container in its default bridge mode then I get the desired feature set (with the power of onions).

    And thanks for explaining CNI, I’ve seen it referenced but didn’t fully get how it’s involved. I see that podman uses it to make a MACVLAN interface that can do DHCP (until 5.0, but the replacement seems to be feature-compatible for MACVLAN), so podman will sidestep the pain point of having to assign a no-go-zone on the DHCP server for a Docker swath of IPv4s, as you mentioned. Close enough for containers that the host doesn’t need to talk to.

    So in summary:

    • I’ve got Docker doing the extent it can manage with MACVLAN and there’s no extra magicks to be done on it.

    • Podman will still use MACVLAN (no host to container comms still) but it’s able to use DHCP to get an address for the MACVLAN container.

    • If the host must talk to the container with MACVLAN, I can either use the MACVLAN bypass as you linked to above or put the Docker/Podman container inside an Incus container with its bridge mode.

    • Kubernutes continues to sound very powerful and flexible but is definitely beyond my reach yet. (Womp womp)

    Thanks again for taking the time to type and explain all of that!


  • Thanks for taking the time to reply!

    The host setup has eth0 as the physical interface to the rest of the network, with br0 replacing it completely. br0 has the same MAC as eth0, and eth0 just forwards to br0, which then does the bridging internally. Because br0 is a bridge, Incus can split it off without MACVLAN, using its nic device in bridge mode instead, which “Uses an existing bridge on the host (br0) and creates a virtual device pair to connect the host bridge to the instance.” That results in a network interface that has its own MAC and is assigned a local IP by the DHCP server on the network, while still being able to talk to the host.
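
    For the record, the whole setup is only a few lines; a sketch assuming Debian-style ifupdown and an instance named mycontainer:

      # /etc/network/interfaces addition: br0 takes over eth0
      auto br0
      iface br0 inet dhcp
          bridge_ports eth0

      # attach the Incus instance to the host bridge (nic device in bridge mode)
      incus config device add mycontainer eth0 nic nictype=bridged parent=br0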

    Incus accomplishes the same goal as Proxmox (Proxmox has similar bridge network devices for its containers/VMs), just without needing to be your OS/distro the way Proxmox does; it’s just a package.

    As for Docker, the parent interface is br0, which has supplanted eth0. MACVLAN is working as intended in Docker, as far as I can tell. The container gets a networking device with its own MAC address, and after I supply the MACVLAN network device with my network’s subnet, gateway, and a static IP address in the Docker compose file, it works as expected. If I don’t supply a static IP in the compose file, Docker just assigns it the first IP in the given subnet, with no DHCP interaction. The docker-net-dhcp plugin (I linked to the issue about it no longer working on the latest version of Docker) was made to give Docker network devices the ability to get an IP address over DHCP, but it’s clearly not something to rely on.
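
    The same thing from the CLI, with the static-IP wart visible (subnet, gateway, and IP are examples):

      # parent is br0; Docker will not do DHCP for this network
      docker network create -d macvlan \
          --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
          -o parent=br0 lan_macvlan

      # without --ip, Docker just picks the next free address itself
      docker run -d --name whoami --network lan_macvlan --ip 192.168.1.50 traefik/whoami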

    If I’m missing something about MACVLAN that makes DHCP work for Docker, let me know! Hardcoding an IP into a docker-compose file adds an extra step to remember compared to everything else being configured on the centralized DHCP server - hence the shoddy implementation claim for Docker.

    Thanks for the link to using another MACVLAN and routing around the host<-/->container connection issue inherent to MACVLAN. I’ll keep it in mind as an alternate to Incus container around another container! I do wish there could be something like Incus’ hassle-free solution for Docker or Podman.





  • glizzyguzzler@lemmy.blahaj.zone to Selfhosted@lemmy.world: Podman or rootless docker? (edited, 5 months ago)

    Hey bigdickdonkey, I recently tried and wasn’t able to shit my way through podman, there just wasn’t enough chatter and guides about it. I plan to revisit it when Debian 13 comes out, which will include podman quadlets. I also tried to get podman quadlets to work on Ubuntu 24 and got closer, but still didn’t manage and Ubuntu is squicky.

    I read about true user rootless Docker and decided that was too finicky to keep up to date. It needs some annoying stuff to update, from what I could tell. I was planning on many users having their own containers, and that would have gotten annoying to manage. Maybe a single user would be an OK burden.

    The podman people make a good argument for running podman as root and using userns to divvy out UIDs to achieve rootless https://www.redhat.com/en/blog/rootless-podman-user-namespace-modes but since podman is on the back burner till there’s more community and Debian 13, I applied that idea to Docker.

    So I went with root Docker with the goals of:

    • read only
    • set user to different UID:GID for each container
    • silo containers in individual Docker networks
    • nothing gets /var/run/docker.sock
    • cap_drop: all
    • security-opt=no-new-privileges
    • volumes all get tagged with :rw,noexec,nosuid,nodev,Z

    Basically it’s the security best practices from this list https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html
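
    Spelled out as a single docker run; the image name, UID:GID, and volume are placeholders, and the volume flags are the ones from the list above (:Z only matters on SELinux hosts):

      docker network create app_net   # the container's own silo network

      docker run -d --name app \
          --read-only \
          --user 3001:3001 \
          --network app_net \
          --cap-drop=ALL \
          --security-opt=no-new-privileges \
          -v app_data:/data:rw,noexec,nosuid,nodev,Z \
          some-image:latest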

    This still has risk of the Docker daemon being hacked from the container itself somehow, which podman eliminates, but it’s as close to the podman ideal I can get within my knowledge now.

    Most things will run as rootless+read-only+cap_drop with minor messing. Automatic ripping machine would not, but that project is a wild ride of required permissions. Everything else has succumbed, but I’ve needed to sometimes have a “pre launch container” to do permission changes or make somewhere like /opt writable.

    I would transition one app stack at a time to the best security practices, and it’s easier since you don’t need to change container managers. Hope this helps!


  • Your budget is really close to a Unifi Dream Router: https://store.ui.com/us/en/collections/unifi-dream-router/products/udr. Your family is gonna be way happier with you (zero downtime) and it’ll give you extender options if you ever need them. Unifi is good enough and they update regularly; just disable the cloud access stuff and you’re good.

    Otherwise you want OPNsense instead of OpenWrt. The upgrade process for OpenWrt is not automatic, while OPNsense’s is. Worth it not to have to dote on your router.

    And you should get an access point (Unifi something or TP-Link Omsomething); wifi is problematic with OpenWrt, and I’m not sure OPNsense even lets you do it (haven’t tried).

    And you’ll need a switch, dumb or managed, up to you if you want VLANs. The Opnsense box will have just one LAN port, so it requires a switch if you want to plug more than one thing into it. A switch with PoE+ can power the access point directly.

    OPNsense needs x64 (Intel or AMD CPUs), so get a small thin client like a Dell Wyse 5070 extended, an HP T730, or the mentioned Fujitsu Futro S720 (its CPU is old though; you can do better). There may be newer thin clients; you just want a mini PCIe slot so you can install a 2-port Intel gigabit card from eBay. Google “power efficient gigabit mini PCIe card”: there’s an older model that sucks power and a newer one that doesn’t. If you want more than gigabit, skip 2.5 GbE on Intel unless you google hard and expect extra power draw. There’s very limited point to 4-port cards; just go to higher gigabit speeds instead of bonding ports together (link aggregation). A switch switches better than the router can anyway, and offloading that frees CPU for actual routing work, so a 2-port card is the way.

    Slap Incus on it (superior but newer, fewer guides; LXD is the previous name if you’re googling) or Proxmox (good enough, more guides), make a VM, pass through the 2 ports of the PCIe card, and put OPNsense in the VM. Make an LXC container, slap Debian on it, and spin up the Unifi controller for your AP. Another container for AdGuard Home or pihole, and you’ve got a box that does all the basic nets in one. The built-in port on the thin client is how you access the underlying OS; it gets plugged into the switch you’ll have to get. If you got something with 2 GB of RAM and an AMD Geode/GX or an aged Intel Atom CPU, I’d just run OPNsense bare, no hypervisor stuff.
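
    The passthrough step, sketched for Incus; PCI addresses come from lspci and will differ, and all names here are examples:

      # create an empty VM and hand it both ports of the dual-port NIC
      incus init opnsense --vm --empty -c limits.cpu=2 -c limits.memory=4GiB
      incus config device add opnsense port1 pci address=0000:01:00.0
      incus config device add opnsense port2 pci address=0000:01:00.1

      # boot the OPNsense installer from an ISO
      incus config device add opnsense iso disk source=/path/to/OPNsense.iso boot.priority=10
      incus start opnsense --console=vga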

    Sorry for the info dump but there’s a lot of angles!

    But really, the Unifi dream router is much easier and solves it all-in-one. You need 3 pieces (router, wifi access point, Ethernet switch) for a good experience otherwise.