Started my first home server about 3 weeks ago and I really need to reconsider my storage options, but everything I read about NAS setups is going right over my head. This is gonna be a novel partially because writing this down helps me think through it, and I also just want to be sure I’m on the right track.

Here’s my current setup and what I’m looking to do:

  • My server itself is a little HP mini PC. i7, 2 TB SSD, solid little machine so far. Running Proxmox with a single Debian VM that houses all my Docker containers - I know I’m not using Proxmox to its full advantage, but whatever, it works for me. I mostly just use it for its backup system.

  • Currently using an 8 TB powered USB external, primarily for media and backup files. Everything else fits directly on the server’s internal SSD with plenty of space available, but being able to expand or migrate Nextcloud and Immich down the road would be nice

  • Coincidentally, I’ve been using a similar 8 TB external for my desktop for the past 3-4 years. Right now it’s just for desktop backups (CachyOS) and storing about 500GB worth of ROMs and growing. I used to use this to expand my Steam library, but over the years internal storage has gotten much cheaper so I really don’t need to do that anymore.

  • I’ve been reading about external drive shucking, since apparently that’s a thing? Seems like my best bet here would be to crack both of these external drives open and slap them into a NAS. 16TB would be plenty for my use.

  • Hardware: while I like the form factor of Synology/Terramaster/etc, seems like the better choice would be to just slap together my own mini-ITX build and throw TrueNAS on it. Easy enough, but what sort of specs should I look for? Since I already have 2 drives to slap in, I’d be looking to spend no more than $200. Alternatively, if I did want the convenience and form factor of a “traditional” NAS, is that reasonable within the budget? From what I’ve seen it’s mostly older models in that price range.

  • I assume I can essentially just mount the NAS like an external drive on both the server and my desktop, is that how it works? For example, Jellyfin on my server is pointed to /mnt/external, could I just mount a NAS to that same directory instead of the USB drive and not have to change a thing on the configuration side?

  • Will adding a NAS into the mix introduce any buffering/latency issues with Jellyfin and Navidrome?

  • What about emulation? I’m going to set up RomM pretty soon along with the web interface for older games, easy enough. But is streaming roms over a NAS even an option I should consider for anything past the Gamecube era?

  • greyfox@lemmy.world · 4 days ago

    My server itself is a little HP mini PC. i7, 2 TB SSD, solid little machine so far. Running Proxmox with a single Debian VM that houses all my Docker containers - I know I’m not using Proxmox to its full advantage, but whatever, it works for me. I mostly just use it for its backup system.

    Not sure how mini you mean, but if it has spots for your two drives, this should be plenty of hardware for both the NAS role and your VMs. TrueNAS can run VMs as well, but it might be a pain migrating from Proxmox.

    Think of Proxmox as a VM host that can do some NAS functions, and TrueNAS as a NAS that can do some VM functions. Play with them both, they will have their own strengths and weaknesses.

    I’ve been reading about external drive shucking, since apparently that’s a thing? Seems like my best bet here would be to crack both of these external drives open and slap them into a NAS. 16TB would be plenty for my use.

    It’s been a couple of years since I’ve shucked drives, but occasionally the drives are slightly different from normal internal drives. Some Western Digital drives repurposed one of the SATA power pins (the 3.3V pin): they worked in most computers, but on power supplies that actually had that pin wired you had to mask it off before the drive would fire up.

    I wouldn’t expect any major issues; just saying you should research your particular model.

    You say 16TB with two 8TB drives, so I assume you aren’t expecting any redundancy here? Make sure you have some sort of backup plan, because those drives will fail eventually; it’s just a matter of time.

    You can build those into some sort of RAID0 to get 16TB, or you can just keep them as separate drives. Putting them in a RAID0 gives you some read and write performance boost, but in the event of a single drive failure you lose everything.

    If 8TB is enough, you want to put them in a mirror, which gives you 8TB of storage and allows a drive to fail without losing any data. There is still a read performance boost, but maybe a slight loss on write performance.
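
    Just as a rough sketch of what those two layouts look like in ZFS terms (the pool name “tank” and the device names are placeholders, and TrueNAS would do all of this through its UI anyway):

        # Option A: stripe the two 8TB drives, ~16TB usable, no redundancy
        zpool create tank /dev/sda /dev/sdb

        # Option B: mirror them instead, ~8TB usable, survives one drive failure
        zpool create tank mirror /dev/sda /dev/sdb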

    Hardware: while I like the form factor of Synology/Terramaster/etc, seems like the better choice would be to just slap together my own mini-ITX build and throw TrueNAS on it. Easy enough, but what sort of specs should I look for? Since I already have 2 drives to slap in, I’d be looking to spend no more than $200. Alternatively, if I did want the convenience and form factor of a “traditional” NAS, is that reasonable within the budget? From what I’ve seen it’s mostly older models in that price range.

    If you are planning on running Plex/Jellyfin, an Intel with UHD 600 series or newer integrated graphics is the simplest and cheapest option. The UHD 600 series iGPU was the first Intel generation with hardware decode for H.265, so if you need to transcode, Plex/Jellyfin will be able to read almost any source content and re-encode it to H.264 to stream. It won’t handle everything (e.g. AV1), but at that price range that is the best option.
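
    If Jellyfin ends up staying in Docker, using that iGPU is mostly just a matter of passing the render device into the container and then enabling VA-API/QSV under Playback in the Jellyfin dashboard. Something along these lines (the config/media paths here are made up, adjust to your layout):

        # hand the Intel iGPU (/dev/dri) to the Jellyfin container for hardware transcoding
        docker run -d \
          --name jellyfin \
          --device /dev/dri:/dev/dri \
          -v /srv/jellyfin/config:/config \
          -v /mnt/external:/media \
          -p 8096:8096 \
          jellyfin/jellyfin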

    I assume I can essentially just mount the NAS like an external drive on both the server and my desktop, is that how it works? For example, Jellyfin on my server is pointed to /mnt/external, could I just mount a NAS to that same directory instead of the USB drive and not have to change a thing on the configuration side?

    Correct. Usually a NAS offers a couple of protocols. For Linux, NFS is the typical protocol used for that; for Windows it would be an SMB (Samba) share. NFS isn’t the easiest to secure, so you will either end up with some IP ACLs or just allow access to any machine on your internal network.
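
    As a concrete (made-up) example, if the NAS sat at 192.168.1.50 and exported /mnt/tank/media, the Debian VM could mount it over the old USB mount point with an /etc/fstab entry like this (the NFS client bits come from the nfs-common package on Debian):

        # mount the NAS export where the USB drive used to live
        192.168.1.50:/mnt/tank/media  /mnt/external  nfs  defaults,_netdev  0  0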

    If you are keeping Proxmox in the mix, you can also mount your NFS share as storage for Proxmox to create the virtual hard drives on. There are occasionally reasons to do this, like if you want your NAS to be making snapshots of the VMs, or for security reasons, but generally adding the extra layers is going to cut down performance, so mounting inside of the VM is better.
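
    And if you did want Proxmox itself to use the share for VM disks, it’s a single storage entry on the host, roughly like this (same made-up address, placeholder export path and storage name):

        # register the NFS export as a Proxmox storage backend for VM disk images
        pvesm add nfs nas-vmstore --server 192.168.1.50 --export /mnt/tank/vmstore --content images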

    Will adding a NAS into the mix introduce any buffering/latency issues with Jellyfin and Navidrome?

    Streaming apps will be reading ahead and so you shouldn’t notice any changes here. Library scans might take longer just because of the extra network latency and NAS filesystem layers, but that shouldn’t have any real effect on the end user experience.

    What about emulation? I’m going to set up RomM pretty soon along with the web interface for older games, easy enough. But is streaming roms over a NAS even an option I should consider for anything past the Gamecube era?

    Anything past the GameCube era is probably large ISO files. Any game from a disc is going to be designed to load data from disc with loading screens, and an 8TB drive over 1Gb Ethernet is faster than most optical drives are going to be read. The PS4, for example, only reads discs at 24MB/s. Nintendo Switch cards aren’t exactly fast either, so I don’t think they should be a concern.

    It wouldn’t be enough for current gen consoles that expect NVMe storage, but it should be plenty fast for running roms right from your NAS.

    • nfreak@lemmy.ml (OP) · 3 days ago

      This is all super helpful, appreciate it. Just for clarity, the mini PC right now is one of those tiny HP EliteDesks. Definitely no room to fit any extra drives, but I already pulled the trigger on a second machine after doing some more research, and that should be plenty for something that’s basically just going to be a storage box.

      Good catch on the redundancy; at the time of posting this I didn’t realize I needed the physical space/drives to set up that safety net. 8TB should be plenty for the time being. Say I wanted to add another drive or two down the road, what sort of complications would that introduce here?

      I do have a backup plan, but the mirror safety net is definitely a good call since my current approach isn’t ideal. Right now I’m storing most backups internally, on a small USB drive, and in a B2 bucket, and once a month I manually back up all of that plus my media/emulation library to a 20TB external drive that sits in my storage unit the rest of the time.

      Good to know network latency shouldn’t be too noticeable, guess that does make sense. I don’t expose anything publicly, LAN/VPN only and it’s just my wife and I here, so I’m not too concerned with locking down access any more than it needs to be.

      • greyfox@lemmy.world · 3 days ago

        Good catch on the redundancy; at the time of posting this I didn’t realize I needed the physical space/drives to set up that safety net. 8TB should be plenty for the time being. Say I wanted to add another drive or two down the road, what sort of complications would that introduce here?

        With TrueNAS your underlying filesystem is ZFS. When you add drives to a pool you can add them:

        • individually (RAID0 - no redundancy, bad idea)
        • in a mirror (RAID1 - usually two drives, a single drive failure is fine)
        • raidz1 (RAID5 - any single drive in the set can fail, one drive’s worth of data goes to parity). Generally a max of about 5 drives in a raidz1: if you make the stripe too wide, then when a drive fails and you start a rebuild to replace it, the chance of one of the remaining drives you are reading from failing, or at least failing to read some data, increases quickly.
        • raidz2 (RAID6 - any two drives can fail, two drives’ worth of data goes to parity). I’ve run raidz2 vdevs up to about 12 drives with no problems. The extra parity drive means the chance of data corruption, or of another drive failing while you are rebuilding, is much lower.
        • raidz3 (triple parity - any three drives can fail, three drives’ worth of data goes to parity). I’ve run raidz3 with 24-drive-wide stripes without issues, though that was usually for backup purposes.
        • draid (any parity level and stripe width you want). This is generally for really large arrays, like 60+ disks in a pool.

        Each of these sets is called a vdev. Each pool can have multiple vdevs, and data is essentially striped (RAID0) across all of the vdevs in the pool. ZFS tends to scale performance per vdev, so if you want it to be really fast, more smaller vdevs is better than fewer larger vdevs.

        If you created a mirror vdev with two drives, you could add a second mirror vdev later. Vdevs can be of different sizes, so it is okay if the second pair of drives is a different size. So if you buy two 10TB drives later, they can be added to your original pool for 18TB usable.
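
        Concretely, with placeholder pool/device names, growing a pool that started as a single two-drive mirror is just:

            # add a second mirror vdev (two 10TB drives); usable space grows from 8TB to ~18TB
            zpool add tank mirror /dev/sdc /dev/sdd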

        What you can’t do is change a vdev from one type to another. So if you start with a mirror you can’t change to a raidz1 later.

        You can mix different vdev types in a pool though. So you could have two drives in a mirror today, and add an additional 5 drives in a raidz1 later.

        Drives in a vdev can be different sizes, but the vdev gets sized based on the smallest drive. Any drives that are larger will be wasting space until you replace that smaller drive with one of a matching size.

        A rather recent feature lets you expand raidz1/2/3 vdevs. So you could start with two drives today in a raidz1 (8TB usable), and add additional 8TB or larger drives later, adding 8TB of usable space each time.
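
        On an OpenZFS/TrueNAS version new enough to have raidz expansion, that looks something like attaching the new disk to the existing raidz vdev (names are placeholders):

            # grow an existing raidz1 vdev by one disk
            zpool attach tank raidz1-0 /dev/sde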

        If you have a bunch of mismatched drives of different sizes you might want to look at UnRAID. It isn’t free but it is reasonably priced. Performance isn’t nearly as good but it has its own parity system that allows for mixing drives of many sizes and only your single largest drive needs to be used for parity. It also has options to add additional parity drives later so you can start at RAID5 and move to RAID6 or higher later when you get enough drives to warrant the extra parity.

        • nfreak@lemmy.ml (OP) · 2 days ago

          Incredible info, thank you so much for this. Next investment will definitely be a couple of extra drives then - the 8 will be fine for a bit but I’m definitely gonna outgrow that space within a month or two

  • gaylord_fartmaster@lemmy.world · 8 days ago

    I can’t answer each bullet (and a couple are dependent on other things like drive speed, activity, and network throughput), but I’ve been using shucked external HDDs for over a decade and would recommend it. I used to use OpenMediaVault running in a VM on Proxmox and briefly tried TrueNAS, but I’ve since migrated all of my VMs to LXCs. Now I just have the drives mounted directly on the Proxmox host, combined with mergerfs (not managed by Proxmox’s storage pools), and I pass that through to a Turnkey Linux file server LXC via bind mounts to share over SMB/NFS. Less overhead, and LXCs can share CPU/memory dynamically while VMs can’t.
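
    For reference, the moving parts of that setup on the Proxmox host look roughly like this (mount points and the container ID are made up):

        # /etc/fstab: pool the individual disks into one mergerfs view
        /mnt/disk1:/mnt/disk2  /mnt/storage  fuse.mergerfs  defaults,allow_other,category.create=mfs  0  0

        # bind-mount the pooled directory into the Turnkey file server LXC (ID 101 here)
        pct set 101 -mp0 /mnt/storage,mp=/mnt/storage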

    You should be able to replace that /mnt/external directory with no issues as long as the structure is the same within.

    • nfreak@lemmy.ml (OP) · 8 days ago

      This is actually really helpful and reassuring, even if I’m not planning on going that far with it just yet. tbh it feels like I’m overcomplicating the entire concept in my head, but that’s par for the course

      • gaylord_fartmaster@lemmy.world · 8 days ago

        I think the risk of losing data naturally leads to people seeking out the most robust storage solution possible, when 90% of those people would probably be better off with something simpler that has less to go wrong.