For reasons unexplained, you have no homelab hardware, but $1,000 in cash earmarked for the purpose.
What are you buying, what are you installing on it, and how is it different from what you’ve done previously (i.e. lessons learned)?
UDM-Pro, USW-Aggregation, USW-Enterprise-24-PoE, U6-LR… then build a server with an i5, 32 GB of RAM, and an NVMe boot drive, plus some RAID drives… I took out a loan in this scenario, as $1,000 wouldn’t cover my entire rack getting blown up.
I’d separate my storage and put that in its own server.
Then I’d probably go for multiple low-energy SFF “servers” instead of one powerful one.
> , but $1,000 in cash
Not sure how this would help me; I’ve spent $10k or more. But I could get a t-shirt, I guess?
N5105 NAS board, 32–64 GB of RAM, a 500 GB NVMe SSD, some sort of case, and a bunch of HDDs. I like the 8 TB IronWolfs; they’re cheap enough but large enough.
Maybe the N6005 if you can find it. It’s a great server and handles most self-hosting stuff. I run Ubuntu Server on it; it’s just the cleanest and easiest to use, no GUI needed.
What’s nice is it’s super low power, and cheap. So you can eventually migrate to a more powerful Proxmox server on mini PCs like the NAB6, then just turn the N5105 into a TrueNAS server, and even duplicate it for backups, or triplicate it (if you’re really feeling it) for redundancy. Getting a 2nd and 3rd Proxmox mini PC enables HA on VMs. So yeah, that’s my goal. ATM I still have to migrate to Proxmox.
Same but with a N100 motherboard. Asus and Asrock have some ITX boards with this chip.
I loved migrating from a 2015 Synology to 3 NUCs, so I think you’re 100% correct. (It allowed me to use TB networking for a 26 Gb Ceph network.)
TB = Thunderbolt?
For $1k I would start with a UniFi UDM-Pro, an Intel NUC, and a Synology NAS.
I regret getting a UDM-Pro and recently swapped it for an N5105 OPNsense box. Luckily they keep their value, so I didn’t lose any money on the UDMP.
Why do you regret that choice?
I have a UniFi system: APs, switches, CKG2, gateway. I’m looking to add a CKG2+ and some PoE cameras.
Honest question… why do people who know how to build one buy a NAS like Synology? Aren’t you just paying double or triple for the same result you could get by building the NAS from scratch?
Reliability and lower power consumption than most of the Frankenstein DIY cheap stuff recommended here ;)
No, you are not paying anywhere near double or triple.
My Synology came in at ~$750 for the chassis and two 8 TB IronWolf drives.
A custom build with TrueNAS was coming in at over $1k.
Hm, yeah maybe I just don’t know the pricing/cost of a Synology then.
In my country, the price of an 8 TB IronWolf drive alone is almost an entire month of the minimum wage here.
The cheapest Synology NAS available here is the DS223J, and it comes with no drives included and costs 80% of two months of minimum wage.
It’s way cheaper to repurpose old hardware or buy from AliExpress and do a DIY build; there’s no comparison. I also have no idea what “custom build” you’re referring to, as most NAS builds I’ve seen are pretty cheap: you don’t need much horsepower, and DDR4 memory has low prices nowadays.
We use Synology at work to avoid paying CALs on a Windows Server VM.
I bought a QNAP a long time ago; never again. It was like $3k with disks (6 × 6 TB drives) about 10 years ago. They constantly get hacked; a bunch of their NASes got cryptolockered because some dev hard-coded an admin password, IIRC. Their software does a bunch of stuff I don’t need, and it runs like shit now with just me using it. I’m going to reset it soon once I get my data off.
My NAS now is an R730xd with 12 × 12 TB drives in it running TrueNAS. Granted, my electric bill is a car payment with all my stuff, but it only cost me about $1,500 for disks, and the server was super cheap and has a 10-gig connection.
Granted, some of it is cool if you’re still learning: one click and you can have a MySQL/PHP server on there, etc. I thought about getting a Synology, but all the bells and whistles its apps offer are things I can just run on a real server.
After my past Ubiquiti experiences I can’t agree on the UDM…
I’m still a beginner at this, but I would say don’t over-prioritize cores. RAM will be your bottleneck first. I say this as someone with 36 physical cores and like 90% of them idle.
u/diffraa, this is a key point.
At $dayjob, we use 4 GB per core for application workloads and it works well. Databases get 16 GB per core. Memcached gets 32 GB per core. In development we use 16 GB per core because there isn’t heavy load.
My own homelab is built around a bunch of quad cores with 32 GB of memory. The memory has come in useful. Having 64 GB per quad core would be even better, but was not possible when I built the systems many years ago (I bought super cheap $40 motherboards with only two slots). For my initial purpose getting 2x 1 GB sticks would have been enough, but I’m glad I bought more as I use all the memory now.
If you don’t know what you want to do, I would get 8 GB of memory per core at minimum, and in a lightly loaded homelab, 16 GB per core is totally reasonable. I would only get less memory if you know you’re going to hit the CPUs hard with particular tasks that share memory or use little memory, and even then I would get minimum 4 GB per core.
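The per-core ratios above can be turned into a quick sizing sketch. This is just a back-of-the-envelope calculator using the numbers quoted in this thread; the workload labels are illustrative, not standard terms:

```python
# Rough RAM sizing from the per-core rules of thumb quoted above.
# These ratios are one commenter's experience, not universal guidance.
GB_PER_CORE = {
    "application": 4,     # app workloads at $dayjob
    "database": 16,       # databases
    "memcached": 32,      # memcached
    "homelab_min": 8,     # minimum if you don't know what you'll run
    "homelab_comfy": 16,  # lightly loaded homelab
}

def ram_needed(cores: int, workload: str) -> int:
    """Suggested RAM in GB for a box with `cores` cores."""
    return cores * GB_PER_CORE[workload]

# A quad-core homelab node by these rules:
print(ram_needed(4, "homelab_min"))    # 32 GB, matching the builds above
print(ram_needed(4, "homelab_comfy"))  # 64 GB
```

By these rules, the 32 GB quad-core boxes mentioned earlier sit exactly at the 8 GB/core floor, which matches the observation that the memory fills up before the cores do.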
Dell PowerEdge budget server. An R720 can have good specs for cheap on eBay. Get a Ubiquiti switch for VLANs, and a firewall brand of your choice (I went with a TZ400W). You should have some money left over to buy an endpoint as well. Then install VMware and build out a VM environment of your choice. I chose Windows just to continue learning the systems I administer.
Just chiming in that the consensus on Mini PC clusters is pretty cool.
Completely agree. That’s where it’s at!
I have a rack full of R710s that barely get used anymore because energy is so freaking expensive. I’d either do everything in the cloud or use lots of low-powered machines at home.
I would buy a second-hand workstation with all the PCIe slots I could get. They’re bargains, and you can pull or upgrade CPUs as needed. Need more RAM? Put the second CPU in. Don’t need it? Pull it out.
A bigger NAS with more drive bays
I’d do almost what I have now: a compact (ITX/mATX) board with a C612 chipset and an E5-2600 v3/v4, maxed out with memory. SAS board/NVMe/10G if you want or need it. Silent and efficient for 24/7.
Bought a Dell R630 from eBay for a decent price, but I wish I’d spent more on larger-capacity hard drives. I bought a bunch of old 600 GB HDDs running RAID 10 that I’m now afraid to replace.
I got an enterprise-class 19" short-depth chassis with a Supermicro motherboard and a Xeon D (they’re soldered to the motherboard) with 8 cores at 2-something GHz, multithreading, and so on. Bought a 128 GB ECC RAM kit and a pair of Intel enterprise 1 TB SSDs. Installed Proxmox with mirrored disks, and it’s now running 8 containers and 3 VMs. Really low power consumption, just a bit loud, but perfect for the garage. Placed it inside an IKEA Lack table and mounted it up above a door. Avoid buying consumer-class SSDs, as they’ll only last you a few months in a configuration like this (that comes from experience: 20% wear in 6 months with the initial Kingston I bought).
https://www.servethehome.com/introducing-project-tinyminimicro-home-lab-revolution/
Small footprint, low wattage, and a modern CPU that can run anything I throw at it; just get a lot of RAM. I’d run Ubuntu or Debian, with all apps in Docker containers, and maybe install Cockpit if I wanted a web GUI. I’d run VMs if I want via KVM: https://ubuntu.com/blog/kvm-hyphervisor
If you want to go the NAS/Plex route, you can add an HDD via a 10G USB enclosure. Great Level1Techs video about mini PC home servers: https://youtu.be/GmQdlLCw-5k?si=VrdfDRfmpNHCZz-H
Everyone here is recommending tiny labs, but what if you need lots of TBs? Is there a solution then? I have a MicroServer Gen8 (which is plenty powerful) but need way more space, and I was going to buy something that can fit 10+ hard drives…
There are lots of solutions.
Cheap:
Buy a full-tower PC case with room for 10+ HDDs. Lots of options, like those from Fractal, Cooler Master, etc.
Enterprise (expensive):
Buy a JBOD with a backplane that you plug all your disks into, then plug that into a server.
Can you make ZFS pools across devices with Proxmox? Otherwise I don’t know what you’d do for storage redundancy or RAID, unless you run something like Longhorn or Ceph across the cluster; all those machines have a single drive.