I currently have a 24/7 Linux old-office-PC-turned-server for self-hosting, and a desktop mostly for programming and playing games (Linux as a host + a Windows VM with a passed-through GPU). The server’s i5-3330 is usually at ~10-15% usage.

Here’s the actual idea: what if, instead of having a separate server and desktop, I had one beefy computer running 24/7 as a server that just spun up a Linux or Windows VM when I needed a desktop? GPUs and USB devices would be passed through, and I could buy a PCIe SATA or NVMe controller to pass through as well, so I wouldn’t have to worry about virtualized disk overhead.

I’m almost certain I could make this work, but I wonder if it’s even worth it: would it consume less power? What about damage to the components from staying powered on 24/7? It’d certainly be faster to access a NAS without the whole “Network-Attached” part, and powering on the desktop for remote access could just be a command over SSH instead of some convoluted remote WoL setup that I haven’t bothered with yet.
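Something like the following is what I have in mind for that SSH command, via libvirt’s Python bindings. This is only a sketch under assumptions: a libvirt-based host, a VM named “desktop”, and SSH access as “me@homeserver” are all placeholders.

```python
# Start the "desktop" VM remotely over libvirt's qemu+ssh:// transport.
# Requires the libvirt-python package on the client machine.
import libvirt

URI = "qemu+ssh://me@homeserver/system"  # hypothetical user/host
VM_NAME = "desktop"                      # hypothetical VM name

conn = libvirt.open(URI)
dom = conn.lookupByName(VM_NAME)

if dom.isActive():
    print(f"{VM_NAME} is already running")
else:
    dom.create()  # equivalent to `virsh start desktop`
    print(f"{VM_NAME} started")

conn.close()
```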

I’d love to hear your thoughts on this.

Edit 2 months later: Just bought a 7950X3D; the 3D V-cache half of it runs the virtualized desktop, while the other cores run the host and other VMs. Works perfectly when passing through a dedicated GPU, but iGPU passthrough seems very difficult, if not impossible; I couldn’t manage it.

Edit even later-er: iGPU passthrough is possible on Ryzen 7000 after all; everything works great now.
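For anyone wanting to copy the core split, here’s a rough sketch with libvirt’s Python bindings. I’m assuming the V-cache CCD shows up as host CPUs 0-7 plus SMT siblings 16-23, which is the usual layout, but verify with lscpu/lstopo on your own chip; the VM name is again a placeholder.

```python
# Pin a running VM's vCPUs to the 3D V-cache CCD of a 7950X3D.
# Assumes the cache CCD is host CPUs 0-7 (+ SMT siblings 16-23); the
# layout can differ, so check lscpu/lstopo first. Note that pinVcpu()
# affects only the live domain, not the persistent config.
import libvirt

HOST_THREADS = 32  # 16 cores * 2 threads on a 7950X3D
CACHE_CCD = set(range(0, 8)) | set(range(16, 24))

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("desktop")  # hypothetical VM name

# One boolean per host CPU: True = this vCPU may run there.
cpumap = tuple(cpu in CACHE_CCD for cpu in range(HOST_THREADS))
for vcpu in range(dom.maxVcpus()):
    dom.pinVcpu(vcpu, cpumap)

conn.close()
```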

  • Outcide@lemmy.world · 1 year ago

    Based on ancient memories, what kills hardware is temperature variation (repeated expansion and contraction eventually breaks things). So I wouldn’t worry about any damage to components from being left on 24/7; in fact, it might even be an improvement.

  • Nibodhika@lemmy.world · 1 year ago

    I wouldn’t do that if you have the hardware to keep things separated, but that’s because of what I run on my server. Keep in mind that besides CPU/GPU usage, a server’s heavy load falls on the network and disks, so while the system looks like it’s not doing much, it might be doing a lot of I/O operations. Both your disks and your network have limited capacity, so if you’re trying to play a game you might get longer load screens and higher ping than you would with one machine for each.

    That being said, it vastly depends on what you’re running on the server, but higher pings and lower FPS are a given since you’ll now have more processes running in the background; for games that are CPU-bottlenecked it will be a massive hit.
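    If you want to check whether a quiet-looking server is actually busy on I/O, here’s a small sketch using the psutil library (my assumption; iostat/iftop would show the same) that samples system-wide disk and network counters:

    ```python
    # Show how busy the disks and NIC really are while CPU looks idle.
    # Uses psutil (pip install psutil); samples counters over 5 seconds.
    import psutil

    INTERVAL = 5.0
    disk0 = psutil.disk_io_counters()
    net0 = psutil.net_io_counters()
    cpu = psutil.cpu_percent(interval=INTERVAL)  # blocks for the window
    disk1 = psutil.disk_io_counters()
    net1 = psutil.net_io_counters()

    mb = 1024 * 1024
    print(f"CPU: {cpu:.0f}%")
    print(f"Disk: {(disk1.read_bytes - disk0.read_bytes) / INTERVAL / mb:.1f} MB/s read, "
          f"{(disk1.write_bytes - disk0.write_bytes) / INTERVAL / mb:.1f} MB/s write")
    print(f"Net:  {(net1.bytes_recv - net0.bytes_recv) / INTERVAL / mb:.1f} MB/s in, "
          f"{(net1.bytes_sent - net0.bytes_sent) / INTERVAL / mb:.1f} MB/s out")
    ```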

  • MeldrikA · 1 year ago

    I did something like this some years ago. I had watched the “One PC, 2 Screens” video (or something like that) from Linus Tech Tips, so I wanted to try it myself.

    Unraid makes it easy to set up. I had 2 Windows VMs, each with its own screen, keyboard/mouse, and dedicated GPU, plus Docker running my self-hosted stuff.

    It uses more power though, because the PC is always on and the hardware needs to be beefy.

  • vegetaaaaaaa@lemmy.world · 1 year ago

    I did this for years (a 32GB RAM Ryzen 5 Debian box running as both desktop machine and libvirt hypervisor). I ended up migrating the VMs to a separate physical host because I sometimes had to shut down and dual-boot to Windows for games, while I needed a few services, like my Mumble server, always running. Other than that specific problem, it worked flawlessly.

  • huquad@lemmy.world · 1 year ago

    Definitely possible! But as the other commenters have pointed out, there are some costs/tradeoffs to be aware of. I’ll start by answering your questions. Power consumption could technically be lower when sharing a system due to less overhead (only one mobo, RAM, etc.), but power is mostly CPU/GPU, so I don’t think you’d see a huge difference. Likewise, an always-on VM vs. sleeping/powering off when you’re not using it should make only a marginal difference.

    Another commenter mentioned it, but always-on isn’t a problem. Sustained elevated drive temperatures can be an issue, but really you’re looking at elevated CPU/GPU temps, which won’t be. The bigger issue is temperature cycling, but even then consumer hardware is rated to last 10-20 years as long as you aren’t overvolting and you keep up with periodic repasting/repadding (every 5 years or so is typically recommended). Finally, for turning on your VM, I’d recommend just leaving it on; alternatively, you could send an SSH command as you suggested.

    Having a hypervisor server with VMs is very common and well documented if you only need VNC/SSH. Regardless, any server maintenance/reboots will obviously disrupt the desktop too. Additionally, VNC doesn’t carry audio. I believe Windows Remote Desktop does, but I’m not sure about the quality.

    To get improved video/audio, you’ll need a GPU, and once you add one, things get trickier. First, your host/server will try to use the GPU; there are ways to force it not to, but that can be fiddly. Alternatively, you can look into VFIO, which hands the GPU off to the VM when it’s turned on; however, this is even trickier. Lastly, you can install two GPUs (or use the iGPU/APU if applicable) and pass one through. Last I looked, NVIDIA and AMD are both workable options, and this is now easier than ever.

    Regardless, if you plan on gaming, know that some games (specifically ones with anticheat) will block you for playing in a VM. All that said, a combined desktop/server has some drawbacks but is still a great option. Your next step is choosing your hypervisor.
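    One sanity check when setting up VFIO: make sure the card you want to pass through is actually bound to vfio-pci rather than the host driver. A small sketch reading the standard sysfs paths (no extra libraries needed):

    ```python
    # List display-class PCI devices and the kernel driver bound to each,
    # to confirm the passthrough GPU sits on vfio-pci rather than
    # amdgpu/nouveau/nvidia.
    from pathlib import Path

    for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
        pci_class = (dev / "class").read_text().strip()
        if not pci_class.startswith("0x03"):  # 0x03xxxx = display controller
            continue
        driver = dev / "driver"  # symlink to the bound driver, if any
        bound = driver.resolve().name if driver.exists() else "(no driver)"
        print(f"{dev.name}: {bound}")
    ```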

  • poVoq@slrpnk.net · 1 year ago

    Possible, but having a relatively big and noisy PC + UPS etc. right next to your desk is not so great.

  • linearchaos@lemmy.world · 1 year ago

    I do this with Unraid (libvirt). VM overhead is real: I probably get 80% of the frame rate compared to bare metal when local. I never use it locally though; it’s in a rack in my basement.

    I generally run Parsec on it and remote in from a netbook. If you can get both client and server wired, the experience is mostly passable.

  • fuckwit_mcbumcrumble@lemmy.world · 1 year ago

    VMware Workstation works great. Just install Windows or Linux on the box, then Workstation, and fire up all the VMs you could dream of while using the PC as a normal PC.

    But just know that any time you need to reboot your PC, you also need to reboot your server, which sucks. It’s much better to just keep the old office PC; that old i5 uses very little power at idle compared to a modern CPU being perpetually kept awake.
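    Back-of-the-envelope, with illustrative idle wattages (these are guesses, not measurements; plug in numbers from a power meter and your own tariff):

    ```python
    # Rough yearly cost of keeping a box idling 24/7. All numbers are
    # placeholder assumptions; substitute measured wattage and tariff.
    OLD_I5_IDLE_W = 30    # old office PC running as a server
    BIG_RIG_IDLE_W = 80   # modern many-core box with a dGPU, always awake
    PRICE_PER_KWH = 0.30  # currency units per kWh

    for label, watts in (("old i5 server", OLD_I5_IDLE_W),
                         ("combined big rig", BIG_RIG_IDLE_W)):
        kwh_per_year = watts * 24 * 365 / 1000
        print(f"{label}: {kwh_per_year:.0f} kWh/year "
              f"~ {kwh_per_year * PRICE_PER_KWH:.0f}/year")
    ```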

      • fuckwit_mcbumcrumble@lemmy.world · 1 year ago

        I can’t find any benchmarks comparing Workstation to ESXi, but at work we spend most of our time in type 2 hypervisors and performance is just fine. Just make sure you’re not using the Windows Hypervisor Platform, because that does carry a huge performance penalty. Considering OP is currently on an old i5, I’m sure a modern CPU would handle the load just fine.

        But importantly, Workstation has something ESXi doesn’t: 3D acceleration. If you’re doing anything graphical, it makes a huge difference.

  • BombOmOm@lemmy.world · 1 year ago

    I personally do something similar and run several VMs on my main computer that perform various functions. As they are not particularly resource-intensive, I have never had an issue with it. I also went the lazier route and run games directly on the hypervisor, not in a VM.

    For you, GPU passthrough is the main hurdle. It is surmountable, but it isn’t as simple as other parts of VM setups. If you can get that part working well, everything else should fall into place.

    Also, for the sake of your own sanity, do not try to ‘share’ the GPU between the hypervisor and a VM. Use the onboard GPU for the hypervisor (or a baby add-in GPU if you don’t have onboard).
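    For what it’s worth, the libvirt side of passthrough is just a <hostdev> entry in the domain XML. A hedged sketch of adding one to the persistent config via the Python bindings (the PCI address and VM name are placeholders; take yours from lspci):

    ```python
    # Add a PCI passthrough <hostdev> to a VM's persistent config; it
    # takes effect on the guest's next boot. managed='yes' lets libvirt
    # rebind the device to vfio-pci automatically.
    import libvirt

    HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0b' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """  # hypothetical address 0000:0b:00.0

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("desktop")  # hypothetical VM name
    dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
    conn.close()
    ```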