I’d expected this but it still sucks.

  • Crogdor@lemmy.world · 1 year ago

    There are two kinds of datacenter admins: those who aren’t using VMware, and those who are migrating away from VMware.

  • brygphilomena@lemmy.world · 1 year ago

    Regrettably, there is currently no substitute product offered.

    I really don’t think you regret a God damn thing, Broadcom.

    • yeehaw@lemmy.ca · 1 year ago (edited)

      If you’re already running Windows, there’s Hyper-V. There’s also Proxmox, and tons of others. So they’re mistaken. 🤣

      • TheHolm@aussie.zone · 1 year ago

        They’re not all in the same league. Do you know of any free type 1 hypervisors out there? Xen, probably.

        • Voroxpete@sh.itjust.works · 1 year ago

          I assume what you’re looking for specifically here is a complete platform that you can install on bare metal, not just the hypervisor itself. In which case, consider any of these:

          • Proxmox
          • XCP-NG
          • Windows Hyper-V Server Core (basically Windows Server Nano with Hyper-V)
          • Any Linux distro running KVM/QEMU - Add Cockpit if you need a web interface, or use Virt-Manager, either directly or over X-forwarding
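For the last option, a minimal sketch of what the KVM-plus-Cockpit route looks like on a Debian-family distro (package names are assumptions for Debian/Ubuntu; adjust for your distro):

```shell
# Turn a stock Debian/Ubuntu machine into a KVM host with a web UI.
sudo apt install -y qemu-kvm libvirt-daemon-system virtinst   # hypervisor stack
sudo apt install -y cockpit cockpit-machines                  # web UI on https://<host>:9090
sudo adduser "$USER" libvirt                                  # manage VMs without root (re-login afterwards)
```

After that, VMs can be created either from the Cockpit "Virtual machines" page or with `virt-install` on the CLI.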
          • Anarch157a@lemmy.world · 1 year ago

            Any Linux distro running KVM/QEMU - Add Cockpit if you need a web interface, or use Virt-Manager, either directly or over X-forwarding

            No need for X-forwarding; you can connect Virt-Manager to a remote system that has libvirt.
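Concretely, virt-manager (and virsh) can point at a remote libvirt daemon over SSH; `user` and `vmhost` below are placeholder names:

```shell
# Open the local virt-manager GUI against a remote libvirt host over SSH.
virt-manager -c 'qemu+ssh://user@vmhost/system'

# The same connection URI works for one-off CLI queries:
virsh -c 'qemu+ssh://user@vmhost/system' list --all
```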

            • Voroxpete@sh.itjust.works · 1 year ago

              This is true, but not everyone gets to use a Linux system as their main desktop at work. I’m not aware of a Windows version of virt-manager, but if that exists it would be fucking rad.

        • yeehaw@lemmy.ca · 1 year ago (edited)

          Proxmox, Xen, and Hyper-V are all considered type 1, as far as I’m aware.

  • Moonrise2473@feddit.it · 1 year ago

    RIP VMware.

    Broadcom prefers to milk the top 500 customers with unreasonable fees rather than bother with the rest of the world. They know that nobody with a brain would intentionally start a new datacenter on VMware solutions.

  • Decronym@lemmy.decronym.xyz (bot) · 1 year ago (edited)

    Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

    Fewer Letters  More Letters
    ESXi           VMware virtual machine hypervisor
    HA             Home Assistant automation software; High Availability
    LTS            Long Term Support software version
    LXC            Linux Containers
    NAS            Network-Attached Storage
    Plex           Brand of media server package
    RPi            Raspberry Pi brand of SBC
    SBC            Single-Board Computer
    ZFS            Solaris/Linux filesystem focusing on data integrity

    8 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

    [Thread #506 for this sub, first seen 12th Feb 2024, 20:15] [FAQ] [Full list] [Contact] [Source code]

  • Changer098@lemmy.dbzer0.com · 1 year ago

    Well dang, I guess that “learn about proxmox” line on my to-do list just moved a little higher. For the most part, I’ve enjoyed using ESXi and am sad to see it go.

      • dan@upvote.au · 1 year ago

        I like Unraid… It has a UI for VMs and LXC containers like Proxmox, but it also has a pretty good Docker UI. I’ve got most things running on Docker on my home server, but I’ve also got one VM (Windows Server 2022 for Blue Iris) and two LXC containers. (LXC support is a plugin; it doesn’t come out-of-the-box)

        Docker with Proxmox is a bit awkward, since Proxmox doesn’t support Docker natively; you have to run Docker inside an LXC container or a VM.
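The usual workaround on Proxmox is a nested LXC container. A hedged sketch of what that looks like with Proxmox’s `pct` tool (container ID 200 and the template filename are placeholders; pick whatever template `pveam` has downloaded for you):

```shell
# Create an unprivileged LXC container with nesting enabled,
# which is what Docker needs to run inside it.
pct create 200 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
  --hostname docker-host \
  --features nesting=1,keyctl=1 \
  --unprivileged 1
pct start 200

# Install Docker inside the container.
pct exec 200 -- sh -c 'apt update && apt install -y docker.io'
```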

        • LifeBandit666@feddit.uk · 1 year ago

          I’m in the market for a NAS or thin client for these kinds of things, as an upgrade for my RPi Home Assistant.

          I’m stuck on hardware at the moment and think a cheap 2-bay NAS is probably the way to go. My concern is that I won’t be able to run all the things on a NAS, mainly because I’m clueless. This community talks in maths (as Radiohead say), so half the time I’m trying to decipher all the LXCs and other acronyms.

          Anyway, I think I need to learn Proxmox or Unraid, so your comment has me interested.

          My question to you is this: since your server is plugged in via Ethernet, can you access the Windows VM via a web interface? Or does it require a screen, keyboard, mouse, etc.?

          I think I’m gonna be running HA in a VM, along with AdGuard and maybe LMS in Docker containers, then probably a Windows VM for the Arrs and Plex. I assume all these things will have their own port, but I’m just not 100% sure about the actual Windows VM.

          • Scrath@lemmy.dbzer0.com · 1 year ago (edited)

            I run a couple of containers on my Lenovo mini PC. I have Proxmox installed on bare metal, and then one VM for TrueNAS, one for Docker containers, and one for Home Assistant OS.

            For me the limiting factor is definitely RAM. I have 20GB (because the machine came with a 2x4GB configuration and I bought a single 16GB upgrade stick) and am constantly at ~98% utilization.

            To be fair, about half of that is eaten up by TrueNAS alone, due to ZFS.

            The point I’m trying to make is: make sure you can put enough RAM into your machine. Some NASes have soldered memory you won’t be able to upgrade. How much CPU performance you need depends entirely on what you want to do.

            In my case the only CPU-intensive task I have is media transcoding, which can often be offloaded to dedicated hardware like Intel Quick Sync. The only annoying exception is hardware transcoding of H.265 (x265) media, which is apparently only supported on Intel 7th-gen and newer processors, and I have a 6th-gen i5… Or maybe I configured something wrong. No clue.

            Edit: I wrote that after reading the first half of your comment. Regarding connecting a screen: I think I had one connected once, to set up Proxmox. Afterwards I just log into the Proxmox web interface. If required, I can use that to get a GUI session for each VM as well.

            • LifeBandit666@feddit.uk · 1 year ago

              Hey, no, you answered a bunch of questions I had there. So I’m looking for an i7 with lots of RAM. Thanks, that’s excellent.

              • Scrath@lemmy.dbzer0.com · 1 year ago

                Just to be sure there isn’t a misunderstanding: by 7th gen I mean any Intel iX-7xxx processor or higher.

                The first one or two digits of the number after the dash give the processor’s generation. The digit immediately following the “i” just denotes the performance tier within that generation.
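That naming rule is mechanical enough to sketch in a few lines of shell: drop the “iX-” tier prefix, keep the leading digits, and everything except the last three digits is the generation.

```shell
# Parse the generation out of an Intel Core model name.
# e.g. "i5-6500" -> 6, "i7-10700K" -> 10.
model="i7-10700K"
num="${model#i[3579]-}"   # drop the "iX-" tier prefix  -> "10700K"
num="${num%%[!0-9]*}"     # keep only leading digits    -> "10700"
gen="${num%???}"          # strip last 3 (SKU) digits   -> "10"
echo "$gen"
```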

                • LifeBandit666@feddit.uk · 1 year ago

                  Thanks for the correction. I’ve lurked here and in the Reddit one, back before the time we don’t talk about, but I have no clue when it comes to hardware. I was given a PC to game on, and when I was talking to my mate about buying server bits I mentioned getting i7 processors. He told me they would be more powerful than my gaming rig, because that only has an i5.

                  This makes more sense. So I can get an i3-7xxx quad-core mini PC and try to upgrade the RAM and storage.

                  I have a bunch of ram sticks in a bottom drawer and some HDDs I’ve never managed to boot yet, so I have things to play with… I just don’t know what they are or if they work.

                  I love to tinker though. This all sounds like lots of fun

  • 0110010001100010@lemmy.world · 1 year ago

    Really glad I made the transition from ESXi to Docker containers about a year ago. They’re easier to manage, too, and lighter on resources. Plus upgrades are a breeze. Should have done it years ago…

    • kalpol@lemmy.world (OP) · 1 year ago

      I need fully segregated machines sometimes, though. I’ve got stuff that only runs on Win98 or XP (old radio programming software).

          • DeltaTangoLima@reddrefuge.com · 1 year ago

            No headaches here - running a two node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It’s been flawless for me.

            • TCB13@lemmy.world · 1 year ago (edited)

              If you’re already using LXC containers, why are you stuck with their questionable open-source model and ass of a kernel when you can just run LXD/Incus and have a much cleaner experience on a pure Debian system? It boots way faster, fails less, and is more open.

              Proxmox will eventually kill the free/community version; it’s just a question of time, and they don’t offer anything particularly good over what LXD/Incus offers.

              • DeltaTangoLima@reddrefuge.com · 1 year ago

                I’m intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven’t had any failures.

                I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

                • TCB13@lemmy.world · 1 year ago (edited)

                  comment history keeps taking aim at Proxmox. What did you find questionable about them?

                  Here’s the thing: I ran Proxmox professionally in datacenters from 2009 until the end of last year, with multiple clusters of around 10-15 nodes each. I’ve been around for all of Proxmox’s wins and fails; I’ve seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then the move to LXC containers.

                  While it worked most of the time, and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it’s built on Ubuntu’s kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top of it. I’ve been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

                  At some point not even simple things such as OpenVPN worked fine under Proxmox’s kernel. Realtek networking was probably broken more often than it worked, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you would get a half-broken system that could boot and pass a few tests but would randomly fail a few days later. Their startup is slow, slower than any other solution; it even includes daemons that are there just to ensure that other things are running (because most of them don’t even start with the system properly on the first try).

                  Proxmox is considerably cheaper than ESXi, so some businesses use it, like we did, but it’s far from perfect. Eventually Canonical invested in LXC, and a very good container solution, much better than OpenVZ and co., was born. LXC got stable and widely used, and LXD added the higher-level hypervisor management, networking, clustering, etc. And now we have all that code truly open-source, with its creators working on the project without Canonical’s influence.

                  There’s no reason to keep using Proxmox, as LXC/LXD has gotten really good in the last few years. Once you’re already running on LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated, and free?
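For a sense of the workflow being described, a hedged sketch of Incus on a plain Debian host (the container name `web` and the image alias are placeholders; exact image names depend on the remote you use):

```shell
# One-time setup with sensible defaults (storage pool, bridge network).
incus admin init --minimal

# Launch a container and run a command inside it.
incus launch images:debian/12 web
incus exec web -- apt-get install -y nginx
incus list
```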

                  I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

                  Well, if you have some time to spare for testing stuff, try LXD/Incus and you’ll see. Maybe you won’t replace all your Proxmox instances, but you’ll end up running a mixed environment like I did for a long time.

  • RedFox@infosec.pub · 1 year ago

    What about virtualizing Windows?

    The only thing I know of is Hyper-V, but I don’t think it’s widely used, and MS is pushing Azure $tack, right?

    • yeehaw@lemmy.ca · 1 year ago

      Hyper-V is definitely widely used…

      And lots of hypervisors support Windows guests, e.g. Proxmox.