
bigMouthCommie ,
@bigMouthCommie@kolektiva.social avatar

what does this mean for me? i have a lenovo 82k100lqus

kalpol OP ,

Doesn't mean anything right now if you are running ESXi, except you can't reinstall ESXi unless you kept the image and you won't get ESXi updates.

bigMouthCommie ,
@bigMouthCommie@kolektiva.social avatar

i looked it up, and it's part of vmware? i don't run that so shrug

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

*proxmox*

bigMouthCommie ,
@bigMouthCommie@kolektiva.social avatar

oh, fuck. really? what if i have 12 year old copy?

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

I meant you should switch to proxmox. What are you referring to?

nrezcm ,

No this is Patrick.

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

Spongebob is that you?

TCB13 ,
@TCB13@lemmy.world avatar

He should really switch to LXD/Incus, not Proxmox as it will end like ESXi one day.

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

LXD is slow and doesn't support HA

TCB13 , (edited )
@TCB13@lemmy.world avatar

LXD uses QEMU/KVM/libvirt for VMs, so the performance is at least the same as any other QEMU solution like Proxmox. The real difference is that LXD has a much smaller footprint and doesn't depend on 400+ daemons, so it boots and runs management operations much faster. The virtualization tech is the same and the virtualization performance is the same.

Here's one of my older LXD nodes running HA:

https://lemmy.world/pictrs/image/266e723e-62f9-4ca5-86eb-b0ce45a7f342.png

It's "so hard" to run HA under LXD... you just have to download the official HA OS image and import it into LXD.
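For anyone curious, the flow is roughly this; a sketch assuming current LXD CLI syntax, with the image filename and instance name as placeholders (grab the actual qcow2 from the Home Assistant OS releases page):

```sh
# Unpack the official HA OS disk image (filename is a placeholder)
unxz haos_ova.qcow2.xz

# VM images are imported as a metadata tarball plus the disk image;
# metadata.tar.gz is a tiny archive with a metadata.yaml (see the LXD image docs)
lxc image import metadata.tar.gz haos_ova.qcow2 --alias haos

# HA OS ships its own bootloader setup, so Secure Boot is disabled here
lxc init haos home-assistant --vm -c security.secureboot=false
lxc start home-assistant
```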

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

Sorry I meant high availability as in the ability to live transfer a VM to a different host without downtime or service interruptions.

I hear that a lot of people are using LXC. Are you using a container runtime in the LXC container? (i.e. docker or podman)

TCB13 ,
@TCB13@lemmy.world avatar

Sorry I meant high availability as in the ability to live transfer a VM to a different host without downtime or service interruptions.

Oh, my bad then. But yes, like Proxmox, LXD/Incus can do live migrations of VMs since 4.20 (2021, I believe). Live migration of containers can be done under specific circumstances as well.

Are you using a container runtime in the LXC container? (i.e. docker or podman)

In some of them yes. At least under Debian as long as you've set security.nesting=true it will work fine.
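For reference, the switch being described is a per-container config key; the container name and image below are just examples:

```sh
# Enable nesting on an existing container so Docker/Podman can run inside it
lxc config set docker-host security.nesting true
lxc restart docker-host

# Or set it when the container is created
lxc launch images:debian/12 docker-host -c security.nesting=true
```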

TCB13 ,
@TCB13@lemmy.world avatar

*proxmox*

*LXD/Incus*

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

LXD is not really usable for anything as it is very slow

TCB13 , (edited )
@TCB13@lemmy.world avatar

LXD uses QEMU/KVM/libvirt for VMs, so the performance is at least the same as any other QEMU solution like Proxmox. The real difference is that LXD has a much smaller footprint and doesn't depend on 400+ daemons, so it boots and runs management operations much faster. The virtualization tech is the same and the virtualization performance is the same.

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

Maybe I'm just doing it wrong. I've just found LXD to be lacking as you can't live transfer it to a different host. It is also slower than Docker and Podman, and I was unable to get Docker running in an unprivileged LXC container. I think it should be possible to run Docker in LXC, but by the time I've spent the effort, it is more secure and easier to use a full virtual machine.

Maybe I should revisit the idea though as it seems like many people stand by it.

TCB13 ,
@TCB13@lemmy.world avatar

I’ve just found LXD to be lacking as you can’t live transfer it to a different host

It isn't lacking... https://linuxcontainers.org/incus/docs/main/howto/move_instances/#move-instances but as with Proxmox there are details when it comes to containers. VMs can fully migrate live.
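In an Incus cluster the move itself is a one-liner; the instance and member names here are examples, and a running VM needs stateful migration enabled first:

```sh
# Allow live (stateful) migration for this VM
incus config set myvm migration.stateful=true

# Move it to another cluster member; running VMs migrate live
incus move myvm --target node2
```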

I was unable to get Docker running in an unprivileged LXC container

What host OS are you running on? Did you set security.nesting true on said container?

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

I probably just set it up wrong.

mindlight ,

Along with the termination of perpetual licensing, Broadcom has also decided to discontinue the Free ESXi Hypervisor, marking it as EOGA (End of General Availability).

Wiktionary:
Adjective
perpetual (not comparable)
Lasting forever, or for an indefinitely long time.

Hello ProxMox here I come!

TCB13 ,
@TCB13@lemmy.world avatar

Hello ProxMox here I come!

Proxmox is questionable open-source, performs poorly and will most likely end up burning the free users at some point. Get yourself into LXC/LXD/Incus, which does both containers and VMs, is way more performant and clean, and is also available in Debian's repositories.

timbuck2themoon ,

You know, you can recommend lxd and whatever without putting out FUD about proxmox and other tech.

TCB13 ,
@TCB13@lemmy.world avatar

While I get your point... I kind of can't: https://lemmy.world/comment/7476411

acockworkorange ,

What about Proxmox makes its license questionable?

TCB13 ,
@TCB13@lemmy.world avatar

First they're always nagging you to get a subscription. Then they make system upgrades harder for free customers. Then they gatekeep the enterprise repositories in true Red Hat fashion and have withheld important fixes from the pve-no-subscription repository multiple times.

acockworkorange ,

As long as the source code is freely available, that's entirely congruent with the GPL, which is one of the most stringent licenses. You can lay a lot of criticism on their business practices, and I would not deploy this on my home server, but I haven't seen any evidence that they're infringing any licenses.

TCB13 ,
@TCB13@lemmy.world avatar

Okay, if you want to look strictly at the licenses per se, no issues there. But the rest of what I described, I believe we can agree, is very questionable, and that's what makes it questionable open-source.

acockworkorange ,

I beg to differ. Building a business model around open source is tricky at best. There's always tradeoffs, and their model means they have less support from the broader community as their project will be used less. It's their choice to make and I don't see anything questionable with it. It's one of the stated goals of GPL to not impede business with open source.

Proxmox isn't making you sign away rights granted by the license - that to me is questionable legally and downright bullshit morally. Again, what they're doing is fine, even if it makes their product undesirable to me.

Thank you for putting the word out on Incus as an alternative to Proxmox, one that is likely to fit the needs of many that are ill served by Proxmox. But besmirching their reputation on moral grounds doesn't do anyone any favors. It ends up soiling the reputation of Incus as a side effect, even.

TCB13 ,
@TCB13@lemmy.world avatar

besmirching their reputation on moral grounds doesn’t do anyone any favors.

I'm not sure if you came across my other comment about Proxmox (here) but unfortunately it isn't just "besmirching their reputation on moral grounds".

Also, I would like to add that a LOT of people use Proxmox to run containers and those containers are currently LXC containers. If one is already running LXC containers why not have the full experience and move to LXD/Incus that was made by the same people and designed specifically to manage LXC and later on VMs?

After all Proxmox jumps through hoops when managing LXC containers as they simply retrofitted both their kernel and pve-container / pct that were originally developed to manage OpenVZ containers.

acockworkorange ,

I'm not sure if you came across my other comment about Proxmox (here) but unfortunately it isn't just "besmirching their reputation on moral grounds".

I have, and it was based on that that I wrote what I did. I still think those choices are business decisions that are not against open source, neither in letter nor in spirit. It seems you disagree.

Also, I would like to add that a LOT of people use Proxmox to run containers and those containers are currently LXC containers. If one is already running LXC containers why not have the full experience and move to LXD/Incus that was made by the same people and designed specifically to manage LXC and later on VMs?

Why not, indeed? I thanked you before for raising awareness for that. Please keep up. It's really the "Proxmox is fake open source" discourse I take issue with. I think it would be more helpful if you said "and you get all security updates for free with Incus, unlike Proxmox." It's a clear, factual message, devoid of a value judgement. People don't like to be told what to think.

Also it's weird that you take issue with Proxmox but not LXD. From what I read in the Incus initial announcement, what Canonical did with LXD is barely legal and definitely against the spirit of its license. Incus is a drop in replacement. Why even bring LXD up?

And, as far as micro to small installations go, TrueNAS is another alternative that plays well with open source (AFAIK). Unlikely to be used specifically for VMs or containers, but it's a popular choice for home servers for a reason.

To sum it up: I'm trying to provide some constructive criticism of your approach. But I'm just an internet stranger so... You do you. I hope you think about it, though.

TCB13 ,
@TCB13@lemmy.world avatar

Also it’s weird that you take issue with Proxmox but not LXD. From what I read in the Incus initial announcement, what Canonical did with LXD is barely legal and definitely against the spirit of its license. Incus is a drop in replacement. Why even bring LXD up?

Mostly because we're in a transition period from LXD to Incus. If you grab Debian 12 today you'll get LXD 5.0.2 LTS from their repositories, which is supported both by the Debian team and the Incus team. Most online documentation and help on the subject can also be found more easily under "LXD". Everyone should be running Incus once Debian 13 comes along with it, but until then the most common choice is LXD from the Debian 12 repositories. I never suggested, and will never suggest, that anyone install/run LXD from Canonical.
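On a stock Debian 12 install, that LTS package is a plain apt install away (no snaps involved):

```sh
sudo apt install lxd          # LXD 5.0.x LTS from the Debian repositories
sudo lxd init                 # interactive setup: storage pool, network bridge, etc.
sudo usermod -aG lxd "$USER"  # allow your user to talk to the daemon (re-login afterwards)
```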

It’s really the “Proxmox is fake open source” discourse I take issue with. I think it would be more helpful if you said “and you get all security updates for free with Incus, unlike Proxmox.” It’s a clear, factual message, devoid of a value judgement. People don’t like to be told what to think.

I won't say I don't get your point, I get it, I kinda pushed it a bit there and you're right. Either way, what stops Proxmox from doing the same thing Broadcom/ESXi did now? We're talking about a for-profit company, while the alternative, Incus, sits behind the Linux Containers initiative, which is effectively funded by multiple parties.

And, as far as micro to small installations go, TrueNAS is another alternative that plays well with open source (AFAIK). Unlikely to be used specifically for VMs or containers, but it’s a popular choice for home servers for a reason.

Yes, TrueNAS can be interesting for a lot of people and they also seem to want to move into the container use-case with TrueNAS Scale but that one is still more broken than useful.

acockworkorange ,

What stops Proxmox is the same thing "stopping" Canonical. The next day there'll be a fork and anyone can start selling pro support for it, further encroaching in their business model.

Regarding TrueNAS, there's nothing broken. You can sideload both containers and VMs. You can say it's inconvenient, but again, it'll be suited for some people, not so much for others.

TCB13 ,
@TCB13@lemmy.world avatar

What stops Proxmox is the same thing “stopping” Canonical.

But Canonical is no longer a concern since Incus has nothing to do with them...

TrueNAS, there’s nothing broken.

As I said, a lot of the interesting software available via TrueCharts is broken or poorly maintained, this is sad as it would be a great solution.

Voroxpete ,

"How dare this business try to make money?!!"

Open source still has to exist within the framework of capitalism. I am all for building the fully automated luxury gay space communist utopia where people just build awesome software and release it for free all the time without ever having to worry about paying the bills (seriously, I would encourage every open-source advocate to think about how much more awesome stuff we would have if universal basic income was a thing), but that is simply not the world we're in right now. They need to keep the lights on, and that means advertising their paid services.

moonpiedumplings ,

Nothing that is more questionable than LXD, which now requires a contributor license agreement, allowing Canonical to not open-source their hosted versions, despite LXD being AGPL.

Thankfully, it's been forked as Incus, and Debian is encouraging users to migrate.

But yeah. They haven't said what makes Proxmox's license questionable.

TCB13 ,
@TCB13@lemmy.world avatar

Thankfully, it’s been forked as Incus, and Debian is encouraging users to migrate.

Yes, the people running the original LXC and LXD projects under Canonical now work on Incus under the Linux Containers initiative. Totally insulated from potential Canonical BS. :)

The move from LXD to Incus should be transparent as it guarantees compatibility for now. But even if you install Debian 12 today and LXD from the Debian repository you're already insulated from Canonical.
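And when the time to switch comes, there's a dedicated migration tool; a sketch assuming Debian 12 with backports enabled, and that lxd-to-incus is packaged in incus-tools:

```sh
sudo apt install -t bookworm-backports incus incus-tools

# Converts an existing LXD install in place: instances, images, networks, profiles
sudo lxd-to-incus
```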

kn33 ,

They're terminating in the sense that they won't sell it anymore. They're not breaking the licensing they've already sold (mostly, there was some fuckery with activating licensing they sold through third parties)

kalpol OP ,

Sort of. The activation license will work as long as you have it. They won't renew support though, which effectively kills it when the support contract runs out.

kn33 ,

You won't be able to upgrade to new versions when the support contract runs out, but you can install updates to the existing version as long as updates are made for it. This has always been the lifecycle for perpetual licensing. It's good forever, but at a certain point it becomes a security risk to continue using. The difference here is they won't sell you another perpetual license when the lifecycle is up.

0110010001100010 ,
@0110010001100010@lemmy.world avatar

Really glad I made the transition from ESXi to Docker containers about a year ago. Easier to manage too and lighter on resources. Plus upgrades are a breeze. Should have done that years ago...

kalpol OP ,

I need full on segregated machines sometimes though. I've got stuff that only runs in Win98 or XP (old radio programming software).

DoctorWhookah ,

Do you work for a railroad? That sounds too familiar.

kalpol OP ,

Lol no, just old radios. My point is just that my requirements are pretty widely varied.

tyablix ,

I'm curious what radio software you use that has these requirements?

kalpol OP ,

Old Motorolas, they really hate users.

DeltaTangoLima ,
@DeltaTangoLima@reddrefuge.com avatar

Might be time to look into Proxmox. There's a fun weekend project for you!

TCB13 ,
@TCB13@lemmy.world avatar

Save yourself time and future headaches and try LXD/Incus instead.

DeltaTangoLima ,
@DeltaTangoLima@reddrefuge.com avatar

No headaches here - running a two node cluster with about 40 LXCs, many of them using Docker, and an OPNsense VM. It's been flawless for me.

TCB13 ,
@TCB13@lemmy.world avatar

If you're already using LXC containers why are you stuck with their questionable open-source and ass of a kernel when you can just run LXD/Incus and have a much cleaner experience in a pure Debian system? Boots way faster, fails less and is more open.

Proxmox will eventually kill the free / community version, it's just a question of time and they don't offer anything particularly good over what LXD/Incus offers.

DeltaTangoLima ,
@DeltaTangoLima@reddrefuge.com avatar

I'm intrigued, as your recent comment history keeps taking aim at Proxmox. What did you find questionable about them? My servers boot just fine, and I haven't had any failures.

I'm not uninterested in genuinely better alternatives, but I don't have a compelling reason to go to the level of effort required to replace Proxmox.

TCB13 , (edited )
@TCB13@lemmy.world avatar

comment history keeps taking aim at Proxmox. What did you find questionable about them?

Here's the thing: I ran Proxmox from 2009 until the end of last year, professionally, in datacenters, with multiple clusters of around 10-15 nodes each. I've been around for all the wins and fails of Proxmox. I've seen the rise and fall of OpenVZ, all the SLES/RHEL compatibility issues, and then their move to LXC containers.

While it worked most of the time and their paid support was decent, I would never recommend it to anyone since LXD/Incus became a thing. The Proxmox PVE kernel has a lot of quirks and hacks. Besides the fact that it is built upon Ubuntu's kernel, which is already a dumpster fire of hacks (waiting for someone upstream to implement things properly so they can backport them and ditch their own implementations), they add even more garbage on top of it. I've been burned countless times by their kernel when it comes to drivers, having to wait months for fixes already available upstream, or for them to fix their own shit after they introduced bugs.

At some point not even simple things such as OpenVPN worked fine under Proxmox's kernel. Realtek networking was probably broken more often than it worked, ZFS support was introduced with guaranteed kernel panics, and upgrading between versions was always a shot in the dark: half of the time you would get a half-broken system that was able to boot and pass a few tests but would randomly fail a few days later. Their startup is slow, slower than any other solution's; it even includes daemons that are there just to ensure that other things are running (because most of them don't even start properly with the system on the first try).

Proxmox is considerably cheaper than ESXi, so some businesses use it like we did, but it's far from perfect. Eventually Canonical invested in LXC, and a very good container solution, much better than OpenVZ and co., was born. LXC got stable and widely used, and LXD came along with the higher-level hypervisor management, networking, clustering, etc. Now we have all that code truly open source, with its creators working on the project without Canonical's influence.

There's no reason to keep using Proxmox, as LXC/LXD got really good in the last few years. Once you're already running on LXC containers, why keep dragging along all the Proxmox bloat and potential issues when you can use LXD/Incus, made by the same people who made LXC, which is WAY faster, more stable, more integrated and free?

I’m not uninterested in genuinely better alternatives, but I don’t have a compelling reason to go to the level of effort required to replace Proxmox.

Well, if you have some time to spare for testing stuff, try LXD/Incus and you'll see. Maybe you won't replace all your Proxmox instances, but you'll run a mixed environment like I did for a long time.

DeltaTangoLima , (edited )
@DeltaTangoLima@reddrefuge.com avatar

OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we've been forced to work with.

But, for my self-hosted needs, Proxmox has been an absolute boon for me (I moved to it from a pure RasPi/Docker setup about a year ago).

I'm interested in having a play with LXD/Incus, but that'll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it. The former requires investment, and the latter is pretty much a one-way decision (at least, not an easy one to rollback from).

Something I need to ponder...

TCB13 ,
@TCB13@lemmy.world avatar

OK, I can definitely see how your professional experiences as described would lead to this amount of distrust. I work in data centres myself, so I have plenty of war stories of my own about some of the crap we’ve been forced to work with.

It's not just the level of distrust; it's the fact that we eventually moved all those nodes to LXD/Incus and the amount of random issues in day-to-day operations dropped to almost zero. LXD/Incus covers the same ground feature-wise (with a very few exceptions that frankly didn't work properly under Proxmox either), is free, more auditable, and performs better under the continuous high loads you expect in a datacenter.

When it performs that well in the extreme case, why not use it for self-hosting as well? :)

I’m interested in having a play with LXD/Incus, but that’ll mean either finding a spare server to try it on, or unpicking a Proxmox node to do it.

Well, you can always virtualize it under a Proxmox node so you can get familiar with it ahaha

msage ,

How is the development of LXD?

I am a huge fan of LXC, but I hate random daemons running (so no Docker for me). I have been looking at the Linux Containers website, and they mentioned Canonical taking LXD development under its wing, and something about no one else participating apart from Canonical devs.

So I'm kind of scared about the future of LXC and Incus. Do you have any more information about that?

TCB13 , (edited )
@TCB13@lemmy.world avatar

So I’m kind of scared about the future of LXC and Incus. Do you have any more information about that?

Canonical decided to take LXD away from the Linux Containers initiative and "close it" by changing the license. Meanwhile, most of the original team at Canonical that made both LXC and LXD into a real thing quit Canonical and are now working on Incus, or somehow indirectly "on" the Linux Containers initiative.

no one else participating apart from Canonical devs.

Yes, because everyone is pushing code into Incus and the team at Canonical is now very, very small and missing the key people.

The future is bright and there's money from multiple sources to make things happen. When it comes to the move from LXD to Incus, I specifically asked stgraber what's going to happen in the future to the current Debian LXD users, and this was his answer:

We’ve been working pretty closely to Debian on this. I expect we’ll keep allowing Debian users of LXD 5.0.2 to interact with the image server either until trixie is released with Incus available OR a backport of Incus is made available in bookworm-backports, whichever happens first.

As you can see, even the LTS LXD version present on Debian 12 will work for a long time. Eventually everyone will move to Incus in Debian 13 and LXD will be history.


Update: here's an important part of the Incus release announcement:

The goal of Incus is to provide a fully community led alternative to Canonical’s LXD as well as providing an opportunity to correct some mistakes that were made during LXD’s development which couldn’t be corrected without breaking backward compatibility.

In addition to Aleksa, the initial set of maintainers for Incus will include Christian Brauner, Serge Hallyn, Stéphane Graber and Tycho Andersen, effectively including the entire team that once created LXD.

msage ,

Excellent write-up, thank you very much. I'm going to invest my time in learning Incus!

MigratingtoLemmy ,

XCP-ng it is for you sir

fuckwit_mcbumcrumble ,

why are you stuck with their questionable open-source and ass of a kernel

Because you don't care about it being open source? Just working (and continuing to work) is a pretty big motivating factor to stay with what you have.

TCB13 ,
@TCB13@lemmy.world avatar

Because you don’t care about it being open source?

If you're okay with the risk of one day ending up like the people running ESXi now, then you should be okay. Let's say that not "ending up with your d* in your hand" when you least expect it is also a pretty big motivating factor to move away from Proxmox.

Now, I don't see how, in a self-hosting community on Lemmy, someone would bluntly state what you just did.

fuckwit_mcbumcrumble ,

What makes you think that can't happen to something just because it's open source? And of all companies, it's Canonical.

It's "Selfhosted" not "SelfHostedOpenSourceFreeAsInFreedom/GNU". Not everyone has drank the entire open source punch bowl.

TCB13 ,
@TCB13@lemmy.world avatar

What makes you think that can’t happen to something just because it’s open source? And of all companies, it’s Canonical.

You better review your facts.

It was originally mostly made at Canonical; however, I was NOT ever suggesting you run LXC/LXD from Canonical's repos. The solution is available in Debian's repositories, and besides, LXD was forked into Incus by the people who originally made LXC/LXD while working at Canonical; they now work full time on the Incus project, away from Canonical, keeping the solution truly open.

It’s “Selfhosted” not “SelfHostedOpenSourceFreeAsInFreedom/GNU”. Not everyone has drank the entire open source punch bowl.

Dude, I use Windows and a ton of proprietary software, I'm certainly not Richard Stallman. I simply used Proxmox for a VERY LONG time, professionally and at home, and migrated everything gradually to LXD/Incus, and it performs a LOT better. Being truly open source and not a potential CentOS/ESXi situation also helps.

TCB13 ,
@TCB13@lemmy.world avatar

Fear not, my friend. Get yourself into LXC/LXD/Incus, as it can do both containers and full virtual machines. It is available in Debian's repositories and is fully and truly open source.

eerongal ,
@eerongal@ttrpg.network avatar

I agree with the other poster; you should look into Proxmox. I migrated from ESXi to Proxmox 7-8 years ago or so, and honestly it's been WAY better than ESXi. The migration process was pretty easy too; I was able to bring over the images from ESXi and load them directly into Proxmox.
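For anyone doing the same migration today, the Proxmox side looks roughly like this; the VM ID, paths and storage name are examples:

```sh
# Create an empty VM shell, then attach the exported ESXi disk to it
qm create 120 --name migrated-vm --memory 4096 --net0 virtio,bridge=vmbr0
qm importdisk 120 /mnt/esxi-export/disk.vmdk local-lvm
qm set 120 --scsi0 local-lvm:vm-120-disk-0 --boot order=scsi0
```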

MangoPenguin ,
@MangoPenguin@lemmy.blahaj.zone avatar

If you're running a basic Linux install, you can use KVM for some VMs. Or use Proxmox for a good ESXi replacement.

TCB13 ,
@TCB13@lemmy.world avatar

Or... LXD/Incus.

TCB13 ,
@TCB13@lemmy.world avatar

So... you replaced a proprietary solution with a free one that depends on proprietary components and a proprietary distribution mechanism? Get yourself into LXC/LXD/Incus, which does both containers and VMs and is available in Debian's repositories. Or Podman, if you really like the mess that Docker is.

kalpol OP ,

I've seen you recommending this here before - what's its selling point vs say qemu-kvm? Does Incus do virtual networking without having to straight up learn iptables or whatever? (Not that there is anything wrong with iptables, I just have to choose what I can learn about)

TCB13 ,
@TCB13@lemmy.world avatar

Does Incus do virtual networking without having to straight up learn iptables or whatever?

That's just one of the things it does. It goes much further: it can create clusters; download, manage and create OS images; run backups and restores; bootstrap things with cloud-init; and move containers and VMs between servers (even live, sometimes). Another big advantage is that it provides a unified experience for dealing with both containers and VMs: no need to learn two different tools/APIs, as the same commands and options are used to manage both. Even profiles defining storage, network resources and other policies can be shared and applied across both containers and VMs.
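As a flavour of that unified CLI, the same verbs drive both instance types; the names and images here are just examples:

```sh
incus launch images:debian/12 web-ct          # a system container
incus launch images:debian/12 web-vm --vm     # a full VM: same command, plus --vm
incus exec web-ct -- apt update               # identical exec/snapshot/move workflow
incus snapshot create web-vm before-upgrade
incus move web-vm --target node2              # cluster move works for either type
```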

ziviz ,
@ziviz@lemmy.sdf.org avatar

Yay... Capitalism...

TCB13 ,
@TCB13@lemmy.world avatar

This was totally expected, even before Broadcom bought them. This is the same thing we had with CentOS/Red Hat, and the same thing that will happen with Docker/DockerHub and all the people that moved from CentOS to Ubuntu.

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

It's not capitalism, it's just business. No one is making you use it.

Damage ,

I wonder what's the future of vmware player

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

Not bright...

angelsomething ,

Bummer. Oh well, good thing I’m learning proxmox eh.

possiblylinux127 ,
@possiblylinux127@lemmy.zip avatar

Hopefully everyone migrated.

Decronym Bot , (edited )

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

ESXi: VMware virtual machine hypervisor
HA: Home Assistant automation software / High Availability
LTS: Long Term Support software version
LXC: Linux Containers
NAS: Network-Attached Storage
Plex: Brand of media server package
RPi: Raspberry Pi brand of SBC
SBC: Single-Board Computer
ZFS: Solaris/Linux filesystem focusing on data integrity

8 acronyms in this thread; the most compressed thread commented on today has 4 acronyms.

[Thread for this sub, first seen 12th Feb 2024, 20:15]

zorflieg ,

High Availability, not Home Assistant.

cyberpunk007 ,

Depends which community you ask 🤣. It was definitely high availability first, though.

redfox ,
@redfox@infosec.pub avatar

What about virtualizing Windows?

Only thing I know of is Hyper-V, but I don't think it's widely used, and MS is pushing Azure $tack, right?

cyberpunk007 ,

Hyper-V is definitely widely used...

Lots of hypervisors support Windows, e.g. Proxmox

atzanteol ,

To be pedantic, KVM is the hypervisor. Proxmox is a wrapper around it.

cyberpunk007 ,

Fair enough

Anarch157a ,
@Anarch157a@lemmy.world avatar

Being even more pedantic, KVM is the hypervisor, QEMU is a wrapper around it and Proxmox provides a management interface to it.

Socket462 ,

I tried virtualizing Windows on Proxmox and it went smoothly

MangoPenguin ,
@MangoPenguin@lemmy.blahaj.zone avatar

Anything based on KVM does great

dan ,
@dan@upvote.au avatar

Just make sure you install the virtio drivers.
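For a fresh install that usually means attaching the virtio-win driver ISO alongside the Windows installer; a virt-install sketch with placeholder paths:

```sh
virt-install --name win11 --memory 8192 --vcpus 4 \
  --cdrom /isos/Win11.iso \
  --disk size=80,bus=virtio \
  --disk path=/isos/virtio-win.iso,device=cdrom \
  --network network=default,model=virtio \
  --os-variant win11
```

Windows setup won't see the virtio disk until you load the storage driver from the second CD.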

Lettuceeatlettuce ,
@Lettuceeatlettuce@lemmy.ml avatar

XCP-ng or Proxmox if you need a bare metal hypervisor. Both open source, powerful, mature, and have large communities with lots of helpful documentation.

I think you can migrate ESXi VMs directly to XCP-ng. I moved onto it about 6 months ago and it has been solid. Steep learning curve, but really great once you get the hang of it, and enterprise-grade if you need stuff like HA clustering and complex virtual networking solutions.

Disaster ,

I managed to migrate all mine to libvirt when I dumped ESXi. They dropped support for the old Opteron I was running at the time, so I couldn't upgrade to v7. Welp, Fedora Server does just as well, and I've been moving the VM-hosted services into containers anyway.

Ofc... well, we'll see what IBM does with Red Hat. Probably something like this, eventually. They simply can't help themselves.

Moonrise2473 ,

RIP VMware.

Broadcom prefers to milk the top 500 customers with unreasonable fees rather than bother with the rest of the world. They know that nobody with a brain would intentionally start a new datacenter with VMware solutions

Anti_Iridium ,

Not anymore, that's for damn sure.

jelloeater85 ,
@jelloeater85@lemmy.world avatar

It's really sad; they used to be amazing and the go-to for running Linux VMs back in the day. Still haven't seen anyone do hardware passthrough as well.

Voroxpete ,

When big players like AWS are running KVM and XCP-NG, yeah. VMware is basically the also-ran at this point.

brygphilomena ,

Regrettably, there is currently no substitute product offered.

I really don't think you regret a God damn thing, Broadcom.

cyberpunk007 ,

If you're already running Windows, there's Hyper-V. There's Proxmox, and tons of others. So they are mistaken. 🤣

TheHolm ,
@TheHolm@aussie.zone avatar

Not all of them are in the same league. Do you know of any free type 1 hypervisors out there? Xen, probably.

cyberpunk007 ,

Proxmox, Xen, and Hyper-V are all considered type 1 as far as I'm aware.

Sethayy ,

KVM makes proxmox type 1

MangoPenguin ,
@MangoPenguin@lemmy.blahaj.zone avatar

Proxmox

jelloeater85 ,
@jelloeater85@lemmy.world avatar

I'm not sure why you're getting downvoted; you're right. I'm not sure anyone would run Proxmox as their enterprise hypervisor? I mean, Hyper-V is okay. Slim pickings for big orgs. I know there's Nutanix, but most folks are moving to the big three for VMs and hosting.

ssdfsdf3488sd ,

I am running Proxmox at a moderately sized corp. The lack of a real support contract almost kills it, which is too bad because it is a decent product.

Voroxpete ,

I assume what you're looking for specifically here is a complete platform that you can install on bare-metal, not just the actual hypervisor itself. In which case consider any of these:

  • Proxmox
  • XCP-NG
  • Windows Hyper-V Server Core (basically Windows Server Nano with Hyper-V)
  • Any Linux distro running KVM/QEMU - Add Cockpit if you need a web interface, or use Virt-Manager, either directly or over X-forwarding
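The Cockpit route is about two commands on a Debian/Ubuntu-style host (package names vary slightly between distros):

```sh
sudo apt install qemu-system libvirt-daemon-system cockpit cockpit-machines
sudo systemctl enable --now cockpit.socket
# then browse to https://<host>:9090 and use the "Virtual machines" tab
```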
Anarch157a ,
@Anarch157a@lemmy.world avatar

Any Linux distro running KVM/QEMU - Add Cockpit if you need a web interface, or use Virt-Manager, either directly or over X-forwarding

No need for X forwarding, you can connect Virt-Manager to a remote system that has libvirt,
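e.g. over SSH, no X forwarding involved (user/host are placeholders):

```sh
virt-manager --connect qemu+ssh://user@host/system

# the same URI works for the CLI tooling:
virsh --connect qemu+ssh://user@host/system list --all
```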

Voroxpete ,

This is true, but not everyone gets to use a Linux system as their main desktop at work. I'm not aware of a Windows version of virt-manager, but if that exists it would be fucking rad.

CazRaX ,

They mean that they aren't offering another solution.

cyberpunk007 ,

I know, but this is the way I read it when they claim to give no option.

sj_zero ,

The most important thing for everyone to remember is that if you don't fully own the thing such that you can install and run it without asking permission, or if it isn't simply free and open source, then it can go away at any time.

brickfrog ,

Sucks but not surprising. Broadcom has a history of doing things like this, ugh. Even with their paid products they jack up the price so much that the only customers that stick around are the business enterprise types that are locked in & can't easily migrate for various reasons.

PlasterAnalyst ,

ESXi sucks

jelloeater85 ,
@jelloeater85@lemmy.world avatar

Have you ever used it?

PlasterAnalyst ,

I don't own an IBM mainframe so.
