As a general principle, full-extension rails are best sourced from the original vendor rather than attempting to use universal rails.
If you have a wall-mounted rack on drywall, physics is working against you. It's already a pretty intense, heavy cantilever, and putting a server in there that can extend past the front edge is only going to make that worse.
If you want to use full extension rails, you should get a rack that can sit squarely on the floor on either feet or appropriately rated casters. You should also make sure your heaviest items are on the bottom ESPECIALLY if you have full extension rails - it will make the rack less likely to overbalance itself and tip over when the server is extended.
Pesky physics strikes again! Haha. I do have the rack lag bolted to my basement foundation, so it should support a fair amount of weight… but “should” and “does” are different words for a reason 😂
Fair - there are ways to handle it. I didn't want to include specifics since I'm not a professional contractor for this sort of thing, but I should have indicated that there are exceptions.
Hopefully the rack is mounted directly into the studs or concrete. I've seen them crush gypsum between the stud and paint too...
I'm not much help, but they all suck. I've bought probably 2k worth of universal sliding rails to attempt with various servers. I don't know why but none of them fit properly. None of them slide properly. They are all just annoying. I gave up with them. I bought the OE sliding rail kits for my various servers and magically everything works perfectly.
For servers where I want to move them in and out but don't want sliding rails, I just buy those universal L bracket type mounts. The servers slide well enough on them, they're just powder coated bent steel. And a single screw from the front panel into the rack keeps it secure enough since the weight is being held by the support, not its own ears.
That’s solid info, thanks! Looks like the OEM rails aren’t “technically” designed for this model, or the one they have won’t fit my space. I have maybe 22” until I hit the wall, and most of the rail kits SilverStone has seem to be 22” at minimum, which wouldn’t account for any rear vent and/or power cord space - if it would even fit on my 20” rack at all.
FWIW I do have the rack lag bolted in 4 spots to a concrete basement foundation.
a fully extended chassis on rails in a wall mount anything (frame or enclosure) is going to place an extreme amount of pull force on the wall attachment points.
I would personally not place anything but a static, fixed load into a wall mount.
equipment on rails is a lifesaver and, if you really want to do it, consider a freestanding enclosure thats designed to take deep servers, extended loads and has anti-tip features.
I guess that’s a very valid point. The rack is bolted into concrete with 4x 5” lag bolts FWIW, so I just kind of assumed that’d be fine? But I suppose physics may not be kind to me, considering the machine was already heavy enough just hefting it into place…
lag bolts into shields into concrete may be secure if it's done really carefully. it still leaves possible issues with the frame integrity - there are quite a few low quality frames and cabinets out there, and the mechanical stress on those vertical rails and all of the connection points in-between when equipment is extended on rails is no joke.
I am used to datacentre grade mounting gear (even in my home lab), so I am a bit spoiled. however... take a look at Rack Solutions for harder-to-find quality mounts, rails and adapters. a source for excellent quality steel open racks/frames and enclosures is x-mark (now owned by belden). thats the stuff I use for myself.
edit: as was mentioned in another comment, OEM rails are almost always your best bet, however high quality 4-post sliding shelves have saved my butt on occasion. Rack Solutions also offers those.
Not seen that one before, but I'm familiar with the concept. I'm working on something for myself that'll go into our will prep when we finally get around to it.
I'm currently migrating all sorts of stuff to Proxmox.
Nice thing is, VMs and containers are easily copied with the systems off. I even did a P-to-V of an ancient Win7 machine and am reusing that hardware for Proxmox; I'll run the VM in Proxmox until I get everything cleaned up and restructured.
I migrated from a mix of proxmox, hyper-V, bare metal, and Synology hosted docker onto a full k8s cluster.
It is much easier to manage now, including adding or replacing nodes - that even covered a rebuild of the cluster from 7 RPis onto 7 EliteDesk mini PCs (from ARM to x86 and from Debian to Talos).
But it wasn't a small process either.
You'll have to deploy your k8s cluster, learn how to host the services you want (using a load balancer, DNS setup, cluster IPs, etc), and set up a storage provider (I use NFS to my Synology share - not the fastest or most secure, but the easiest).
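For what it's worth, the NFS-to-NAS approach can be as simple as a static PV/PVC pair, no CSI driver required. A sketch, where the server IP, export path, and size are placeholder assumptions:

```yaml
# Hypothetical static NFS PersistentVolume pointing at a NAS export.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nas-nfs-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.10   # placeholder NAS address
    path: /volume1/k8s     # placeholder export path
---
# Claim that binds explicitly to the PV above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-nfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""     # opt out of dynamic provisioning
  volumeName: nas-nfs-pv
  resources:
    requests:
      storage: 50Gi
```

A dynamic provisioner (e.g. nfs-subdir-external-provisioner) saves the per-service boilerplate, but static PVs like this are the simplest starting point.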
And then you'll need to migrate your services off the old hardware onto the cluster one by one... Which means learning docker and k8s and how they work together.
There are some things that I cannot host on the cluster like zwave2mqtt which requires a physical location centralized in my house and access to a USB zwave adapter. So even then not quite 100% ended up on the cluster, it runs on docker on an rpi though. (Technically you can do this if you pin the container to a single host and pass through the USB device, but I didn't see a reason for it.)
But, service upgrades, adding new services now that I'm used to it is very easy... Expanding compute is also pretty easy. So maintenance has gone down a bunch. But it was also a decent amount of work and learning to get there.
K8s is relatively specialized knowledge compared to the general computer literate population that knows how computers generally work... So in terms of someone being able to take over your work, if they already know k8s, then it would be reasonably easy. If they don't but are savvy enough to learn it would take a bit but not be too bad. If someone doesn't already know their way around Linux and a terminal, it would probably not be possible for them to pick it up in a reasonable amount of time though.
I'm currently running a hypervisor lab to test stuff for friends in the SMB IT space to find a replacement for VMware. At the moment, Proxmox has the best cost/flexibility/ease of learning, but if Kubernetes is more mature and has better support, that would be a great argument for it.
Proxmox is going to be a lot easier to pick up if you’re coming from vmware. Kubernetes is a beast with a considerable learning curve so if you’re not familiar with it already then I wouldn’t recommend it for a lab environment (unless the goal is specifically to learn it).
I've seen a few people who run Proxmox on the bare metal with k8s running inside VMs (or containers) on Proxmox. I'm not sure if I should just go full bare-metal k8s or have a Proxmox (or other?) intermediate layer...
You can run it on proxmox if you want to mix non k8s machines onto the same hardware. All my k8s nodes are dedicated to running k8s only though, so there is no reason for me to have that extra step.
I would not run k8s on Proxmox just so you can run multiple nodes on the same machine, though; the only reason I could really see to do that is if you only had one machine and you really wanted to keep your controller and worker nodes separate.
I’ve migrated most of my lab from a mess of proxmox lxcs over to k3s (I use k8s at work), except for home assistant. I’ve been back and forth on that one. I really like being able to back up the entire vm before running updates or whatever. Could you use a node selector to force zwave or zigbee or whatever to run on the node that has the usb device? Or is it still a pain in the ass that way cause you have to know the path on the specific host… I haven’t tried that yet.
You can pin the pod to a specific node and pass through the USB device path and that will work. But the whole point of k8s is redundancy and workloads running anywhere.
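For completeness, the pin-and-passthrough option might look like this; the node hostname, image, and device path below are all assumptions for illustration, not values from my setup:

```yaml
# Hypothetical pod pinned to the one node with the Z-Wave stick attached.
apiVersion: v1
kind: Pod
metadata:
  name: zwave
spec:
  nodeSelector:
    kubernetes.io/hostname: node-with-zwave   # assumed node name
  containers:
    - name: zwave
      image: zwavejs/zwave-js-ui:latest       # illustrative image choice
      securityContext:
        privileged: true                      # simplest way to reach a host device
      volumeMounts:
        - name: zwave-stick
          mountPath: /dev/zwave
  volumes:
    - name: zwave-stick
      hostPath:
        path: /dev/serial/by-id/usb-0658_0200-if00  # assumed device path on that host
```

The by-id path is worth using over /dev/ttyUSB0 so the mapping survives reboots, but as noted, pinning like this gives up most of what k8s buys you.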
Plus for IOT networks like zigbee and zwave, controller position in your house is important. If your server is more centrally located that may not be a concern for you.
I've heard of some using a USB serial over Ethernet device to relocate their controller remotely but i haven't looked into that. Running this one off rpi for the controller just made more sense for me.
Makes sense, thanks. Yeah idk about usb serial over Ethernet, it’s an interesting idea but I wouldn’t want to introduce more moving parts (and/or latency) to the network.
Any tips you can give for someone who is running k8s on rpi4s and wants to switch architectures? Sounds like you did something similar and while my rpis are holding strong, I want something with a little more power like a few N100 based micro pcs.
All the images I used already had x86 variants available. In fact, I was building and pushing my own arm variants for a few images to my own Nexus repository which I've stopped since they aren't necessary anymore.
If you are using arm only images, you'll need to build your own x86 variants and host them.
I created a brand new cluster from scratch and then set up the same storage PVs/PVCs and namespaces.
Then I'd delete the workloads from the old cluster and apply the same yaml to the new cluster, and then update my DNS.
I used kubectx to swap between them.
Once I verified the new service was working I'd move to the next. Since the network storage was the same it was pretty seamless. If you're using something like rook to utilize your nodes disks as network storage that would be much more difficult.
After everything was moved I powered down the old cluster and waited a few weeks before I wiped the nodes. In case I needed to power it up and reapply a service to it temporarily.
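The per-service cutover loop, sketched with made-up context and directory names:

```shell
# Bring the service up on the new cluster.
kubectx new-cluster
kubectl apply -f media-server/

# Switch back and retire the old copy.
kubectx old-cluster
kubectl delete -f media-server/

# Then repoint the DNS record at the new cluster's ingress/load-balancer IP
# and verify before moving to the next service.
```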
My old cluster was k8s on raspbian but my new one was all Talos. I also moved from single control plane to 3 machines control plane. (Which is completely unnecessary, but I just wanted to try it). But that had no effect on any services.
I've used virtio on Nutanix before and, using iperf instead of OpenSpeedTest, measured line rate across hosts.
However, I also know network cards matter a lot. Some network cards, especially the cheap Intel X710, suck: they can't do certain compute offloads, so the host CPU itself processes the network traffic, significantly slowing throughput.
My change to Mellanox 25G cards brought all VM network performance up to the expected line rate, even on the same host.
That was not a home lab though, that was production at a client.
Edit sorry I meant to wrap up:
to test, use iperf (you could use UDP at 10Gbit and run it continuously; in UDP mode you need to set the bandwidth you're trying to send)
while testing, watch CPU usage on the host
If you want to rule out Proxmox, you could live-boot another Linux from USB and test iperf over the LAN to another device.
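That test might look like this with iperf3 (the address is a placeholder):

```shell
# On the receiving VM or host (placeholder IP 10.0.0.2):
iperf3 -s

# On the sending side: TCP first, for a baseline.
iperf3 -c 10.0.0.2 -t 30

# Then UDP at a fixed offered rate, since UDP won't ramp up on its own.
iperf3 -c 10.0.0.2 -u -b 10G -t 600

# Meanwhile, watch per-core CPU on the Proxmox host; a single core pegged
# at 100% often explains a throughput ceiling.
htop
```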
My guess is there is a "glitch" somewhere in the middle. If not then it might be SMB or your drive speeds.
Can you try doing a speed check in between hosts? Also, I would make sure that the networking is paravirtualized properly. You also could try swapping out your network cables.
When I use OpenSpeedTest to test to another VM, it doesn't read or write from the HDD, and it doesn't leave the Proxmox NIC. It's all direct from one VM to another. The only limitations are CPU and perhaps RAM. Network cables wouldn't have any effect on this.
I'm using VirtIO (paravirtualized) for the NICs on all my VMs. Are there other paravirtualization options I need to be looking into?
I don't have a lot of experience with high speeds, but as things get faster there tends to be exponential overhead. I think you should try mounting the network share on the Proxmox host to test speed without the complexity of the VMs. If you get the results you are looking for, then you are good; but if it is bottlenecked there, the bottleneck is the NAS or SMB. SMB is particularly hard to overcome, as it seems to be slow no matter what you do.
VLANs are layer 2; they don't really need things like a router to work, should be set up at the switch/layer-2 level, and are not routable across the internet. Based on the specifications you've stated, you want to make a DMZ, which is a lot different and will require knowledge of IP subnets, access control lists, and wildcard masks. It's not much harder to do, but it is different from what you're doing - you wouldn't use a hammer for a screw.
To lay some foundation, a VLAN is akin to a separate network with separate Ethernet cables. That provides isolation between machines on different VLANs, but it also means each VLAN must be provisioned with routing, so as to reach destinations outside the VLAN.
Routers like OpenWRT often treat VLANs as if they were distinct NICs, so you can specify routing rules such that traffic to/from a VLAN can only be routed to WAN and nowhere else.
At a minimum, for an isolated VLAN that requires internet access, you would have to:
- define an IP subnet for your VLAN (e.g. a /24 for IPv4 and a /64 for IPv6)
- advertise that subnet (DHCP for IPv4 and SLAAC for IPv6)
- route the subnet to your WAN (NAT for IPv4; ideally no NAT66 for IPv6)
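The subnet-definition step can be explored with Python's stdlib ipaddress module; the subnets below are made-up examples, not a recommendation:

```python
import ipaddress

# Hypothetical isolated IoT VLAN; the subnet is an illustrative choice.
vlan_subnet = ipaddress.ip_network("192.168.50.0/24")

# The router's interface on the VLAN conventionally takes the first usable address.
hosts = list(vlan_subnet.hosts())
gateway = hosts[0]
print(gateway)                        # 192.168.50.1

# A DHCP pool might then hand out the remaining addresses.
dhcp_range = (hosts[1], hosts[-1])
print(dhcp_range[0], dhcp_range[1])   # 192.168.50.2 192.168.50.254

# For IPv6, a /64 is what SLAAC expects; ULA space used here as an example.
vlan6 = ipaddress.ip_network("fd00:50::/64")
print(vlan6.num_addresses == 2**64)   # True
```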
Starting with brass tacks, the way I'm reading the background info, your ISP was running fibre to your property, and while they were there, you asked them to run an additional, customer-owned fibre segment from your router (where the ISP's fibre has landed) to your server further inside the property. Both the ISP segment and this interior segment of fibre are identical single-mode fibres. The interior fibre segment is 30 meters.
Do I have that right? If so, my advice would be to identify the wavelength of that fibre, which can be found printed on the outer jacket. Do not rely on just the color of the jacket, and do not rely on whatever connector is terminating the fibre. The printed label is the final authority.
With the fibre's wavelength, you can then search online for transceivers (xcvrs) that match that wavelength and the connector type. Common connectors in a data center include LC duplex (very common), SC duplex (older), and MPO (newer). 1310 and 1550 nm are common single mode wavelengths, and 850 and 1300 nm are common multimode wavelengths. But other numbers are used; again, do not rely solely on jacket color. Any connector can terminate any mode of fibre, so you can't draw any conclusions there.
For the xcvr to operate reliably and within its design specs, you must match the mode, wavelength, and connector (and its polish). However, in a homelab, you can sometimes still establish link with mismatching fibres, but YMMV. And that practice would be totally unacceptable in a commercial or professional environment.
Ultimately, it boils down to link losses, which are high if there's a mismatch. But for really short distances, the xcvrs may still have enough power budget to make it work. Still, this is not using the device as intended, so you can't blame them if it one day stops working. As an aside, some xcvrs prescribe a minimum fibre distance, to prevent blowing up the receiver on the other end. But this really only shows up on extended distance, single mode xcvrs, on the order of 40 km or more.
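To make the power-budget idea concrete, here's a back-of-the-envelope calculation; every number is an illustrative assumption, not a spec for any particular xcvr:

```python
# Rough optical link budget for a short run. All figures are assumed
# typical values for the sake of illustration.
tx_power_dbm = -8.2         # assumed minimum launch power of a 10G LR-class xcvr
rx_sensitivity_dbm = -14.4  # assumed receiver sensitivity

# Budget: how many dB of loss the link can absorb and still work.
power_budget_db = tx_power_dbm - rx_sensitivity_dbm

# Losses over an assumed 30 m run:
fibre_loss_db = 0.030 * 0.35   # 0.35 dB/km attenuation at 1310 nm, 30 m = 0.030 km
connector_loss_db = 2 * 0.75   # two mated connector pairs at 0.75 dB each
mismatch_penalty_db = 0.5      # rough extra penalty for mismatched polish

total_loss_db = fibre_loss_db + connector_loss_db + mismatch_penalty_db
margin_db = power_budget_db - total_loss_db
print(round(margin_db, 2))  # still comfortably positive at 30 m
```

That leftover margin is why short mismatched runs often still link up, while the same sloppiness over kilometers would not.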
Finally, multimode is not dead. Sure, many people believe it should be deprecated for greenfield applications. I agree. But I have also purchased multimode fibre for my homelab, precisely because I have an obscene number of SFP+ multimode, LC transceivers. The equivalent single mode xcvrs would cost more than $free so I just don't. Even better, these older xcvrs that I have are all genuine name-brand, pulled from actual service. Trying to debug fibre issues is a pain, so having a known quantity is a relief, even if it means my fibre is "outdated" but serviceable.
Regarding future proofing, I would say that anyone laying single pairs of fibres is already going to constrain themselves when looking to the future. Take 100 Gbps xcvrs as an example: some use just the single pair (2 fibres total) to do 100 Gbps, but others use four pairs (8 fibres total) driving each at just 25 Gbps.
The latter are invariably cheaper to build, because 25 Gbps has been around for a while now; they're just shoving four optical paths into one xcvr module. But 100 Gbps on a single fiber pair? That's going to need something like DWDM which is both expensive and runs into fibre bandwidth limitations, since a single mode fibre is only single-mode for a given wavelength range.
So unless the single pair of fibre is the highest class that money can buy, cost and technical considerations may still make multiple multimode fibre cables a justifiable future-looking option. Multiplying fibres in a cable is likely to remain cheaper than advancing the state of laser optics in severely constrained form factors.
Naturally, a multiple single-mode cable would be even more future proofed, but at that point, just install conduit and be forever-proofed.
> Regarding future proofing, I would say that anyone laying single pairs of fibres is already going to constrain themselves when looking to the future.
That could be right, but it depends on what people can run in their conduits. I was lucky to be able to pull those 2 cables.
On the other hand, this is a rented apartment that I will soon leave :D
> Do I have that right? If so, my advice would be to identify the wavelength of that fibre, which can be found printed on the outer jacket. Do not rely on just the color of the jacket, and do not rely on whatever connector is terminating the fibre. The printed label is the final authority.
You got it right!
On the cable unfortunately there is no wavelength printed (it's a cable made for my ISP), but I've read on a forum (that talks about this ISP):
> GPON adopts WDM to transmit data of different upstream/downstream wavelengths over the same ODN. Wavelengths range from 1290 - 1330 nm in the upstream direction and from 1480 - 1500 nm in the downstream direction.
Edit: In the meantime, do you have any 2.5GbE PCIe card that you suggest (I need it to connect OPNsense to the ONT via PPPoE)? I've found only the QNAP QXG-2G1T-I225, which costs about 75€, or the Edimax EN-9225TX-E for about 41€ (but I haven't read much about that one).
In my first draft of an answer, I thought about mentioning GPON but then forgot. But now that you mention it, can you describe if the fibres they installed are terminated individually, or are paired up?
GPON uses just a single fibre for an entire neighborhood, whereas connectivity between servers uses two fibres, which are paired together as a single cable. The exception is for "bidirectional" xcvrs, which like GPON use just one fibre, but these are more of a stopgap than something voluntarily chosen.
Fortunately, two separate fibres can be paired together to operate as if they were part of the same cable; this is exactly why the LC and SC connectors come in a duplex (aka side-by-side) format.
In data center and server networking, though, I have never seen anything but UPC, and that's what xcvrs will expect, with tiny exceptions or if they're GPON xcvrs.
So I need to correct my previous statement: to be fully functional as designed, the fiber and xcvr must match all of: wavelength, mode, connector, and the connector's polish.
The good news is that this should mostly be moot for your 30 meter run, since even with the extra losses from mismatched polish, it should still link up.
As for that xcvr, please note that it's an LRM, or Long Range Multimode xcvr. Would it probably work at 30 meters? Probably. But an LR xcvr that is single mode 1310 nm would be ideal.
Thanks for the precise details!
A single fibre with an SC/APC connector arrives at the ONT, but this is not a problem since I will be using the provided ONT and its 2.5Gb copper port to connect it to OPNsense (hence looking for a 2.5Gb PCIe card).
The 2 fibres that I asked them to run (from OPNsense to the server) are terminated with the same SC/APC connectors, and I was thinking about using this SC female/female adapter and this SC/APC-to-LC cable... which I've just realized is still APC. I'll have a look for SC/APC to LC/UPC cables.
> As for that xcvr, please note that it's an LRM, or Long Range Multimode xcvr. Would it probably work at 30 meters? Probably. But an LR xcvr that is single mode 1310 nm would be ideal.
Is it an LRM? Damn, I didn't realize, since I'd filtered for single mode. If the filter doesn't work, I've no idea which ones are LR. Would you be so kind as to point me to a cheap single-mode one?
I've only looked briefly into APC/UPC adapters, although my intention was to do the opposite of your scenario. In my case, I already had LC/UPC terminated duplex fibre through the house, and I want to use it to move my ISP's ONT closer to my networking closet. That requires me to convert the ISP's SC/APC to LC/UPC at the current terminus, then convert it back in my wiring closet. I hadn't gotten past the planning stage for that move, though.
Although your ISP was kind enough to run this fibre for you, the price of 30 meters LC/UPC terminated fibre isn't terribly excessive (at least here in USA), so would it be possible to use their fibre as a pull-string to run new fibre instead? That would avoid all the adapters, although you'd have to be handy and careful with the pull forces allowed on a fibre.
But I digress. On the xcvr choice, I don't have any recommendations, as I'm on mobile. But one avenue is to look at a reputable switch manufacturer and find their xcvr list. The big manufacturers (Cisco, HPE/Aruba, etc) will have detailed spec sheets, so you can find the branded one that works for you. And then you can cross-reference that to cheaper, generic, compatible xcvrs.
> Although your ISP was kind enough to run this fibre for you, the price of 30 meters LC/UPC terminated fibre isn't terribly excessive (at least here in USA), so would it be possible to use their fibre as a pull-string to run new fibre instead? That would avoid all the adapters, although you'd have to be handy and careful with the pull forces allowed on a fibre.
The problem is the installation of the connectors. They've fusion-spliced the SC/APC pigtails onto the fibre; I wouldn't be able to do that.
> The big manufacturers (Cisco, HPE/Aruba, etc) will have detailed spec sheets, so you can find the branded one that works for you. And then you can cross-reference that to cheaper, generic, compatible xcvrs.
That would be very, very generous of you; when it comes to fibre I'm pretty ignorant, and I'm worried about purchasing the wrong items 🙈
I quickly looked up the HPE/Aruba transceiver document, and starting on page 61 is the table of SFP+ transceivers, specifically describing the frequency and mode. At least from their transceivers, J9151A, J9151E, JL749A, and JL783A would work for your single-mode, 1310 nm needs.
You will have to do additional research to find generic parts which are equivalent to those transceivers. Good luck in your endeavors!
In some ways, I kinda despise the 802.3bz specification for 2.5 and 5 Gbps on twisted pair. It came into existence after 10 Gbps twisted-pair was standardized, and IMO exists only as a reaction to the stubbornly high price of 10 Gbps ports and the lack of adoption -- 1000 Mbps has been a mainstay and is often more than sufficient.
802.3bz is only defined for twisted pair and not fibre. So there aren't too many xcvrs that support it, and even fewer SFP+ ports will accept such xcvrs. As a result, the cheap route of buying an SFP+ card and a compatible xcvr is essentially off-the-table.
The only 802.3bz compatible PCIe card I've ever personally used is an Aquantia AQN-107 that I bought on sale in 2017. It has excellent support in Linux, and did do 10 Gbps line rate by my testing.
That said, I can't imagine that cards that do only 2.5 Gbps would somehow be less performant. 2.5 Gbps hardware is finding its way into gaming motherboards, so I would think the chips are mature enough that you can just buy any NIC and expect it to work, just like buying a 1000 Mbps NIC.
BTW, some of these 802.3bz NICs will eschew 10/100 Mbps support, because of the complexity of retaining that backwards compatibility. This is almost inconsequential in 2024, but I thought I'd mention it.