
litchralee

@litchralee@sh.itjust.works


litchralee ,

It never ceases to amaze me how prolific PowerPC/PowerISA was (still is?) in the embedded space

litchralee , (edited )

Agreed. When I was fresh out of university, my first job had me debugging embedded firmware for a device which had both a PowerPC processor as well as an ARM coprocessor. I remember many evenings staring at disassembled instructions in objdump, as well as getting good at endian conversions. This PPC processor was in big-endian and the ARM was little-endian, which is typical for those processor families. We did briefly consider synthesizing one of them to match the other's endianness, but this was deemed to be even more confusing haha
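The kind of conversion I was doing can be sketched in a few lines of Python (the 32-bit value here is made up; `struct`'s `>` and `<` format prefixes select big- and little-endian byte order):

```python
import struct

# A 32-bit value, as a PPC (big-endian) and an ARM (little-endian) would store it
value = 0x12345678

big = struct.pack(">I", value)     # b'\x12\x34\x56\x78'
little = struct.pack("<I", value)  # b'\x78\x56\x34\x12'

# Reading big-endian bytes as if they were little-endian gives a swapped number,
# exactly the sort of mistake that shows up when staring at objdump output
misread = struct.unpack("<I", big)[0]
assert misread == 0x78563412

# int.from_bytes does the same job without a format string
assert int.from_bytes(big, "big") == value
```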

litchralee ,

Your primary issue is going to be the power draw. If your electricity supplier has cheap rates, or if you have an abundance of solar power, then it could maybe find life as some sort of traffic analyzer or honeypot.

But I think even finding a PCI NIC nowadays will be rather difficult. And that CPU probably doesn't have any sort of virtualization extensions to make it competitive against, say, a Raspberry Pi 5.

litchralee ,

Do you have a reference for "class 3 e-scooters"? My understanding of the California Vehicle Code is that the class system only applies to bicycles with pedals, per CVC 312.5.

Whereas e-scooters -- the things that Bird and Lime rent through their app -- exist under CVC 407.5, which previously covered the older, gasoline-powered 50 cc types of scooters. But apparently the law has now completely written out the gas-powered ones, only mentioning electric-powered "motorized scooters".

Strictly speaking, there isn't a requirement in the law for e-scooters to have a speed governor, whereas ebikes must have one, either 20 mph (32 kph) or 28 mph (45 kph). Instead, riders of e-scooters are subject to a speed limit of 15 mph (25 kph), a holdover from the days of the gas-powered scooters.

The key distinction here is that an ebike over-speeding beyond its class rating is an equipment violation, akin to an automobile without operational brake lights. But an e-scooter over-speeding beyond 15 mph is a moving violation, potentially incurring points on the rider's driving license -- if they have one -- and can impact auto insurance rates, somewhat bizarrely.

I'm not saying CA law is fair to e-scooters -- it's not -- but I can't see a legal scenario where an e-scooter can overtake an ebike rider if both are operating at full legal limits.

litchralee ,

Ah, now I understand what you mean. Yes, the stock C80 would indeed legally be a Class 2 ebike in California, by virtue of its operable pedals, whether or not it's actually practical to use the pedals. That the marketing material suggests the C80 is used primarily with its throttle is no different than other Class 2 ebikes which are often ridden throttle-only, as many city dwellers have come to fear.

As for the unlock to Class 3, I wonder how they do that: California's Class 3 does not allow throttle-only operation, requiring some degree of pedal input.

The spectrum of two-wheelers in California includes: bicycles, ebikes (class 1, 2, 3), scooters, mopeds (CVC 406), motor-driven cycles, and motorcycles (aka motorbikes; CVC 400).

The "moped" category, one nearly forgotten since the 1970s, has seen a resurgence: the now-updated law recognizes 30 mph, electric, 4 HP (3 kW) max two- or three-wheelers. These mopeds are street legal and bike lane legal, with no annual registration and no insurance requirement, but they do need an M1/M2 license. These CVC 406 mopeds are not freeway legal, but darn if they're not incredibly useful for in-town riding.

I could get myself an electric dirt bike and plates for it, 100% legally.

litchralee ,

To lay some foundation, a VLAN is akin to a separate network with separate Ethernet cables. That provides isolation between machines on different VLANs, but it also means each VLAN must be provisioned with routing, so as to reach destinations outside the VLAN.

Routers like OpenWRT often treat VLANs as if they were distinct NICs, so you can specify routing rules such that traffic to/from a VLAN can only be routed to WAN and nowhere else.

At a minimum, for an isolated VLAN that requires internet access, you would have to

  • define an IP subnet for your VLAN (e.g. a /24 for IPv4 and a /64 for IPv6)
  • advertise that subnet (DHCP for IPv4 and SLAAC for IPv6)
  • route the subnets to your WAN (NAT for IPv4; ideally no NAT66 for IPv6)
  • and finally enable firewalling

As a reminder, NAT and NAT66 are not firewalls.
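As a sketch of the subnet-definition step, here's how the numbers work out for an example VLAN (the prefixes below are made up; Python's `ipaddress` module does the arithmetic):

```python
import ipaddress

# Hypothetical subnets for an isolated VLAN (prefixes are invented for illustration)
vlan_v4 = ipaddress.ip_network("10.20.30.0/24")
vlan_v6 = ipaddress.ip_network("fd00:20:30::/64")

# The router conventionally takes the first usable host address in each subnet
gateway_v4 = next(vlan_v4.hosts())  # 10.20.30.1
gateway_v6 = next(vlan_v6.hosts())  # fd00:20:30::1

# DHCP would lease out the remainder of the /24; SLAAC lets hosts
# self-assign within the advertised /64
assert vlan_v4.num_addresses == 256
assert vlan_v6.prefixlen == 64
```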

litchralee , (edited )

Getting down to brass tacks: the way I'm reading the background info, your ISP was running fibre to your property, and while they were there, you asked them to run an additional, customer-owned fibre segment from your router (where the ISP's fibre has landed) to your server further inside the property. Both the ISP segment and this interior segment of fibre are identical single-mode fibres. The interior fibre segment is 30 meters.

Do I have that right? If so, my advice would be to identify the wavelength of that fibre, which can be found printed on the outer jacket. Do not rely on just the color of the jacket, and do not rely on whatever connector is terminating the fibre. The printed label is the final authority.

With the fibre's wavelength, you can then search online for transceivers (xcvrs) that match that wavelength and the connector type. Common connectors in a data center include LC duplex (very common), SC duplex (older), and MPO (newer). 1310 and 1550 nm are common single mode wavelengths, and 850 and 1300 nm are common multimode wavelengths. But other numbers are used; again, do not rely solely on jacket color. Any connector can terminate any mode of fibre, so you can't draw any conclusions there.

For the xcvr to operate reliably and within its design specs, you must match the mode, wavelength, and connector (and its polish). However, in a homelab, you can sometimes still establish link with mismatching fibres, but YMMV. And that practice would be totally unacceptable in a commercial or professional environment.

Ultimately, it boils down to link losses, which are high if there's a mismatch. But for really short distances, the xcvrs may still have enough power budget to make it work. Still, this is not using the device as intended, so you can't blame them if it one day stops working. As an aside, some xcvrs prescribe a minimum fibre distance, to prevent blowing up the receiver on the other end. But this really only shows up on extended distance, single mode xcvrs, on the order of 40 km or more.
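To put rough numbers on that power-budget argument, here's a back-of-the-envelope calculation; every dBm/dB figure below is illustrative, not from any particular transceiver datasheet:

```python
# Illustrative optical link budget (all numbers are assumptions, not datasheet values)
tx_power_dbm = -5.0         # assumed transmitter launch power
rx_sensitivity_dbm = -20.0  # assumed receiver sensitivity

connector_loss_db = 0.5     # per mated connector pair
connectors = 2
fibre_loss_db_per_km = 0.4  # typical single-mode attenuation at 1310 nm
distance_km = 0.03          # a 30 m run

mismatch_penalty_db = 6.0   # rough extra loss from a mode/polish mismatch

total_loss = (connectors * connector_loss_db
              + fibre_loss_db_per_km * distance_km
              + mismatch_penalty_db)

# Positive margin means the link can still come up despite the mismatch;
# over tens of kilometers, the fibre loss term would eat this margin entirely
margin_db = tx_power_dbm - total_loss - rx_sensitivity_dbm
assert margin_db > 0
```

Notice that at 30 meters the fibre attenuation itself is negligible; it's the fixed mismatch penalty that dominates, which is why short runs sometimes "get away with it".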

Finally, multimode is not dead. Sure, many people believe it should be deprecated for greenfield applications. I agree. But I have also purchased multimode fibre for my homelab, precisely because I have an obscene number of SFP+ multimode, LC transceivers. The equivalent single mode xcvrs would cost more than $free so I just don't. Even better, these older xcvrs that I have are all genuine name-brand, pulled from actual service. Trying to debug fibre issues is a pain, so having a known quantity is a relief, even if it means my fibre is "outdated" but serviceable.

litchralee ,

Regarding future proofing, I would say that anyone laying single pairs of fibres is already going to constrain themselves when looking to the future. Take 100 Gbps xcvrs as an example: some use just the single pair (2 fibres total) to do 100 Gbps, but others use four pairs (8 fibres total) driving each at just 25 Gbps.

The latter are invariably cheaper to build, because 25 Gbps has been around for a while now; they're just shoving four optical paths into one xcvr module. But 100 Gbps on a single fiber pair? That's going to need something like DWDM which is both expensive and runs into fibre bandwidth limitations, since a single mode fibre is only single-mode for a given wavelength range.

So unless the single pair of fibre is the highest class that money can buy, cost and technical considerations may still make multiple multimode fibre cables a justifiable future-looking option. Multiplying fibres in a cable is likely to remain cheaper than advancing the state of laser optics in severely constrained form factors.

Naturally, a multiple single-mode cable would be even more future proofed, but at that point, just install conduit and be forever-proofed.

litchralee ,

In my first draft of an answer, I thought about mentioning GPON but then forgot. But now that you mention it, can you describe if the fibres they installed are terminated individually, or are paired up?

GPON uses just a single fibre for an entire neighborhood, whereas connectivity between servers uses two fibres, which are paired together as a single cable. The exception is for "bidirectional" xcvrs, which like GPON use just one fibre, but these are more of a stopgap than something voluntarily chosen.

Fortunately, two separate fibres can be paired together to operate as if they were part of the same cable; this is exactly why the LC and SC connectors come in a duplex (aka side-by-side) format.

But if the ISP does GPON, they may have terminated your internal fibre run using SC, which is very common in that industry. But there's a thing with GPON specifically, where the industry has moved to polishing the fiber connector ends with an angle, known as Angled Physical Contact (APC) and marked with green connectors, versus the older Ultra Physical Contact (UPC) that has no angle. The benefit of APC is to reduce losses in the ISP's fibre plant, which helps improve services.

Whereas in data center and networking, I have never seen anything but UPC, and that's what xcvrs will expect, with only rare exceptions such as GPON xcvrs.

So I need to correct my previous statement: to be fully functional as designed, the fiber and xcvr must match all of: wavelength, mode, connector, and the connector's polish.

The good news is that this should mostly be moot for your 30 meter run: the link should still come up despite the extra losses from a mismatched polish.

As for that xcvr, please note that it's an LRM, or Long Reach Multimode xcvr. Would it work at 30 meters? Probably. But an LR xcvr that is single mode 1310 nm would be ideal.

litchralee ,

I've only looked briefly into APC/UPC adapters, although my intention was to do the opposite of your scenario. In my case, I already had LC/UPC terminated duplex fibre through the house, and I wanted to use it to move my ISP's ONT closer to my networking closet. That would require me to convert the ISP's SC/APC to LC/UPC at the current terminus, then convert it back in my wiring closet. I haven't gotten past the planning stage for that move, though.

Although your ISP was kind enough to run this fibre for you, the price of 30 meters LC/UPC terminated fibre isn't terribly excessive (at least here in USA), so would it be possible to use their fibre as a pull-string to run new fibre instead? That would avoid all the adapters, although you'd have to be handy and careful with the pull forces allowed on a fibre.

But I digress. On the xcvr choice, I don't have any recommendations, as I'm on mobile. But one avenue is to look at a reputable switch manufacturer and find their xcvr list. The big manufacturers (Cisco, HPE/Aruba, etc) will have detailed spec sheets, so you can find the branded one that works for you. And then you can cross-reference that to cheaper, generic, compatible xcvrs.

litchralee ,

Re: 2.5 Gbps PCIe card

In some ways, I kinda despise the 802.3bz specification for 2.5 and 5 Gbps on twisted pair. It came into existence after 10 Gbps twisted-pair was standardized, and IMO exists only as a reaction to the stubbornly high price of 10 Gbps ports and the lack of adoption -- 1000 Mbps has been a mainstay and is often more than sufficient.

802.3bz is only defined for twisted pair and not fibre. So there aren't too many xcvrs that support it, and even fewer SFP+ ports will accept such xcvrs. As a result, the cheap route of buying an SFP+ card and a compatible xcvr is essentially off-the-table.

The only 802.3bz compatible PCIe card I've ever personally used is an Aquantia AQN-107 that I bought on sale in 2017. It has excellent support in Linux, and did do 10 Gbps line rate by my testing.

That said, I can't imagine that cards that do only 2.5 Gbps would somehow be less performant. 2.5 Gbps hardware is finding its way into gaming motherboards, so I would think the chips are mature enough that you can just buy any NIC and expect it to work, just like buying a 1000 Mbps NIC.

BTW, some of these 802.3bz NICs will eschew 10/100 Mbps support, because of the complexity of retaining that backwards compatibility. This is almost inconsequential in 2024, but I thought I'd mention it.

litchralee ,

I quickly looked up the HPE/Aruba transceiver document, and starting on page 61 is the table of SFP+ transceivers, specifically describing the wavelength and mode. At least from their transceivers, J9151A, J9151E, JL749A, and JL783A would work for your single-mode, 1310 nm needs.

You will have to do additional research to find generic parts which are equivalent to those transceivers. Good luck in your endeavors!

litchralee ,

Since you mentioned that your ONT is 2.5 Gbps, I am assuming that you need a twisted-pair NIC. I don't have a recommendation for a NIC exactly for 2.5 Gbps, but since you're specifically looking for low operating temperature, you may want to avoid 10 Gbps twisted-pair NICs.

10GBaseT -- sometimes called 10G copper, but 10Gbps DACs also use copper -- operates very hot, whether in an SFP+ module or as a NIC. The latter is observable just by looking at the relatively large heat sinks needed for some cards. This is an inevitable result of trying to push 800 MSymbols/sec over pairs of copper wires, and it's lucky to exceed 55 meters on CAT6. It's impressive how far copper wire has come, but the end is nigh.

Now, it could be that when a 10 Gbps NIC is only linked at 2.5 Gbps, it could drop into a lower power state. But my experience with the 10/100/1000 baseT specs suggest that the PHY on a 10 Gbps NIC will just repeat the signals four times, to produce the same transmission of the quarter-as-fast 2.5 Gbps spec. So possibly no heat savings there.

A dedicated 2.5 Gbps card would likely operate cooler and is more likely to be available as a single port, which would fit in your available PCIe ports. Whereas multi-gigabit 2.5/5/10 Gbps NICs tend to be dual-port.

A final note: you might find "2.5 Gbps RJ45 SFP+" modules online. But I'm not aware of a formal 802.3 spec that defines the 2.5/5 Gbps speeds for modular connectors, so these modules probably won't work with SFP+ NICs.

litchralee ,

Rack-mounted beer holder.

Jk. But really, anything which helps organize stuff is a worthwhile job for a 3d printer. Even something to loop fibre optic cables on, so that they don't exceed their maximum bend radii, is useful.

I think you'll also find the 3d printer aids in other endeavors. I've used mine to print replacement car trims, ham radio accessories, a photo film spooler, a bushing to convert vacuum hose diameters, and other odds and ends.

Looking to buy some Mellanox ConnectX-3 cards

I found a listing on eBay for a "Mellanox CX354A ConnectX-3 FDR Infiniband 40GbE QSFP+" card for quite cheap. By the sound of the listing title it supports both Infiniband and 40GbE, is that right? I would like to try out Infiniband, but I would be buying it for the 40GbE. And are there good drivers for modern Linux distros for...

litchralee , (edited )

I only have experience with Mellanox CX-5 100Gb cards at work, but my understanding is that mainline Linux has good support for the entire CX lineup. That said, newer kernel versions -- starting at maybe 5.4? -- will have all sorts of bug fixes, so hopefully your preferred distro has built with those driver modules included, or loadable.

As for Infiniband (IB), I think you'd need transceivers with specific support for IB. That Ethernet and IB share the (Q)SFP(+) modular connector does not guarantee compatibility, although a quick web search shows a number of transceivers and DACs that explicitly list support for both.

That said, are you interested in IB fabrics themselves, or in what they can enable? One use-case native to IB is RDMA, which has since been brought to Ethernet -- so-called "Converged" Ethernet -- in the form of RoCE, in support of high-performance storage technologies like SPDK that enable things like NVMe storage over the network.

If all you're looking for are the semantics of IB, and you're only ever going to have two nodes that are direct-attached, then the Linux fabric abstractions can be used the same way you'd use IB. The debate of Converged Ethernet (CE) vs IB is more about whether/how CE switches can uphold the same guarantees that an IB fabric would. Direct attachment avoids these concerns outright.

So I think perhaps you can get normal 40 Gb Ethernet DACs to go with these, and still have the ability to play with fabric abstractions atop Ethernet (or IP if you use RoCE v2, but that's not available on the CX-3).

Just bear in mind that IB and fabrics in general will get complicated very quickly, because they're meant to support cluster or converged computing, which try to make compute and storage resources uniformly accessible. So while you can use fabrics to transport a whole NVMe namespace from a NAS to a client machine with near line-rate performance, or set up some incredible RPC bindings between two machines, there may be a large learning curve to achieve these.

[Solved] Opening home server to the Internet via IPv6

I've been wanting to set up a small game server on my home network for myself and a few friends lately. Nothing I haven't done before - except the part where I open it up to the internet for people outside of my home network to play on....

litchralee ,

Could you let us know what the DNS issue was?

litchralee ,

If you describe what you configured using DNS and what tests you've performed, people in this community could also help debug that issue as well.

An AAAA record mapping a hostname to an IPv6 address should be fairly trouble-free. If you create a new record, the "dig" command should be able to query it immediately, as the DNS servers will go through to the authoritative server, which has the new record. But if you modified an existing record, then the old record's TTL value might cause the old value to remain in DNS caches for a while.

When in doubt, you can also aim "dig" at the authoritative name server directly, to rule out an issue with your local DNS server or with your ISP's DNS server.

litchralee ,

If I understand correctly, you're now able to verify the AAAA on mobile. But you're still not able to connect to the web server from your mobile phone. Do I have that right?

I believe in a different comment here, you said that your mobile network doesn't support IPv6, and nor does a local WiFi network. In that case, it seems like your phone is performing DNS lookups just fine, but has no way to connect to an IPv6 destination.

If your desktop does have IPv6 connectivity but has DNS resolution issues, then I would now look into resolving that. To be clear, was your desktop a Linux/Unix system?

litchralee ,

I'm afraid I have no suggestions for DoT servers.

One tip for your debugging that might be useful is to use dig to directly query DNS servers, to help identify where a DNS issue may lay. For example, your earlier test on mobile happened to be using Google's DNS server on legacy IP (8.8.8.8). If you ran the following on your desktop, I would imagine that you would see the AAAA record:

dig @8.8.8.8 AAAA mydomain.example.com

If this succeeds, you know that Google's DNS server is a viable choice for resolving your AAAA record. You can then test your local network's DNS server, to see if it'll provide the AAAA record. And then you can test your local machine's DNS server (e.g. systemd-resolved). Somewhere, something is not returning your AAAA record, and you can slowly smoke it out. Good luck!
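For testing that last hop -- the local machine's own resolver -- a few lines of Python can stand in for dig, since `socket.getaddrinfo` uses whatever resolver the OS is configured with (the hostname in the comment is a placeholder; substitute your own):

```python
import socket

def lookup_aaaa(hostname):
    """Ask the system resolver for IPv6 (AAAA) results; returns [] if none."""
    try:
        infos = socket.getaddrinfo(hostname, None, family=socket.AF_INET6)
    except socket.gaierror:
        return []
    # Each entry is (family, type, proto, canonname, sockaddr);
    # sockaddr[0] is the textual IPv6 address
    return sorted({info[4][0] for info in infos})

# Replace "localhost" with e.g. "mydomain.example.com" to test your record
addresses = lookup_aaaa("localhost")
```

Unlike dig, this can't be aimed at a specific DNS server, so it only exercises the local resolution path -- which is exactly the last step in the smoke-out above.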

Installing some weird rails and a server in a rack ! A blog post by me! (blog.krafting.net)

I got a server case and some rails for free, they were annoying to build (yes, build), and I could not find anything regarding those rails online, so I decided to blog about it, in the hope of helping someone with all the same questions as me!...

litchralee ,

Nice job making it work!

This reminds me of when I installed my Dell m1000e blade server into my rack. As it turns out, the clearance behind the face of a 19" rack isn't standardized, so a protrusion on the ears would have interfered. The solution ended up being an angle grinder to remove the protrusion, and then re-leveling my rack, since otherwise the holes on the server wouldn't have aligned unless the rails were absolutely plumb.

litchralee ,

It works, and that's what counts lol

Btw, I noticed your blog post was titled "random rail story #1". Should I infer that more rack rail-related blog posts will follow?

Platform for First Proxmox Server

Looking to build my first server out, trying to figure out if there is a "better" platform for my needs. Right now I'm just planning a mix of machines and containers in Proxmox for running a NAS and Plex server, router of some sort (also, any preferences on wireless access points?), a pihole if that's not just as easily done in...

litchralee ,

For wireless APs, Ubiquiti equipment is fairly well-priced and capable for prosumer gear, although I'm beginning to be less enthralled with the controller model for APs. They also can operate on 48 VDC passive power, or 802.3af/at PoE, which might work nicely if you have a compatible switch.

I've heard from colleagues running Plex on Proxmox that a high core count is nice, except when doing transcoding, where you either want high single-core performance or a GPU to offload to. So an AMD Epyc CPU might serve you well, if you can find one of the cheap ones being sold off from all the Chinese data centers on eBay.

Now with that said, have you considered deploying against existing equipment, and then identifying deficiencies that new hardware would fix? That would certainly be the fastest way to get set up, and it lets you experiment for cheap, while waiting for any deals that might pop up.

litchralee ,

The multi port NIC can work, although I would recommend jumping straight to a managed or enterprise switch that can do VLANs. It saves on physical wiring and a managed switch often overlaps with other desired homelab features anyway, like PoE, IGMP/MLD snooping, and STP or loop-protect.

litchralee ,

Answering the question directly, your intuition is right that you'll want to limit the ways that your machine can be exploited. Since this is a Dell machine, I would think iDRAC is well suited to be the control mechanism here. iDRAC can accept SNMP commands and some newer versions can receive REST API calls.

But stepping back for a moment, is there any reason why you cannot configure the "AC Power Recovery" option in the system setup to boot the machine when power is restored? The default behavior is to remain as it was, but you can configure it to always boot up.

From your description, it sounds like your APC unit notifies the server that the grid is down, which results in the OS shutting down. Ostensibly, the APC unit will soon diminish its battery supply and then the r320 will be without AC power. When the grid comes back up, the r320 will receive AC power and can then react by booting up, if so configured. Is this not feasible?

litchralee ,

If the server is sent a signal to shutdown due to a grid outage, who is telling it the grid was restored?

Ah, I see I forgot to explain a crucial step. When the UPS detects that grid power is lost, it sends a notification to the OS. In your case, it is received by apcupsd. What happens now is a two-step process: 1) the UPS is instructed to power down after a fixed time period -- one longer than it would take for the OS to shut down -- and 2) the OS is instructed to shut down. Here is one example of how someone has configured their machine like this. The UPS will stay off until grid power is restored.

In this way, the server will indeed lose power, shortly after the OS has already shut down. You should be able to configure the relevant delay parameters in apcupsd to preserve however much battery state you need to survive multiple grid events.

The reason the UPS is configured with a fixed time limit -- as opposed to, say, waiting until power draw drops below some number of watts -- is that it's easy and cheap to implement, and it's deterministic. Think about what would happen if an NFS mount or something got stuck during shutdown, thereby running down the battery, ending up with the very unexpected power loss the UPS was meant to avoid. Maybe all the local filesystems were properly unmounted in time, but when booting up later and mounting the filesystems, a second grid fault and a depleted battery state could result in data loss. Here, the risk of accidentally cutting off the shutdown procedure is balanced with the risk of another fault on power up.
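The timing relationship can be stated as a small sanity check. All the delay values below are invented for illustration; in a real setup they'd come from your apcupsd configuration and your measured shutdown times:

```python
# Illustrative numbers only: checking that the UPS's fixed power-off delay
# comfortably exceeds a worst-case OS shutdown, while leaving battery in reserve
os_shutdown_worst_case_s = 120  # assumed: stopping services, unmounts, sync
safety_margin_s = 60            # assumed buffer for an unusually slow shutdown
ups_kill_delay_s = 240          # the fixed delay programmed into the UPS

battery_runtime_s = 600         # assumed runtime at the current load

# The UPS must stay up long enough for the OS to finish shutting down...
assert ups_kill_delay_s >= os_shutdown_worst_case_s + safety_margin_s

# ...but cut power well before the battery is depleted, so enough charge
# remains to ride through a second grid fault shortly after power-up
assert ups_kill_delay_s <= battery_runtime_s // 2
```

This is exactly the determinism argument: both checks can be verified ahead of time, with no dependence on what the OS actually does during the outage.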

litchralee ,

Looks like a reasonable deal. The mobo has IPMI; if you've never used it, it's a dream for server management. It's no iDRAC or iLO, but it should work well enough for hands-off management.

litchralee ,

This answer would be incomplete without mentioning that Dell iDRAC and HPE iLO have a lot of proprietary functionality beyond what the IPMI standard requires. For example, iDRAC and iLO support rich KVM-like screen sharing, plus the ability to mount ISOs and other media onto the server. Indeed, so much more functionality exists in these implementations that a license key must be purchased to enable the most fancy features.

I will note that SuperMicro does simply call their offering "SuperMicro IPMI" despite having a few of these proprietary features. But by and large, basic IPMI is an interoperability specification, with each implementation having its own unique strengths.
