Standardized by JEDEC earlier this year as JESD323, CUDIMMs tweak the traditional unbuffered DIMM by adding a clock driver (CKD) to the DIMM itself, with the tiny IC responsible for regenerating the clock signal that drives the actual memory chips. By generating a clean clock locally on the DIMM (rather than using the clock from the CPU directly, as is the case today), CUDIMMs are designed to offer improved stability at high memory speeds, combating the signal-integrity problems that would otherwise undermine reliability as clocks climb. In other words, adding a clock driver is the key to keeping DDR5 operating reliably at high clockspeeds.
Why does desktop hardware become more and more complex and fragile?
I want my BunkerNet with 90s-Amiga-level machines, with technology practical enough to be produced (with reasonable investment) in at least every city of 1 million (with a literate population and the necessary raw resources available).
Yes, I've even started with something above that, running Windows 98SE, games and all.
But just... how necessary is it really? I just (that is, 1.5 hrs ago, ADHD) returned home from a bicycle ride in a park. It's fun with a normal bicycle, it's fun with a Soviet bicycle that is barely one, it's fun with a folding bicycle with switchable gearing, it's fun with roller skates, and it's fun on foot.
Can we treat computers the same? They are means to an end. NEW ROUNDED CORNERS AND ADS IN EVERY ORIFICE TO BE ALWAYS CONNECTED TO OUR NEW ARTIFICIAL INTELLIGENCE is not that end.
EDIT: ok, every 1-million city is asinine; one per every 5-10 million people on the planet, maybe?
Eh. LPCAMM seems more useful overall as a product. Faster DDR at this point in time has diminishing returns.
It'll be interesting to see how this plays out though, because there are a few different paths to solve this type of problem with DDR5. Personally, I'd love to see much lower power but a wider bus, which is where I thought things were heading.
Well, we've seen CAS latency increase almost as fast as DDR speeds. CAMM should address this issue by shortening the distance from CPU to RAM, at least for laptops.
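The rising CL numbers are less alarming than they look: true CAS latency is cycles multiplied by the clock period, so a higher CL at a higher transfer rate can mean the same real latency. A quick sketch (the kit specs below are common retail examples used only for illustration):

```python
def cas_latency_ns(transfer_rate_mt_s: float, cl_cycles: int) -> float:
    # DDR moves two transfers per clock cycle, so the clock period in ns
    # is 2000 / transfer rate (in MT/s); multiply by CL to get real latency.
    return cl_cycles * 2000.0 / transfer_rate_mt_s

print(cas_latency_ns(3200, 16))  # DDR4-3200 CL16 -> 10.0 ns
print(cas_latency_ns(6000, 30))  # DDR5-6000 CL30 -> 10.0 ns, same real latency
```

So the headline CL nearly doubled while the latency in nanoseconds stayed flat; what CAMM attacks is the trace-length and signal-integrity budget, not the CL arithmetic.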
Faster RAM generally has diminishing returns for general system use; however, it does matter for GPU compute on iGPUs (e.g. gaming, and ML/AI would make use of the increased memory bandwidth).
It's not easy to simply push a wider bus, because memory bus width more or less dictates design complexity, and thus cost. It's cheaper to push memory clocks than to design a die with a wider bus.
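The trade-off can be sketched with the peak-bandwidth formula: both routes below land on the same number, but the wider bus pays for it in die area, pins, and routing complexity. Configurations are hypothetical, assuming ideal back-to-back transfers:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, rate_mt_s: float) -> float:
    # bytes per transfer (width / 8) times transfers per second;
    # MT/s divided by 1000 gives GT/s, so the result is in GB/s.
    return (bus_width_bits / 8) * rate_mt_s / 1000.0

print(peak_bandwidth_gb_s(64, 6400))   # faster clock on a 64-bit channel: 51.2 GB/s
print(peak_bandwidth_gb_s(128, 3200))  # 128-bit bus at half the clock: 51.2 GB/s
```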
Computational-Fluid-Dynamics simulations are RAM-limited, iirc.
I'm presuming many AI models are, too, since some of them require stupendous amounts of RAM, which no non-server machine would have.
"diminishing returns" is what Intel's "beloved" Celeron garbage was pushing.
When I ran Memtest86+ (or the other version, I don't remember) and saw how insanely slow RAM was compared with L2 or L3 cache, and then discovered how incredible the machine upgrade from SATA to NVMe was...
Get the fastest NVMe and RAM you can: it puts your CPU where it should have been all along. That difference between a "normal" build and an effective build is the misframing the whole industry has been establishing for decades.
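The gap Memtest86+ exposes can be put in perspective by converting latencies to CPU cycles. The figures below are assumed order-of-magnitude typical values, not measurements from any specific machine:

```python
def latency_in_cycles(latency_ns: float, clock_ghz: float) -> float:
    # cycles = time * frequency; 1 ns at 1 GHz is exactly 1 cycle
    return latency_ns * clock_ghz

# Rough, illustrative latencies (assumed, not benchmarked):
hierarchy_ns = {"L1": 1, "L2": 4, "L3": 15, "DRAM": 80, "NVMe read": 20_000}
for level, ns in hierarchy_ns.items():
    print(f"{level}: ~{latency_in_cycles(ns, 4.0):,.0f} cycles at 4 GHz")
```

Even under these loose assumptions, a DRAM access costs hundreds of cycles and an NVMe read tens of thousands, which is why the storage jump from SATA to NVMe feels so dramatic while the CPU itself is unchanged.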
Only if you need 2-4 sticks, otherwise they take up too much PCB space. Look at servers and how a good chunk of their volume is filled with dozens of sticks. You can't simply lay them down flat.
Well, usually yes, but if CPU manufacturers decide to really lean into cramming lots of cores into CPUs (like Intel's big.LITTLE designs, but with even more cores), then we'll probably need faster RAM, since more cores == more memory bandwidth demand, and so far this has always been resolved with faster RAM. (Or we could just increase the number of memory channels.)
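The per-core squeeze is easy to see with a hypothetical dual-channel setup: doubling the core count halves each core's share of peak bandwidth unless channels or clocks scale with it.

```python
def per_core_bandwidth_gb_s(channels: int, width_bits: int,
                            rate_mt_s: float, cores: int) -> float:
    # total peak bandwidth across all channels, divided evenly per core
    total_gb_s = channels * (width_bits / 8) * rate_mt_s / 1000.0
    return total_gb_s / cores

# Dual-channel DDR5-5600 (64-bit channels), illustrative core counts:
print(per_core_bandwidth_gb_s(2, 64, 5600, 8))   # 11.2 GB/s per core
print(per_core_bandwidth_gb_s(2, 64, 5600, 16))  # 5.6 GB/s per core
```

Either the transfer rate or the channel count has to grow to keep each core fed, which is exactly the choice the thread is debating.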
Eh, I don't know. So much innovation has happened because of patent workarounds.
Also, they can kind of stop big companies from just completely copying an innovation and driving the inventor out of business.
It's pretty apparent in the 3D printing industry. Companies like Prusa, E3D, Ultimaker, MakerBot and Aleph Objects brought consumer 3D printing from basically a hot glue gun to better than some $100k+ industry machines. All of them used to be completely open source. OK, MakerBot and to some degree Ultimaker just went off the deep end, but Prusa is now also holding back their design files for a while after release, and E3D has released their new hotend as basically closed source. Why? Well, they want to avoid going the path of Aleph Objects, who had to sell out to a holding company because Chinese manufacturers copied everything as fast as it could be developed and sold it for a fraction of the price.
I'd love if it didn't have to be this way, but it kind of does now.
Edit: if near-monopolies like Intel, AMD, and Nvidia had to give up their patents, the world would most definitely be a more innovative place :D
What's the point of the 5600GT? With the rising prices of RAM and NAND, and DDR4 motherboards winding down availability-wise (only ASRock still has many options at this point), it feels like by the time it comes out, the savings over a 5600G, or even a 5700G, will be nil.
0.4 GHz on the base clock is kind of a big difference when you can't overclock that headroom back. I'd personally wait until the benchmarks are out to really gauge how much you're giving up.
Obviously, we aren't gauging whether the APU is overclockable or not, but their performance at base. Talking about overclocking is moot when comparing base clock speeds; we'll look at benchmarks and compare there.
AM4 has been around for so long and is owned by so many people, there's still a big market for those who want to upgrade without replacing their motherboard and RAM at the same time.
I agree, it's a bit puzzling. It's crazy that AMD is releasing a new CPU for a 7-year-old platform.
But admittedly I am personally still running with my trusty old Ryzen 5 1600, maybe I'd consider an upgrade just because it's easy and cheap, but it's not like I really need it.
I'm guessing there are a lot of AM4 motherboards out there, so there is still a market for making upgrades for them.
They have to produce Zen3 still for server contracts, so they're making the chips anyway. The ones that don't make the cut are still suitable as desktop chips.
It's a win-win. AMD gets to sell the chips. Consumers, particularly those who already have AM4 boards, get the option of buying these rather than replacing multiple components and taking their entire PC apart.
But yeah it's wild that a socket from September 2016 is still getting new CPUs now. AM4 is the best CPU socket there has ever been IMO.
Ah yes, same chip as in older Epyc, I didn't think of that. Such a clever design by AMD. 😀 👍
If I remember correctly, the early 2016 boards were not compatible with Ryzen, and although Wikipedia says September 2016, the earliest actual model listed is from February 2017. https://en.wikipedia.org/wiki/Socket_AM4
So in my book, the platform remains from 2017.
Here's the summary for the wikipedia article you mentioned in your comment:
Socket AM4 is a PGA microprocessor socket used by AMD's central processing units (CPUs) built on the Zen (including Zen+, Zen 2 and Zen 3) and Excavator microarchitectures. AM4 was launched in September 2016 and was designed to replace the sockets AM3+, FM2+ and FS1b as a single platform. It has 1331 pin slots and is the first from AMD to support DDR4 memory as well as achieve unified compatibility between high-end CPUs (previously using Socket AM3+) and AMD's lower-end APUs (on various other sockets). In 2017, AMD made a commitment to using the AM4 platform with socket 1331 until 2020. AM5 succeeded the AM4 platform in late 2022 with the introduction of the Ryzen 7000 series; however, AMD has continued to release new AM4-based CPUs even after the release of AM5.
Ryzen desktop chips use the same compute chiplets as server (server has a long tail for adoption and a steady need for replacement parts), so they have a large supply of chiplets that don't meet server requirements but can be downclocked or given more voltage for desktop. This also fulfills the low end desktop market, so they don't have to produce lower end chips on more expensive nodes. There's also a lot of AM4 platforms that can get a new lease on life with a drop in Zen3 replacement.
Then you also have supply from the laptop side with similar issues (don't meet voltage requirements for efficiency), which is where the APUs come from.