Not really. It’s just a normal Zen 4 CPU with some server features like ECC memory support.
The biggest downfall of these chips is that they have the same 28 PCIe lanes as any consumer-grade Zen 4 CPU. Quite the difference between that and the cheapest EPYC CPUs outside the 4000 series.
You’re going to run into some serious I/O shortages if you try to fit a 10GbE card, an HBA for storage, a graphics card or two, and some NVMe drives.
I’m pretty sure all the Zen CPUs have supported ECC memory, ever since the first generation of them.
A lot of the Zen-based APUs don’t support ECC. The next question is whether it supports registered or unregistered modules - everything up to Threadripper is unregistered (though I think some of the Pro parts are registered), while EPYCs are registered.
That makes a huge difference in how much RAM you can add, and how much you pay for it.
Not officially. Only Ryzen Pro has official (unregistered) ECC support, and not many motherboards support it either. AFAIK Threadripper doesn’t officially support it either, but I could be wrong.
Many boards support ECC even when not mentioned. Most ASUS and ASRock boards do for example.
The newest Threadripper 7000 series not only supports ECC, but requires it to work. It only accepts registered DDR5 ECC RAM.
Consumer CPUs were lacking ECC reporting, so you never really knew if ECC was correcting errors or not.
No, even the earliest Ryzens support ECC reporting just fine, provided the motherboard supports it, which many boards do. Only the non-Pro APUs do not support ECC.
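If you want to sanity-check it on your own box, here’s a minimal Python sketch (assuming Linux with an EDAC driver loaded, e.g. amd64_edac on Ryzen/EPYC) that reads the kernel’s error counters:

```python
#!/usr/bin/env python3
# Minimal sketch: check the kernel's EDAC counters to see whether ECC
# reporting is active and whether any errors have been corrected.
# Assumes Linux with an EDAC driver loaded (e.g. amd64_edac on Ryzen/EPYC).
from pathlib import Path

edac = Path("/sys/devices/system/edac/mc")

if not edac.is_dir():
    print("No EDAC memory controllers found - ECC reporting is not active.")
else:
    for mc in sorted(edac.glob("mc*")):
        ce = (mc / "ce_count").read_text().strip()  # corrected errors
        ue = (mc / "ue_count").read_text().strip()  # uncorrected errors
        print(f"{mc.name}: corrected={ce} uncorrected={ue}")
```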
Probably best to look at it as a competitor to a Xeon D system, rather than any full-size server.
We use a few of the Dell XR4000 at work (https://www.dell.com/en-us/shop/ipovw/poweredge-xr4510c), as they’re small, low power, and able to be mounted in a 2-post comms rack.
Our CPU of choice there is the Xeon D-2776NT (https://www.intel.com/content/www/us/en/products/sku/226239/intel-xeon-d2776nt-processor-25m-cache-up-to-3-20-ghz/specifications.html), which features 16 cores @ 2.1GHz and 32 PCIe 4.0 lanes, and is rated at 117W.
The 4584PX, ostensibly the top of this range, also with 16 cores but at double the clock speed, 28 PCIe 5.0 lanes, and 120W, seems like it would be a perfectly fine drop-in replacement for that.
(I will note one significant difference: the Xeon does come with a built-in NIC, in this case the 4-port 25Gb “E823-C”, saving you space and PCIe lanes in your system.)
As more PCIe 5.0 expansion options land, I’d expect the need for large quantities of PCIe to diminish somewhat. A 100Gb NIC would only require a x4 port, and even a x8 HBA could push more than 15GB/s. Indeed, if you compare the total possible PCIe throughput of those CPUs, 32x 4.0 is ~63GB/s, while 28x 5.0 gets you ~110GB/s.
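Rough math behind those numbers, per direction (assuming 16/32 GT/s per lane with 128b/130b encoding and ignoring protocol overhead):

```python
# Per-lane PCIe bandwidth after 128b/130b line encoding, in GB/s.
lane_gbps = {
    "4.0": 16e9 * 128 / 130 / 8 / 1e9,  # ~1.97 GB/s
    "5.0": 32e9 * 128 / 130 / 8 / 1e9,  # ~3.94 GB/s
}

xeon_d = 32 * lane_gbps["4.0"]       # Xeon D-2776NT: 32x PCIe 4.0
epyc_4584px = 28 * lane_gbps["5.0"]  # EPYC 4584PX: 28x PCIe 5.0
print(f"32x PCIe 4.0 ~ {xeon_d:.0f} GB/s, 28x PCIe 5.0 ~ {epyc_4584px:.0f} GB/s")
# -> 32x PCIe 4.0 ~ 63 GB/s, 28x PCIe 5.0 ~ 110 GB/s
```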
Unfortunately, we’re now at the mercy of what server designs these wind up in. I have to say though, I fully expect it is going to be smaller designs marketed as “edge” compute, like that Dell system.
We’ll see if they even make them. I can’t imagine there’s a huge customer base that really needs to cram all that I/O through only two or four lanes. Why make these ubiquitous cards more expensive if most of the customers buying them are not short on PCIe lanes? So far, most devices making use of 5.0 are graphics cards and storage. I’ve not seen any hint of someone making a SAS or 10GbE card that uses 5.0 and fewer lanes. Most cards for sale today still use 3.0, let alone 4.0.
I might as well just drop the cash on a real EPYC CPU with 128 lanes if I’m only going to be able to buy cutting edge expansion cards that companies may or may not be motivated to make.
Agreed, the PCIe layout is bad. My problem is the x16 slot.
I would prefer 8 slots/onboard devices with PCIe 5.0 x2 from the CPU, plus 2 slots of PCIe 4.0 x2 from the chipset. That would probably be adequate I/O. Aiming for 2x25Gbit performance.
This is really nice for home servers. There has been a huge gap for years where the choice was a 16-64 core, high-wattage monstrosity or a 4-year-old server CPU from before every server went to high core counts.
8 cores with ECC is perfect for my home use.
Could’ve just gotten a Ryzen then. These Epycs are essentially relabeled Ryzen CPUs.
Could be, but finding a motherboard with verified ECC is tricky. Most say it works but isn’t tested/supported, so you’re on your own to figure out whether ECC fully works.
The server/workstation focused ASRock Rack AM5 mainboards list plenty of ECC modules in their QVL. The “gaming-focused” ASUS B650E-E I’m using even has two ECC modules listed in its QVL.
So you could’ve already gotten verified ECC support, the fact that the same CPUs now exist with a different (EPYC) branding doesn’t change that. Finding these mainboards isn’t particularly tricky either.
The ASRock one says ECC, but it’s not verified with Ryzen.
So you end up having to test it yourself like this guy and hope the version hasn’t changed between when he bought the motherboard and now.
“ASRock” and “ASRock Rack” are two different series of motherboards.
Here’s the QVL of one of their AM5 mainboards: https://www.asrockrack.com/general/productdetail.asp?Model=B650D4U-2L2T/BCM#Memory - it doesn’t limit these modules to specific CPUs. All CPUs with ECC compatibility also support these modules on this mainboard. Some of these Rack boards are over a year old, and they have always had some ECC modules on their QVL. This - again - isn’t EPYC 4004 specific; they couldn’t have validated it with EPYC 4004 CPUs a year ago. In fact, their CPU support list doesn’t even list EPYC 4004 CPUs as of today, as they haven’t released a BIOS update adding (official) compatibility for these CPUs (it will probably be released shortly, though).
ASRock Rack AM4 mainboards also officially support ECC memory. So if you wanted verified ECC support on a comparatively cheap AMD platform you could’ve always gone for one of these boards with a regular Ryzen CPU (not an APU). The boards are a bit on the expensive side but if you want official support (for whatever reason you’d need that in a homelab environment) you can get it.
I’ve read there is an ID pin on EPYC CPUs that differentiates them from Ryzen. Der8auer made it work by masking the pin on the socket.
I’m curious: what do you or another “classic”(?) home user do that needs more than, say, an old Intel 6500 with 32GB RAM and a 1TB SSD of storage (hoarding etc. goes to the NAS, right?)?
I know Docker containers consume resources, or so I have heard, but even with a webserver, streaming, etc., is that really eating up the (PCIe) bandwidth?
I’m just a low-end tinkerer who likes to buy over-specced stuff, so I wonder what you’re all doing with yours, I guess!
Plex, Blue Iris, Minecraft mod servers for the kids. I’ll often use the server CPU for video filtering/encoding home videos off of VHS tapes because the nnedi3 filter takes a lot of CPU.
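For anyone curious what an nnedi3 pass looks like, here’s a minimal VapourSynth (Python) sketch - assuming the ffms2 source filter and znedi3 plugin are installed; the file name and field order are placeholders for whatever your capture uses:

```python
# Minimal VapourSynth sketch: deinterlace an interlaced VHS capture with NNEDI3.
# Assumes the ffms2 and znedi3 plugins are installed; adjust path/field order.
import vapoursynth as vs
core = vs.core

clip = core.ffms2.Source("vhs_capture.avi")   # interlaced capture (placeholder name)
clip = core.znedi3.nnedi3(clip, field=1)      # same-rate deinterlace, keep top field
clip.set_output()                             # pipe to an encoder via vspipe, e.g. x264
```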
Years ago I lost data on a NAS because the RAM wasn’t ECC. So I won’t buy/build any PC without ECC unless it’s only going to be used for web browsing/gaming.
We’re in the same boat. I keep being told that all I get is “overkill”, but I like to think of it as “future-proofing”, even though I’ll probably upgrade something in my box within 3 months 🤣. Self-delusion, my wife calls it. Some people don’t believe in God; I don’t believe in overkill.
Threadripper already accomplished all of this years ago. My TR 2970WX has 24 cores/48 threads and 48 PCIe lanes, and it supports ECC and non-ECC RAM. My ASRock Rack board has BMC support as well.
The Threadripper series was the perfect workstation CPU. I’ve had mine for a few years and it can handle anything I throw at it, it can easily transcode 2-3 4K videos while doing multiple other things.
It wasn’t cheap though, it was like $650 on sale, originally like a grand or so.