Great justification for switching to GrapheneOS: more secure, more control, Google has to ask permission to install things, and the Play Store is optional.
Right, but the system needed maintenance, and people would still need to ration water and not water their lawns, even if the water they sell to outsiders (less than 1%) stopped.
From the article it sounds more like some townsfolk don't like not being able to water their lawns. Those folks targeted the people buying water, despite those buyers accounting for less than 1% of the water used. Not to mention they apparently provide 45% of the town's tax revenue with their water purchases.
> apparently they are providing 45% of the towns tax revenue with their water purchases
"Revenue for the water sales to rural residents totaled $43,000 per year, about 15% of total revenue."
This means that 'total revenue' is about $287k. I would guess that's the revenue of the water system, not the town's entire tax base. Still a significant figure, but not 45% of the town's tax revenue.
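The arithmetic behind that ~$287k figure, as a quick sanity check using only the numbers quoted above:

```python
# Back out total revenue from the quoted figures: $43k is "about 15%".
water_sales = 43_000      # $/year in rural water sales, per the article
share = 0.15              # "about 15% of total revenue"

total = water_sales / share
print(round(total))       # 286667, i.e. roughly $287k
```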
Buy a Data60 (60-disk chassis) and add 60 drives. Buy a 1U server (two for redundancy). I'd recommend 5 stripes of 11 drives (55 total) with 5 global spares. Use RAIDZ3, so 8 data disks per 11-drive vdev.
Total storage should be around 8 * 24 * 5 = 960TB, likely 10% less because drives are marketed in 10^12 bytes rather than 2^40. Another 10% because ZFS doesn't like to get very full. So something like 777TB usable, which easily fits 650TB.
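That capacity estimate works out like this (a sketch using the same ~10% haircuts as above; exact ZFS overhead varies):

```python
# Rough usable-capacity check for the layout above:
# 5 RAIDZ3 vdevs of 11 drives each, 24 TB drives.
vdevs = 5
data_disks_per_vdev = 11 - 3   # RAIDZ3: 3 parity drives per 11-drive vdev
drive_tb = 24                  # marketing terabytes (10^12 bytes)

raw_tb = vdevs * data_disks_per_vdev * drive_tb   # 5 * 8 * 24 = 960
after_units = raw_tb * 0.9     # ~10% lost to 10^12-byte vs 2^40-byte sizing
usable = after_units * 0.9     # keep ~10% free so ZFS performs well
print(raw_tb, round(usable, 1))  # 960 777.6
```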
I'd recommend a pair of 2TB NVMe with a high DWPD as a cache.
The disks will cost about $18k; the Data60 is 4U and the server to connect it is 1U. If you want more space, upgrade to 30TB drives ($550 each) or buy another Data60 full of drives.
Sure, you could. The design would go something like this:
We need a bigger memory controller.
To get more traces to the memory controller, we need more pins on the CPU.
Now we need a bigger CPU package to accommodate the pins.
Now we need a motherboard with more traces, which requires more layers, which requires a more expensive motherboard.
We need a bigger motherboard to accommodate the 6 or 8 DIMM sockets.
The additional traces, longer traces, and extra motherboard layers make the signalling harder, which likely requires ECC or even registered ECC.
We need a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). All are larger, more expensive, draw more than 2x the power, and are likely to live in a $5k-$15k workstation/server, not a $2k Framework Desktop roughly the size of a liter of milk.
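Why channel count dominates can be sketched with a back-of-the-envelope calculation, assuming (hypothetically) 64-bit DDR5-5600 channels on every platform; real supported speeds differ per platform:

```python
# Theoretical peak DRAM bandwidth by channel count, assuming every
# platform ran 64-bit DDR5-5600 (an assumption; actual supported
# memory speeds vary by platform and generation).
def peak_gb_s(channels, mt_s=5600, bytes_per_transfer=8):
    return channels * mt_s * bytes_per_transfer / 1000  # GB/s

for name, ch in [("desktop (2ch)", 2), ("Threadripper (4ch)", 4),
                 ("Siena (6ch)", 6), ("Threadripper Pro (8ch)", 8),
                 ("EPYC (12ch)", 12)]:
    print(f"{name}: ~{peak_gb_s(ch):.0f} GB/s")
```

A 2-channel desktop part tops out around ~90 GB/s under these assumptions, while a 12-channel EPYC reaches ~538 GB/s, which is the gap the extra pins, traces, and board layers buy you.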
> We need a more expensive CPU, a more expensive motherboard, more power, more cooling, and a larger system. Congratulations, you've reinvented Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel).
This is the real story, not the conspiracy-tinged market-segmentation one. Which is silly anyway, because at the levels where high-end consumer/enthusiast Ryzen (say, the 9950X3D) and the lowest-end Threadripper/EPYC (most likely a previous-gen chip) truly overlap in performance, the former will generally cost you more!
Well, sort of. Apple makes a competitive Mac mini and MacBook Air with a 128-bit memory interface, decent design, solid build, and nice materials, starting at $1k. PC laptops can match nearly any single aspect, but rarely match the overall quality of the build, keyboard, trackpad, display, or aluminum chassis.
However, Apple will let you upgrade to the Pro (double the bandwidth), Max (4x the bandwidth), or Ultra (8x the bandwidth). The M4 Max is still efficient and gives decent battery life in a thin, light laptop. Even the Ultra is pretty quiet and cool, even in a tiny Mac Studio MUCH smaller than any Threadripper Pro build I've seen.
It does mystify me that x86 has a hard time matching even an M4 Pro Mac mini on bandwidth, let alone the models with 2x or 4x the memory bandwidth.
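Those tier multipliers sketch out roughly as follows. This assumes a 128-bit LPDDR5X-7500 base interface and clean 2x/4x/8x scaling; the actual shipping Pro/Max/Ultra parts use wider or faster configurations that don't scale in exact powers of two:

```python
# Rough sketch of Apple's memory-bandwidth tiers from a 128-bit base bus.
# Assumes LPDDR5X-7500 and the clean 2x/4x/8x multipliers mentioned above;
# real parts vary by generation and don't scale exactly.
base_bus_bits = 128
mt_s = 7500
base_gb_s = base_bus_bits // 8 * mt_s / 1000   # 16 bytes/transfer -> 120 GB/s

for tier, mult in [("base", 1), ("Pro", 2), ("Max", 4), ("Ultra", 8)]:
    print(f"{tier}: ~{base_gb_s * mult:.0f} GB/s")
```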
> It does mystify me that x86 has a hard time matching even an M4 Pro Mac mini on bandwidth, let alone the models with 2x or 4x the memory bandwidth.
The market dynamics are pretty clear. Having that much memory bandwidth only makes sense if you're going to provide an integrated GPU that can use it; CPU-only laptop/desktop workloads that are that bandwidth-hungry are too rare. The PC market has long relied on discrete GPUs for any high-performance GPU configuration, and the GPU market leader is the one that doesn't make x86 CPUs.
Intel's consumer CPU product line is a confusing mess, but at the silicon level it comes down to one or two designs for laptops (a low-power and a mid-power design) that are both adequately served by a 128-bit memory bus, and one or two desktop designs with only a token iGPU. The rest of the complexity comes from binning on clock speeds and core counts, and sometimes putting the desktop CPU in a BGA package for high-power laptops.
For Intel to make a part following the Strix Halo and Apple strategy, Intel would need to add a third major category of consumer CPU silicon, using far more than twice the total die size of any of their existing consumer CPUs, to go after a niche that's pretty small and very hard for Intel to break into given the poor quality of their current GPU IP. Intel doesn't have the cash to burn pursuing something like this.
It's a bit surprising AMD actually went for it, but they were in a better position than Intel to make a part like Strix Halo from both a CPU and GPU IP perspective. But they still ended up not including their latest GPU architecture, and only went for a 256-bit bus rather than 512-bit.
Yes, but that platform has in-package memory? Which is a higher degree of integration than even "soldered". That's the kind of platform Strix Halo is most comparable to.
(I suppose that you could devise a platform with support for mixing both "fast" in-package and "slow" DIMM-socketed memory, which could become interesting for all sorts of high-end RAM-hungry workloads, not just AI. No idea how that would impact the overall tradeoffs though, might just be infeasible.
...Also if persistent memory (phase-change or MRAM) can solve the well-known endurance issues with flash, maybe that ultimately becomes the preferred substrate for "slow" bulk RAM? Not sure about that either.)
Dunno; it's a nice, quiet, small machine using standard parts (power supply, motherboard, etc.).
If you want the high memory bandwidth, get the Strix Halo; if not, get any normal PC. Sure, Apple has the bandwidth too, but also soldered memory.
If you want DIMMs and the memory bandwidth, get a Threadripper (4-channel), Siena (6-channel), Threadripper Pro (8-channel), or EPYC (12-channel). But be prepared to at least double your power, quadruple your space, double your cost, and still not have a decent GPU.
The 8 DRAM packages pretty clearly indicate you're not getting the extra capacity for end-to-end ECC as you would on a typical workstation or server memory module.
Many mobile devices have on-chip ECC because they have to for signal integrity reasons... technically the Raspberry Pi 5 has ECC too, by that definition!
You're right. I had asked my HP specialist to confirm that it was ECC, and he said yes, but when I pressed him on the distinction between link ECC (protects data in motion) and true on-die ECC (protects data at rest in RAM, which matters much more), he admitted it was the former, and I canceled my order.
I've now ordered a Beelink GTR9 Pro, which unlike the Framework has dual 10G Ethernet rather than the odd 5G flavor. We'll see how that goes.
Ah, interesting. Did you get a ship date? I do like the idea of dual 10G. It would be nice to see a review on how well the GTR9's cooling works and how quiet it is under full load.
I've been tempted, but I've had a hard time finding a case where I needed more than the 9950X but less than a single-socket EPYC. Especially since the EPYC motherboards are cheaper, the CPUs are cheaper, and EPYC has 3x the memory bandwidth of Threadripper and 1.5x that of Threadripper Pro.