ma...@mohawksoft.com wrote: 
> I have a RPI5 running ZFS, PostgreSQL, a DLNA server, and a full
> development stack that compiles just about any code I have laying around.
> 
> These things have 8 gigs of RAM, 4 CPUs, use 15 watts of power, and cost
> less than a video card. My desktop is considerably bigger, but the core
> speed is only 2 or 3 times as fast as the ARM.

My rule of thumb: humans doing normal desktop tasks (whatever is
not pushing the envelope of that computing generation) cannot
distinguish less than a 100% performance improvement.

(AKA most of the time, everything is instantaneous. You only
notice when it isn't.)

...

> Would a stack of RPI5s, controlled by some sort of docker look-alike,
> perform better than a huge VMware server? Would it perform better than a
> large kubernetes cluster? Would they be more secure because they are
> physically separated?
> 
> Thoughts?

RPIs - all of them to date - are limited by I/O. Let's suppose
the unit of computing is an 8GB RPI5, and the cost per unit to
interconnect and power and cool them is $20, so for each $100
increment we get 4 2.4GHz cores and 8GB RAM, a GPU and two lanes
of PCIe 2.0, and another 15W.
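A quick sketch of that scaling arithmetic (the $100 all-in node
cost, splitting as roughly $80 of board and an assumed $20 of
power/cooling/interconnect, is the figure from above):

```python
# What a given budget buys at ~$100 per all-in RPI5 node.
# Per-node figures are from the post; the $20 overhead share is
# an assumption, not a measured build cost.
UNIT_COST = 100          # dollars per node, board + overhead
CORES, RAM_GB, WATTS = 4, 8, 15

def cluster_for(budget):
    """Aggregate resources for a whole-dollar budget."""
    n = budget // UNIT_COST
    return {"nodes": n, "cores": n * CORES,
            "ram_gb": n * RAM_GB, "watts": n * WATTS}

print(cluster_for(800))  # 8 nodes: 32 cores, 64 GB RAM, 120 W
```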

For some workloads this is great. If you were building a
security camera hub, for example, being able to add two cameras
worth of realtime processing for that price is very nice.
Anything where the individual tasks are reasonable but there are
a lot of them coming in seems like a good bet.

But somewhere around the 4-8 Pi mark, it will become obvious
that you need all three of better storage, networking, and
interconnections -- and you can only solve one of those at a
time with the RPI5. The coordination overhead starts taking a
larger chunk of each unit. Pretty soon you run into Amdahl's
law: eventually, the non-parallelizable part of any process will
be the bottleneck.
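Amdahl's law makes the ceiling concrete. A minimal sketch, where
the parallel fraction p = 0.9 is purely illustrative (no real Pi
workload is being measured here):

```python
# Amdahl's law: with n units and a parallelizable fraction p of
# the work, speedup is capped at 1/(1-p) no matter how many
# nodes you add. p = 0.9 is an assumed illustrative value.
def amdahl_speedup(n, p=0.9):
    return 1.0 / ((1.0 - p) + p / n)

for n in (1, 2, 4, 8, 16, 64):
    print(f"{n:3d} Pis -> {amdahl_speedup(n):.2f}x")
```

With p = 0.9 the 8-node cluster already delivers under 5x, and
even an unbounded pile of Pis can never pass 10x - which is why
the 4-8 Pi mark feels like the knee of the curve.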

-dsr-
_______________________________________________
Discuss mailing list
Discuss@driftwood.blu.org
https://driftwood.blu.org/mailman/listinfo/discuss