On Thu, July 15, 2010 09:29, Tim Cook wrote:
> On Thu, Jul 15, 2010 at 9:09 AM, David Dyer-Bennet <d...@dd-b.net> wrote:
>
>>
>> On Wed, July 14, 2010 23:51, Tim Cook wrote:

>> > You're clearly talking about something completely different than
>> > everyone else.  Whitebox works GREAT if you've got 20 servers.  Try
>> > scaling it to 10,000.  "A couple extras" ends up being an entire
>> > climate controlled warehouse full of parts that may or may not be in
>> > the right city.  Not to mention you've then got full-time staff
>> > on-hand to constantly be replacing parts.  Your model doesn't scale
>> > for 99% of businesses out there.  Unless they're google, and they can
>> > leave a dead server in a rack for years, it's an unsustainable plan.
>> > Out of the fortune 500, I'd be willing to bet there's exactly zero
>> > companies that use whitebox systems, and for a reason.
>>
>> You might want to talk to Google about that; as I understand it they
>> decided that buying expensive servers was a waste of money precisely
>> because of the high numbers they needed.  Even with the good ones, some
>> will fail, so they had to plan to work very well through server
>> failures,
>> so they can save huge amounts of money on hardware by buying cheap
>> servers rather than expensive ones.

> Obviously someone was going to bring up google, whose business model is
> unique, and doesn't really apply to anyone else.  Google makes it work
> because they order so many thousands of servers at a time, they can demand
> custom made parts for the servers, that are built to their specifications.

Certainly theirs is one of the most unusual setups out there, in several
ways (size, plus the details of what they do with their computers).

>  Furthermore, the clustering and filesystem they use wouldn't function at
> all for 99% of the workloads out there.  Their core application: search,
> is what makes the hardware they use possible.  If they were serving up a
> highly transactional database that required millisecond latency it would
> be a different story.

Again, I'm not at all convinced of that "99%" bit.

Obviously low-latency transactional database applications are about the
polar opposite of what Google does.  However, transactional database
applications are nearer 1% than 99% of the workloads out there, at every
shop I've worked at or seen detailed descriptions of.

Big email farms, for example, don't generally have that kind of database
at all.  Big web farms probably do have some databases used that way --
but not for that high a percentage of their traffic, and generally running
on one big server while the web is spread across hundreds of servers.
Akamai is more like Google, in a bunch of ways, than most places are.
Wikipedia, eBay, and Amazon have huge web front-ends while also needing
transactional database support.

Um, maybe I'm really getting too far afield from ZFS.  I'll shut up now. :-)
-- 
David Dyer-Bennet, d...@dd-b.net; http://dd-b.net/
Snapshots: http://dd-b.net/dd-b/SnapshotAlbum/data/
Photos: http://dd-b.net/photography/gallery/
Dragaera: http://dragaera.info
