:Speaking of which, I'd like to compliment you on the overall design of the
:Diablo system. It has scaled very well to handle a hundred million articles
:on-spool.
:
:Dumping, hash table 67108864 entries, record size 28 <== :-) :-) :-)
:@268435472
:diload: 104146775/104146944 entries loaded
:History file trim succeeded:
:-rw-r--r-- 1 news news 3184549904 Feb 15 05:53 /news/dhistory
:-rw-r--r-- 1 news news 3184549904 Feb 15 02:45 /news/dhistory.bak
:
:3 hours to rebuild dhistory on an SMP machine. Sigh.
:
:/dev/vinum/news 14154136 8491456 5662680 60% /news
:/dev/vinum/n0 31805976 26465776 5340200 83% /news/spool/news/N.00
:/dev/vinum/n1 31805976 26754544 5051432 84% /news/spool/news/N.01
:/dev/vinum/n2 31805976 27787840 4018136 87% /news/spool/news/N.02
:/dev/vinum/n3 31805976 26834120 4971856 84% /news/spool/news/N.03
:/dev/vinum/n4 31805976 27609456 4196520 87% /news/spool/news/N.04
:/dev/vinum/n5 31805976 26771072 5034904 84% /news/spool/news/N.05
:/dev/vinum/n6 31805976 27396296 4409680 86% /news/spool/news/N.06
:/dev/vinum/n7 31805976 26801120 5004856 84% /news/spool/news/N.07
:/dev/vinum/n8 31805976 8 31805968 0% /news/spool/news/N.08
:
:Yeah, I'm not using that last spool, so I could probably squeeze 120 million
:articles on here. No binaries obviously.
I have one word for this: "YowZeR!".
I assume you bumped up the default hash table size... of course you
must have!
:> p.s. I think large filesystems are another reason why NFS (and other remote
:> filesystems) is only going to become more important over time.
:
:I think "and other remote filesystems" is the concept. I'm using these
:spool servers instead of NFS. Many ISPs have done the NFS-mounted reader
:thing, and that works if you have a NetApp or similar. However, NFS is so
:chatty, and NFS mounts tend to jam if the server dies. I think you'll
:continue to see a move towards some sort of "storage appliance" for various
:applications, just like the Diablo server is a storage appliance for Usenet
:articles. It's not exactly a filesystem, but it's similar in that it is a
:fit-for-purpose model to do the required task.
:
:Thanks for Diablo, Matt.
:
:... Joe
:
:-------------------------------------------------------------------------------
:Joe Greco - Systems Administrator [EMAIL PROTECTED]
:Solaria Public Access UNIX - Milwaukee, WI 414/342-4847
You're welcome!
Yes, I designed the Diablo reader *SPECIFICALLY* to be able to do
(multiple) remote spool serving. The idea was to be able to supply spool
redundancy so you could take a spool down without taking the
whole system down, and so you could run the 'frontend' readers on
pure-cpu boxes. It's predicated on the concept that a history lookup
is supposed to be cheap, which is usually true.
And, of course, the reader uses a multi-fork/multi-thread design,
resulting in a very small footprint.
The spools are supposed to be pure message repositories keyed by
the message-id, and I've contemplated using the technology to back
other uses, such as a web-based messaging system or even email inboxes.
The article storage format is very robust in terms of crash recovery,
though in retrospect I should have added a magic cookie + binary size
header to the beginning of each article to make recovery more reliable.
I'm glad that you and others are taking over source management of
the project; I have no time to do it any more :-(.
-Matt
Matthew Dillon
<[EMAIL PROTECTED]>
To Unsubscribe: send mail to [EMAIL PROTECTED]
with "unsubscribe freebsd-hackers" in the body of the message