On Tue, 8 May 2001 09:09:08 +0000 (UTC), [EMAIL PROTECTED] (Giles Lean) wrote:

>Good performance on such storage systems might depend on keeping as
>much work up to it as possible, to let the device determine what order
>to service the requests. Attempts to minimise "head movement" may
>hurt, not help.

Letting the device determine the sequence of I/O increases aggregate throughput but reduces the responsiveness of any individual request. If you want maximum throughput, so you can reduce the money you spend on storage, you queue the requests and sort the queue into whatever order completes the aggregated requests with the least work. If you want responsiveness, you put your request first and make the rest of the queue wait. Some storage systems let you specify two or more priorities, so your I/O goes first and everyone else's goes second.

"Lazy" page writes and all the other tricks used to keep I/O in memory reduce the number of physical writes at the expense of data lost during a power failure. Some storage devices were built with batteries to allow writes to complete after power loss. If the batteries could sustain writes for 5 seconds after power loss, writes could be held back for nearly 5 seconds in the hope that duplicate writes to the same location could be dropped.

I know a lot of storage systems from the hardware up, and few outperform an equivalent system where the money was spent on more memory in the computer. Most add-on storage systems offering "spectacular" performance make the most financial sense when they are attached to a computer that has hit a physical limit of expansion. If you have 4 GB on a 32-bit computer, adding a storage system with 2 GB of cache can be a sound investment. Adding the same 2 GB cache to a 32-bit system expanded to just 2 GB usually costs more than adding the extra 2 GB to the computer itself. Once 64-bit computers with 32, 64 or 128 GB of DDR become available, the best approach will go back to heaps of RAM in the computer and none on the disk.

If you are looking at one of the 64-bit replacements for x86-style processors and their equivalents, the best disk arrangement would be no file system or operating system intervention at all: the whole disk allocated to the processor's paging function, similar to the theory behind the AS/400 and its equivalents. Each disk would be on its own fibre, serve 64 GB, and be mirrored on an adjacent disk. The only processing in the CPU would be ECC; the disk controller would perform the RAID 1 processing and do the I/O in a pendulum sweep pattern with just enough cache to hold one sweep. You would, of course, need power supplies big enough to cover a few extra sweeps, and something to tell the page processing to flush everything when the power is dropping. When you have multiple computers in a cluster, you could build an intermediate device to handle the page flow, much the same as a network switch.

All these technologies were tried and proven several times over the last 30 years, and they work perfectly when the computer's maximum address space is larger than the total size of all open files. They worked perfectly when people had 100 MB databases on 200 MB disks in systems that could address 4 GB. Doubling the number of bits in the address range puts 64-bit systems out in front of both disks and memory again. There are already 128-bit and 256-bit processors in use, so systems could be planned to stay ahead of disk design and you would never have to worry about a file system again.
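For what it's worth, the queue sorting and the "pendulum sweep" I am talking about are nothing exotic. Here is a rough sketch in C (the struct, the block numbers and the two priority levels are invented for illustration): sort the outstanding requests by priority, then by block number, and service them in one pass across the disk. Run the passes alternately up and down the block range and you have the pendulum.

/*
 * Sketch only: sort pending requests so high-priority I/O goes first
 * and the rest is serviced in one block-ordered sweep, which keeps
 * total head movement down.  A real controller works on its own
 * request descriptors, not this toy struct.
 */
#include <stdio.h>
#include <stdlib.h>

struct io_request {
    long block;      /* target block on disk */
    int  priority;   /* 0 = "my I/O goes first", 1 = everyone else */
};

/* Order by priority first, then by block number for the sweep. */
static int cmp_request(const void *a, const void *b)
{
    const struct io_request *ra = a, *rb = b;
    if (ra->priority != rb->priority)
        return ra->priority - rb->priority;
    return (ra->block > rb->block) - (ra->block < rb->block);
}

int main(void)
{
    struct io_request queue[] = {
        { 9000, 1 }, { 120, 1 }, { 5400, 0 }, { 47, 1 }, { 5401, 0 }
    };
    size_t n = sizeof(queue) / sizeof(queue[0]);

    qsort(queue, n, sizeof(queue[0]), cmp_request);

    for (size_t i = 0; i < n; i++)
        printf("service block %ld (priority %d)\n",
               queue[i].block, queue[i].priority);
    return 0;
}

The priority field is the whole trick behind "my I/O goes first": everything else about the ordering is just the sweep.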
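The five-second write hold is equally plain: a small table in front of the disk, where a second write to the same block simply replaces the pending copy. Another rough sketch (the block size, table size and function names are made up; a real device would have its timer and battery monitor wired into the flush):

/*
 * Sketch only: hold writes briefly so duplicates to the same block
 * collapse into one physical write.  flush_pending() stands in for the
 * periodic flush, or the "power is dropping" flush.
 */
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE   512
#define MAX_PENDING  64

struct pending_write {
    long          block;
    unsigned char data[BLOCK_SIZE];
    int           in_use;
};

static struct pending_write pending[MAX_PENDING];

/* Queue a write; a later write to the same block overwrites the first. */
static void lazy_write(long block, const unsigned char *data)
{
    int free_slot = -1;
    for (int i = 0; i < MAX_PENDING; i++) {
        if (pending[i].in_use && pending[i].block == block) {
            memcpy(pending[i].data, data, BLOCK_SIZE);  /* duplicate dropped */
            return;
        }
        if (!pending[i].in_use && free_slot < 0)
            free_slot = i;
    }
    if (free_slot >= 0) {
        pending[free_slot].block = block;
        memcpy(pending[free_slot].data, data, BLOCK_SIZE);
        pending[free_slot].in_use = 1;
    } else {
        /* Table full: a real device would flush or write through. */
        printf("write through block %ld\n", block);
    }
}

/* Called every few seconds, or when the battery says power is going. */
static void flush_pending(void)
{
    for (int i = 0; i < MAX_PENDING; i++) {
        if (pending[i].in_use) {
            printf("flush block %ld\n", pending[i].block);
            pending[i].in_use = 0;
        }
    }
}

int main(void)
{
    unsigned char buf[BLOCK_SIZE] = { 0 };

    lazy_write(100, buf);
    lazy_write(100, buf);   /* duplicate: only one physical write results */
    lazy_write(200, buf);
    flush_pending();
    return 0;
}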
The AMD Slot A and Intel Slot 1 could be sold the way you buy Turkish pizza: by the foot. Just walk up to the hardware shop and ask for 300 bits of address space. Shops could have specials, like an extra 100 bits of address space with every order over $20.