In response to Mark Mielke <[EMAIL PROTECTED]>:
> Bill Moran wrote:
> > In response to Mark Mielke <[EMAIL PROTECTED]>:
> >
> >
> >> Bill Moran wrote:
> >>
> >>> I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing
> >>> consistency checking at that point.
> >>>
On Dec 26, 2007, at 4:28 PM, [EMAIL PROTECTED] wrote:
now, if you can afford solid-state drives which don't have noticeable
seek times, things are completely different ;-)
Who makes one with "infinite" lifetime? The only ones I know of are
built using RAM and have disk drive backup with in
On Dec 26, 2007, at 10:21 AM, Bill Moran wrote:
I snipped the rest of your message because none of it matters.
Never use
RAID 5 on a database system. Ever. There is absolutely NO reason to
ever put yourself through that much suffering. If you hate yourself
that much just commit suicide, it's less drastic.
Shane Ambler wrote:
>> I achieve something closer to +20% - +60% over the theoretical
>> performance of a single disk with my four disk RAID 1+0 partitions.
>
> If a good 4 disk SATA RAID 1+0 can achieve 60% more throughput than a
> single SATA disk, what sort of percentage can be achieved from
Bill Moran wrote:
In response to Mark Mielke <[EMAIL PROTECTED]>:
Bill Moran wrote:
I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing
consistency checking at that point.
According to this:
http://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sekt
In response to Mark Mielke <[EMAIL PROTECTED]>:
> Bill Moran wrote:
> >
> >> What do you mean "heard of"? Which raid system do you know of that reads
> >> all drives for RAID 1?
> >>
> >
> > I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing
> > consistency checking at that point.
Greg Smith wrote:
On Thu, 27 Dec 2007, Shane Ambler wrote:
So in theory a modern RAID 1 setup can be configured to get similar
read speeds as RAID 0 but would still drop to single disk speeds (or
similar) when writing, but RAID 0 can get the faster write performance.
The trick is, you need a perfect controller
Mark Mielke wrote:
Shane Ambler wrote:
So in a perfect setup (probably 1+0) 4x 300MB/s SATA drives could
deliver 1200MB/s of data to RAM, which is also assuming that all 4
channels have their own data path to RAM and aren't sharing.
(anyone know how segregated the onboard controllers such as t
Shane Ambler wrote:
So in theory a modern RAID 1 setup can be configured to get similar
read speeds as RAID 0 but would still drop to single disk speeds (or
similar) when writing, but RAID 0 can get the faster write performance.
Unfortunately, it's a bit more complicated than that. RAID 1 has
On Thu, 27 Dec 2007, Shane Ambler wrote:
So in theory a modern RAID 1 setup can be configured to get similar read
speeds as RAID 0 but would still drop to single disk speeds (or similar) when
writing, but RAID 0 can get the faster write performance.
The trick is, you need a perfect controller
Fernando Hevia wrote:
I'll start a little ways back first -
Well, here arises another doubt. Should I go for a single RAID 1+0 storing OS
+ Data + WAL files or will I be better off with two RAID 1 arrays separating
data from OS + WAL files?
earlier you wrote -
Database will be about 30 GB in size initially
On Wed, 26 Dec 2007, [EMAIL PROTECTED] wrote:
yes, the two linux software implementations only read from one disk, but I
have seen hardware implementations where it reads from both drives, and if
they disagree it returns a read error rather than possibly invalid data (it's
up to the admin to f
[EMAIL PROTECTED] wrote:
however I was addressing the point that for reads you can't do any
checking until you have read in all the blocks.
if you never check the consistency, how will it ever be proven otherwise?
A scheme often used is to mark the disk/slice as "clean" during clean
system shutdown
On Wed, 26 Dec 2007, Mark Mielke wrote:
[EMAIL PROTECTED] wrote:
I could see a raid 1 array not doing consistency checking (after all, it
has no way of knowing what's right if it finds an error), but since raid
5/6 can repair the data I would expect them to do the checking each time.
Your messages are spread across the thread. :-)
Bill Moran wrote:
What do you mean "heard of"? Which raid system do you know of that reads
all drives for RAID 1?
I'm fairly sure that FreeBSD's GEOM does. Of course, it couldn't be doing
consistency checking at that point.
According to this:
http://www.freebsd.org/cgi/man.cgi?quer
On Wed, 26 Dec 2007, Mark Mielke wrote:
[EMAIL PROTECTED] wrote:
On Wed, 26 Dec 2007, Mark Mielke wrote:
Florian Weimer wrote:
seek/read/calculate/seek/write since the drive moves on after the
read), when you read you must read _all_ drives in the set to check
the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
On Wed, 26 Dec 2007, Mark Mielke wrote:
[EMAIL PROTECTED] wrote:
Thanks for the explanation David. It's good to know not only what but also
why. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be
read: the one with the data and the parity disk?
no, because the parity is of the sort (A+B+C+P) mod X = 0
In response to Mark Mielke <[EMAIL PROTECTED]>:
> Bill Moran wrote:
> > In order to recalculate the parity, it has to have data from all disks.
> > Thus,
> > if you have 4 disks, it has to read 2 (the unknown data blocks included in
> > the parity calculation) then write 2 (the new data block and the new
> > parity data)
In response to Mark Mielke <[EMAIL PROTECTED]>:
> [EMAIL PROTECTED] wrote:
> > On Wed, 26 Dec 2007, Mark Mielke wrote:
> >
> >> Florian Weimer wrote:
> seek/read/calculate/seek/write since the drive moves on after the
> read), when you read you must read _all_ drives in the set to check
> the data integrity.
Bill Moran wrote:
In order to recalculate the parity, it has to have data from all disks. Thus,
if you have 4 disks, it has to read 2 (the unknown data blocks included in
the parity calculation) then write 2 (the new data block and the new
parity data). Caching can help some, but if your data end
[EMAIL PROTECTED] wrote:
I could see a raid 1 array not doing consistency checking (after all,
it has no way of knowing what's right if it finds an error), but since
raid 5/6 can repair the data I would expect them to do the checking
each time.
Your messages are spread across the thread. :-)
[EMAIL PROTECTED] wrote:
On Wed, 26 Dec 2007, Mark Mielke wrote:
Florian Weimer wrote:
seek/read/calculate/seek/write since the drive moves on after the
read), when you read you must read _all_ drives in the set to check
the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
[EMAIL PROTECTED] wrote:
Thanks for the explanation David. It's good to know not only what but also
why. Still I wonder why reads do hit all drives. Shouldn't only 2 disks be
read: the one with the data and the parity disk?
no, because the parity is of the sort (A+B+C+P) mod X = 0
so if X=10 (
On Wed, 26 Dec 2007, Mark Mielke wrote:
Florian Weimer wrote:
seek/read/calculate/seek/write since the drive moves on after the
read), when you read you must read _all_ drives in the set to check
the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
Florian Weimer wrote:
seek/read/calculate/seek/write since the drive moves on after the
read), when you read you must read _all_ drives in the set to check
the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
Dave ha
On Wed, 26 Dec 2007, Florian Weimer wrote:
seek/read/calculate/seek/write since the drive moves on after the
read), when you read you must read _all_ drives in the set to check
the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
On Wed, 26 Dec 2007, Fernando Hevia wrote:
David Lang Wrote:
with only four drives the space difference between raid 1+0 and raid 5
isn't that much, but when you do a write you must write to two drives (the
drive holding the data you are changing, and the drive that holds the
parity data for that stripe
In response to "Fernando Hevia" <[EMAIL PROTECTED]>:
>
> > David Lang Wrote:
> >
> > with only four drives the space difference between raid 1+0 and raid 5
> > isn't that much, but when you do a write you must write to two drives (the
> > drive holding the data you are changing, and the drive that holds the
> > parity data for that stripe
> seek/read/calculate/seek/write since the drive moves on after the
> read), when you read you must read _all_ drives in the set to check
> the data integrity.
I don't know of any RAID implementation that performs consistency
checking on each read operation. 8-(
> David Lang Wrote:
>
> with only four drives the space difference between raid 1+0 and raid 5
> isn't that much, but when you do a write you must write to two drives (the
> drive holding the data you are changing, and the drive that holds the
> parity data for that stripe, possibly needing to r
On Wed, 26 Dec 2007, Fernando Hevia wrote:
Mark Mielke Wrote:
In my experience, software RAID 5 is horrible. Write performance can
decrease below the speed of one disk on its own, and read performance will
not be significantly more than RAID 1+0 as the number of stripes has only
increased from 2 to 3
Mark Mielke Wrote:
>In my experience, software RAID 5 is horrible. Write performance can
>decrease below the speed of one disk on its own, and read performance will
>not be significantly more than RAID 1+0 as the number of stripes has only
>increased from 2 to 3, and if reading while writing, you
> Bill Moran wrote:
>
> RAID 10.
>
> I snipped the rest of your message because none of it matters. Never use
> RAID 5 on a database system. Ever. There is absolutely NO reason to
> ever put yourself through that much suffering. If you hate yourself
> that much just commit suicide, it's less drastic.
On Wed, 26 Dec 2007, Mark Mielke wrote:
I believe hardware RAID 5 is also horrible, but since the hardware hides
it from the application, a hardware RAID 5 user might not care.
Typically anything doing hardware RAID 5 also has a reasonably sized write
cache on the controller, which softens th
Fernando Hevia wrote:
Database will be about 30 GB in size initially and growing 10 GB per
year. Data is inserted overnight in two big tables and during the day
mostly read-only queries are run. Parallelism is rare.
I have read about different raid levels with Postgres but the advice
found
RAID 10.
I snipped the rest of your message because none of it matters. Never use
RAID 5 on a database system. Ever. There is absolutely NO reason to
ever put yourself through that much suffering. If you hate yourself
that much just commit suicide, it's less drastic.
--
Bill Moran
Hi list,
I am building kind of a poor man's database server:
Pentium D 945 (2 x 3 GHz cores)
4 GB RAM
4 x 160 GB SATA II 7200 rpm (Intel server motherboard has only 4 SATA ports)
Database will be about 30 GB in size initially and growing 10 GB per year.
Data is inserted overnight in two big tables