On Sun, Aug 30, 2009 at 11:56 PM, Merlin Moncure wrote:
> 192k written
> raid 10: six writes
> raid 5: four writes, one read (but the read and one of the writes are
> to the same physical location)
>
> now, by 'same physical' location, that may mean that the drive head
> has to move if the data is not in [...]
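
To make that arithmetic concrete, here is one way the quoted counts can
come out, as a back-of-envelope sketch. The stripe parameters are my
assumptions, not Merlin's stated ones: a 64 kB stripe unit and a 4-data +
1-parity RAID 5 stripe, with parity recomputed from the one untouched
chunk ("reconstruct write"):

  -- 192 kB write = three 64 kB chunks (assumed stripe unit)
  SELECT (192 / 64) * 2 AS raid10_writes,  -- each chunk goes to both mirrors
         (192 / 64) + 1 AS raid5_writes,   -- three data chunks + recomputed parity
         4 - (192 / 64) AS raid5_reads;    -- the untouched chunk, read to recompute parity

That reproduces six writes for RAID 10 and four writes plus one read for
RAID 5 under those assumptions; a different stripe geometry or parity
update strategy changes the counts.
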
On Sun, Aug 30, 2009 at 1:36 PM, Mark Mielke wrote:
> On 08/30/2009 11:40 AM, Merlin Moncure wrote:
>>
>> For random writes, raid 5 has to write a minimum of two drives, the
>> data being written and parity. Raid 10 also has to write two drives
>> minimum. A lot of people think parity is a big deal in terms of raid
>> 5 performance penalty, but I don't -- [...]
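
As a reference point for the single-small-write case, the textbook
accounting looks like this. It assumes the controller does a
read-modify-write parity update for a sub-stripe write; the 8 kB page is
just PostgreSQL's default block size, and the numbers are mine, not
Merlin's:

  -- one random 8 kB page write
  SELECT 2 AS raid10_writes,   -- the page written to both halves of the mirror
         0 AS raid10_reads,
         2 AS raid5_writes,    -- new data block + new parity block
         2 AS raid5_reads;     -- old data + old parity, needed to recompute parity

So both layouts write two drives; the difference for RAID 5 is the extra
reads that have to happen before the writes can be issued.
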
I've already learned my lesson and will never use raid 5 again. The
question is what I do with my 14 drives. Should I use only 1 pair for
indexes, or should I use 4 drives? The WAL logs are already slated for
an SSD.
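
For the mechanics of the split itself, tablespaces are the usual tool. A
minimal sketch, where the mount points and the table/index names are made
up for illustration:

  -- hypothetical mount points for the two arrays; the directories must
  -- already exist and be owned by the postgres OS user
  CREATE TABLESPACE data_space  LOCATION '/mnt/raid10_data';
  CREATE TABLESPACE index_space LOCATION '/mnt/raid10_idx';

  -- heap on one array, index on the other
  CREATE TABLE orders (id bigint, placed_at timestamptz) TABLESPACE data_space;
  CREATE INDEX orders_id_idx ON orders (id) TABLESPACE index_space;

  -- an existing index can be moved; its files are rewritten in the new
  -- location and the index is locked while that happens
  ALTER INDEX orders_id_idx SET TABLESPACE index_space;
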
On Sat, Aug 29, 2009 at 9:59 AM, Scott Marlowe wrote:
> On Sat, Aug 29, 2009 at 2:46 AM, Greg Stark wrote:
>> On Sat, Aug 29, 2009 at 5:20 AM, Luke Koops wrote:
>>> Joseph S wrote:
>>>> If I have 14 drives in a RAID 10 to split between data tables
>>>> and indexes what would be the best way to allocate [...]
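
One input to the "one pair vs. four drives for indexes" question is how
much of the on-disk footprint the indexes actually account for. A quick
check along these lines, summing sizes per relation kind in the current
database:

  SELECT relkind,              -- 'r' = ordinary tables, 'i' = indexes
         pg_size_pretty(sum(pg_relation_size(oid))::bigint) AS total_size
  FROM pg_class
  WHERE relkind IN ('r', 'i')
  GROUP BY relkind;

If the indexes are a small fraction of the data, a single mirrored pair
for them may be enough; if they see most of the random I/O, more spindles
for the index tablespace is easier to justify.
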