> "ca" == Carsten Aulbert writes:
> "ls" == Lutz Schumann writes:
ca> X25-E drives and a converter from 3.5 to 2.5 inches. So far
ca> two systems have shown pretty bad instabilities with that.
Instability after crashing, or instability while running?
Hi
On Friday 22 January 2010 07:04:06 Brad wrote:
> Did you buy the SSDs directly from Sun? I've heard there could possibly be
> firmware that's vendor specific for the X25-E.
No.
So far I've heard that they are not readily available as certification
procedures are still underway (apart from
Did you buy the SSDs directly from Sun? I've heard there could possibly be
firmware that's vendor specific for the X25-E.
On Thu, 21 Jan 2010, Edward Ned Harvey wrote:
Although it's not technically striped according to the RAID definition of
striping, it does achieve the same performance result (actually better) so
people will generally refer to this as striping anyway.
People will say a lot of things, but that does not make it so.
No. But, that's where the hybrid solution comes in. ASM would be used for the
database files and ZFS for the redo/archive logs and undo. Corrupt blocks in
the datafiles would be repaired with data from redo during a recovery, and ZFS
should give you assurance that the redo didn't get corrupted.
Can ASM match ZFS for checksums and self-healing? The reason I ask is
that the X45x0 uses inexpensive (less reliable) SATA drives. Even the
J4xxx paper you cite uses SAS for production data (only using SATA for
Oracle Flash, although I raised my concerns about that too).
The thing is, ZFS and traditional RAID terminology don't map onto each other exactly.
> zpool create testpool disk1 disk2 disk3
In the traditional sense of RAID, this would create a concatenated data set.
The size of the data set is the size of disk1 + disk2 + disk3. However,
since this is ZFS, it's not constrained to linearly assigning virtual disk
blocks to physical disk blocks; writes are spread dynamically across all
three vdevs.
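For what it's worth, one quick way to see that in practice (just a sketch; the
device names are placeholders and the pool mounts at /testpool by default) is to
push some writes through the pool and watch where they land:

  zpool create testpool disk1 disk2 disk3
  # generate some write traffic
  cp /path/to/some/largefile /testpool/
  # per-vdev capacity and I/O, refreshed every 5 seconds
  zpool iostat -v testpool 5

With three equally empty vdevs you should see the writes spread across all of
them, rather than filling disk1 first the way a simple concatenation would.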
On Thursday 21 January 2010 10:29:16 Edward Ned Harvey wrote:
> > zpool create -f testpool mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0
> > mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0
> > mirror c0t2d0 c1t2d0
> > mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t
> ZFS does not strictly support RAID 1+0. However, your sample command
> will create a pool based on mirror vdevs which is written to in a
> load-shared fashion (not striped). This type of pool is ideal for
> random-access workloads such as an OLTP database.
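(As an aside, once such a pool exists you can sanity-check the layout with

  zpool status testpool

which lists each two-disk mirror as its own top-level vdev; the load sharing
described above happens across those top-level vdevs.)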
Although it's not technically striped according to the RAID definition of
striping, it does achieve the same performance result (actually better) so
people will generally refer to this as striping anyway.
> zpool create -f testpool mirror c0t0d0 c1t0d0 mirror c4t0d0 c6t0d0
> mirror c0t1d0 c1t1d0 mirror c4t1d0 c5t1d0 mirror c6t1d0 c7t1d0
> mirror c0t2d0 c1t2d0
> mirror c4t2d0 c5t2d0 mirror c6t2d0 c7t2d0 mirror c0t3d0 c1t3d0
> mirror c4t3d0 c5t3d0
> mirror c6t3d0 c7t3d0 mirr
On Jan 20, 2010, at 8:14 PM, Brad wrote:
> I was reading your old posts about load-shares
> http://opensolaris.org/jive/thread.jspa?messageID=294580 .
>
> So between raidz and load-share "striping", raidz stripes a file system block
> evenly across each vdev, but with load sharing the file system block is written
> on a vdev that's not filled up.
I was reading your old posts about load-shares
http://opensolaris.org/jive/thread.jspa?messageID=294580 .
So between raidz and load-share "striping", raidz stripes a file system block
evenly across each vdev, but with load sharing the file system block is written
on a vdev that's not filled up.
"Zfs does not do striping across vdevs, but its load share approach
will write based on (roughly) a round-robin basis, but will also
prefer a less loaded vdev when under a heavy write load, or will
prefer to write to an empty vdev rather than write to an almost full
one."
I'm trying to visualize this.
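If it helps, the behaviour is easy to poke at with a throwaway pool built on
file-backed vdevs (the file names and sizes below are arbitrary, and this
assumes you can run it as root):

  # two file-backed vdevs to start with, one to add later
  mkfile 200m /var/tmp/v1 /var/tmp/v2 /var/tmp/v3
  zpool create demo /var/tmp/v1 /var/tmp/v2
  # fill the pool partway
  mkfile 150m /demo/fill
  # add an empty vdev, then write some more
  zpool add demo /var/tmp/v3
  mkfile 100m /demo/fill2
  # per-vdev allocation and I/O -- the later writes favour the emptier v3
  zpool iostat -v demo
  # clean up
  zpool destroy demo
  rm /var/tmp/v1 /var/tmp/v2 /var/tmp/v3

With raidz, by contrast, each block is split across the disks inside the raidz
vdev, so there's no per-vdev choice of this kind to observe within a single vdev.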
@hortnon - ASM is not within the scope of this project.
Have you looked at using Oracle ASM instead of or with ZFS? Recent Sun docs
concerning the F5100 seem to recommend a hybrid of both.
If you don't go that route, you should generally separate the redo logs from the
actual data so they don't compete for I/O, since a lagging redo log switch can
hang the database.
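For example (purely a sketch -- the pool and dataset names here are invented,
and the 8K recordsize assumes the common 8K db_block_size), you might keep the
datafiles and the redo/archive logs in separate datasets, ideally in separate
pools on separate spindles:

  # datafiles: match the dataset recordsize to the Oracle block size
  zfs create -o recordsize=8k datapool/oradata
  # redo and archive logs get their own pool/datasets
  zfs create logpool/redo
  zfs create logpool/arch

That keeps the mostly sequential redo writes from competing with the random
datafile I/O.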
On Wed, 20 Jan 2010, Brad wrote:
Can anyone recommend an optimal and redundant striped configuration
for an X4500? We'll be using it for an OLTP (Oracle) database and
will need the best performance. Is it also true that reads will be
load-balanced across the mirrors?
Is this considered a RAID 1+0 configuration?
Can anyone recommend an optimal and redundant striped configuration for an X4500?
We'll be using it for an OLTP (Oracle) database and will need the best performance.
Is it also true that reads will be load-balanced across the mirrors?
Is this considered a RAID 1+0 configuration?
zpool create -f