On Sun, Feb 14, 2010 at 12:51 PM, Tracey Bernath wrote:
> I went from all four disks of the array at 100%, doing about 170 read
> IOPS/25MB/s, to all four disks of the array at 0%, once hitting nearly
> 500 IOPS/65MB/s off the cache drive (at only 50% load).
> And, keep in mind this was on less
On Mon, 15 Feb 2010, Tracey Bernath wrote:
If the device itself was full, and items were falling off the L2ARC, then I
could see having two separate cache devices, but since I am only at about
50% utilization of the available capacity, and maxing out the IO, then
mirroring seemed smarter.
Am
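A quick way to see that utilization figure per device, cache drive included
(a minimal example, assuming the pool name dpool used elsewhere in this
thread):

# zpool iostat -v dpool 5

The -v output lists capacity used/free and read/write operations for the
cache device on its own line, so "half full but maxed on IO" is directly
visible there.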
On Sun, Feb 14, 2010 at 11:08:52PM -0600, Tracey Bernath wrote:
> Now, to add the second SSD ZIL/L2ARC for a mirror.
Just be clear: mirror ZIL by all means, but don't mirror L2ARC; just
add more devices and let them load-balance. This is especially true
if you're sharing SSD writes with ZIL, as
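The practical difference is just in how the devices are attached; a sketch
with hypothetical device names (not taken from this thread):

# zpool add dpool log mirror c2t0d0s0 c3t0d0s0
# zpool add dpool cache c2t0d0s1 c3t0d0s1

The log vdev is worth mirroring because losing an unmirrored ZIL device can
cost recently committed synchronous writes; cache devices are simply listed
side by side, ZFS spreads reads across them, and a failed cache device only
costs the reads it happened to hold.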
For those following the saga:
With the prefetch problem fixed, and data coming off the L2ARC instead of
the disks, the system switched from IO-bound to CPU-bound. I opened up the
throttles with some explicit PARALLEL hints in the Oracle commands, and we
were finally able to max out the single SSD:
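One way to watch that shift is the L2ARC counters in the ARC kstats; a
minimal check (standard Solaris kstat names, not from the original post):

# kstat -p zfs:0:arcstats:l2_hits
# kstat -p zfs:0:arcstats:l2_size

Climbing l2_hits while the data disks sit idle is exactly the pattern
described above.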
OK, that was the magic incantation I was looking for:
- changing the noprefetch option opened the floodgates to the L2ARC
- changing the max queue depth relieved the wait time on the drives, although
  I may undo this again in the benchmarking since these drives all have NCQ
I went from all four disks of the array at 100%, doing about 170 read
IOPS/25MB/s, to all four disks at 0%, once hitting nearly 500 IOPS/65MB/s
off the cache drive.
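For anyone reproducing this, the two changes above correspond to ZFS
tunables which, on Solaris of this vintage, can be set in /etc/system;
the values below are illustrative, not the poster's:

set zfs:l2arc_noprefetch = 0
set zfs:zfs_vdev_max_pending = 10

l2arc_noprefetch = 0 lets prefetched (streaming) reads be cached in and
served from the L2ARC, and zfs_vdev_max_pending caps the per-device queue
depth behind the wait times mentioned above.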
Thanks Brendan,
I was going to move it over to an 8 KB block size once I got through this
index rebuild. My thinking was that a disproportionate block size would show
up as excessive IO throughput, not a lack of throughput.
The question about the cache comes from the fact that the 18GB or so that it
says is
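The block-size change being discussed is the dataset recordsize; a sketch
with a hypothetical dataset name, matching Oracle's 8 KB db_block_size:

# zfs set recordsize=8k dpool/oradata

recordsize only affects blocks written after the change, which is why it
makes sense to set it before the index rebuild rather than after.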
I have a similar question: I put together a cheapo RAID with four 1TB WD Black
(7200) SATAs in a 3TB RAIDZ1, and I added a 64GB OCZ Vertex SSD, with slice 0
(5GB) for ZIL and the rest of the SSD for cache:
# zpool status dpool
  pool: dpool
 state: ONLINE
 scrub: none requested
config:
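For completeness, attaching a split SSD like that takes two adds; the slice
names here are hypothetical, since the post doesn't show the device path:

# zpool add dpool log c4t0d0s0
# zpool add dpool cache c4t0d0s1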
I don't think adding an SSD mirror to an existing pool will do much for
performance. Some of your data will surely go to those SSDs, but I don't think
Solaris will know they are SSDs and move blocks in and out according to
usage patterns to give you an all-around boost. They will just be used as
ordinary pool storage.
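The two ways of adding those SSDs look similar but behave very differently;
device names below are placeholders:

# zpool add dpool mirror c4t0d0 c4t1d0
# zpool add dpool cache c4t0d0 c4t1d0

The first creates a new top-level mirror vdev: ordinary pool storage that
new writes are merely striped onto. The second attaches the devices as
L2ARC, where blocks genuinely move in and out with ARC eviction, which is
the usage-pattern behaviour the poster is describing.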
Hi all,
just after sending a message to sunmanagers I realized that my question
should rather have gone here, so sunmanagers please excuse the double
post:
I have inherited an X4140 (8 SAS slots) and have just set up the system
with Solaris 10 09. I first set up the system on a mirrored pool ov