Hi,
Roy Sigurd Karlsbakk wrote:
> Crucial RealSSD C300 has been released and is showing good numbers for use as
> ZIL and L2ARC. Does anyone know if this unit flushes its cache on request, as
> opposed to the Intel units etc.?
>
I had a chance to get my hands on a Crucial RealSSD C300/128GB yesterday
Looking forward to seeing your test report for the Intel X-25 and OCZ Vertex 2 Pro...
Thanks.
Fred
-----Original Message-----
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Arne Jansen
Sent: Thursday, June 24, 2010 16:15
To: Roy Sigurd Karlsbakk
Cc: OpenS
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for a dataset used for databases and similar environments, where
basically all blocks have a fixed size and there is no other data, all parity
information will end up on one (z1) or two (z2) specific disks?
No. There are always small
On 23/06/2010 19:29, Ross Walker wrote:
On Jun 23, 2010, at 1:48 PM, Robert Milkowski wrote:
128GB.
Does it mean that for a dataset used for databases and similar environments, where
basically all blocks have a fixed size and there is no other data, all parity
information will end up on one (
Arne Jansen wrote:
> Hi,
>
> Roy Sigurd Karlsbakk wrote:
>> Crucial RealSSD C300 has been released and is showing good numbers for use as
>> ZIL and L2ARC. Does anyone know if this unit flushes its cache on request,
>> as opposed to the Intel units etc.?
>>
>
> I had a chance to get my hands on a Cruci
Lori,
In my case what may have caused the problem is that after a previous
upgrade failed, I used this zfs send/recv procedure to give me (what I
thought was) a sane rpool:
http://blogs.sun.com/migi/entry/broken_opensolaris_never
Is it possible that a zfs recv of a root pool contains the dev
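For reference, the general shape of that kind of root-pool copy (a sketch only, not necessarily the exact steps in the linked post; the pool name rpool2, the BE name, and the disk are placeholders):

  # snapshot the whole root pool and replicate it, unmounted, to a new pool
  zfs snapshot -r rpool@copy
  zfs send -R rpool@copy | zfs receive -Fdu rpool2

  # point the new pool at the boot environment and install boot blocks (x86 shown)
  zpool set bootfs=rpool2/ROOT/opensolaris rpool2
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t1d0s0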
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
> On 23/06/2010 18:50, Adam Leventhal wrote:
>>> Does it mean that for a dataset used for databases and similar environments,
>>> where basically all blocks have a fixed size and there is no other data, all
>>> parity information will end up on one
Arne Jansen wrote:
> Hi,
>
> Roy Sigurd Karlsbakk wrote:
>> Crucial RealSSD C300 has been released and is showing good numbers for use as
>> ZIL and L2ARC. Does anyone know if this unit flushes its cache on request,
>> as opposed to the Intel units etc.?
>>
>
> Also the IOPS with cache flushes is quite
On Thu, June 24, 2010 08:58, Arne Jansen wrote:
> Cross check: we also pulled while writing with the cache enabled, and it lost
> 8 writes.
I'm SO pleased to see somebody paranoid enough to do that kind of
cross-check doing this benchmarking!
"Benchmarking is hard!"
> So I'd say, yes, it flushes i
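For context on why flush-on-request matters for a slog: ZFS issues a cache flush to the log device after every ZIL write before acknowledging the sync operation, so a device that ignores or loses the flush can drop acknowledged writes on power failure, which is exactly what the pull-the-plug test above probes. The opposite knob, which only makes sense with a non-volatile write cache, is the zfs_nocacheflush tunable:

  # /etc/system -- stop ZFS from issuing cache-flush commands to devices.
  # Only safe when every device has a battery- or capacitor-protected cache;
  # otherwise a power pull loses acknowledged writes.
  set zfs:zfs_nocacheflush = 1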
On 24/06/2010 14:32, Ross Walker wrote:
On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
On 23/06/2010 18:50, Adam Leventhal wrote:
Does it mean that for a dataset used for databases and similar environments, where
basically all blocks have a fixed size and there is no other data, al
On Thu, 24 Jun 2010, Ross Walker wrote:
Raidz is definitely made for sequential IO patterns not random. To
get good random IO with raidz you need a zpool with X raidz vdevs
where X = desired IOPS/IOPS of single drive.
Remarkably, I have yet to see mention of someone testing a raidz which
is
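To make the quoted rule of thumb concrete (whether or not one agrees with it; the numbers below are invented for illustration): if a single 7200rpm disk does roughly 100 random IOPS and each raidz vdev behaves like one disk for random IO, then ~1000 random IOPS needs on the order of 10 raidz vdevs:

  # vdevs_needed = desired IOPS / IOPS of one disk, e.g. 1000 / 100 = 10
  # a pool striped across several narrow raidz1 vdevs (disk names are placeholders)
  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
      raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0
  # ...and so on, one raidz vdev per ~100 random IOPS wanted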
I have a customer that described this issue to me in general terms.
I'd like to know how to replicate it, and what the best practice is to avoid
the issue, or fix it in an accepted manner.
If they apply a kernel patch and reboot, they may get messages informing them that the
pool version is down rev
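For what it's worth, checking where the pools and filesystems stand relative to the installed software is quick ('tank' is a placeholder pool name):

  # list any pools whose on-disk version is older than the software supports
  zpool upgrade

  # show the version of one pool
  zpool get version tank

  # the analogous check for filesystem versions
  zfs upgrade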
On 24/06/2010 15:54, Bob Friesenhahn wrote:
On Thu, 24 Jun 2010, Ross Walker wrote:
Raidz is definitely made for sequential IO patterns not random. To
get good random IO with raidz you need a zpool with X raidz vdevs
where X = desired IOPS/IOPS of single drive.
Remarkably, I have yet to see
Where is the link to the script, and does it work with RAIDZ arrays? Thanks so
much.
Hi Shawn,
I think this can happen if you apply patch 141445-09.
It should not happen in the future.
I believe the workaround is this:
1. Boot the system from the correct media.
2. Install the boot blocks on the root pool disk(s).
3. Upgrade the pool.
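Roughly, steps 2 and 3 come down to something like the following (the disk name is a placeholder; installgrub is for x86, installboot for SPARC):

  # x86: install the GRUB boot blocks on each root-pool disk
  installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

  # SPARC: install the ZFS boot block instead
  installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t0d0s0

  # then bring the pool up to the version the patched kernel supports
  zpool upgrade rpool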
Thanks,
Cindy
On 06/24/10 09:24, Shawn
This day went from a usual Thursday to the worst day of my life in the span of about
10 seconds. Here's the scenario:
Two computers, both Solaris 10u8: one is the primary, one is the backup. Primary
system is RAIDZ2, Backup is RAIDZ with 4 drives. Every night, Primary mirrors
to Backup using the 'zfs
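Presumably the nightly mirror is a zfs send/receive job; a generic sketch of that kind of incremental replication (pool, snapshot and host names are placeholders, not the poster's actual script):

  # on the Primary: take tonight's recursive snapshot
  zfs snapshot -r tank@2010-06-24

  # send everything that changed since last night's snapshot to the Backup box;
  # -F on the receive rolls the target back to the last common snapshot first
  zfs send -R -i tank@2010-06-23 tank@2010-06-24 | \
      ssh backup zfs receive -Fd backuppool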
But it's early (for me), and I can't remember the answer here.
I'm sizing an Oracle database appliance. I'd like to get one of the
F20 96GB flash accelerators to play with, but I can't imagine I'd be
using the whole thing for ZIL. The DB is likely to be a couple TB in size.
Couple of ques
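A rough back-of-the-envelope for the ZIL part, assuming the usual guidance that a slog only has to hold the synchronous writes that arrive between transaction group commits (the numbers are invented for illustration):

  # slog_size ~= peak sync-write throughput x seconds between txg commits
  echo "$((200 * 10)) MB of log in flight"   # 200 MB/s for ~10 s => ~2 GB
  # so a few GB of the F20 would cover the ZIL; the couple-of-TB database
  # size doesn't figure into slog sizing at all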
On Jun 24, 2010, at 10:42 AM, Robert Milkowski wrote:
> On 24/06/2010 14:32, Ross Walker wrote:
>> On Jun 24, 2010, at 5:40 AM, Robert Milkowski wrote:
>>
>>
>>> On 23/06/2010 18:50, Adam Leventhal wrote:
>>>
> Does it mean that for a dataset used for databases and similar environment
Hey Robert,
I've filed a bug to track this issue. We'll try to reproduce the problem and
evaluate the cause. Thanks for bringing this to our attention.
Adam
On Jun 24, 2010, at 2:40 AM, Robert Milkowski wrote:
> On 23/06/2010 18:50, Adam Leventhal wrote:
>>> Does it mean that for dataset used
On 24/06/2010 17:49, Erik Trimble wrote:
But it's early (for me), and I can't remember the answer here.
I'm sizing an Oracle database appliance. I'd like to get one of the F20
96GB flash accelerators to play with, but I can't imagine I'd be using
the whole thing for ZIL. The DB is likely to be
Ross Walker wrote:
Raidz is definitely made for sequential IO patterns not random. To get good
random IO with raidz you need a zpool with X raidz vdevs where X = desired
IOPS/IOPS of single drive.
I have seen statements like this repeated several times, though
I haven't been able to find an
On 06/24/10 03:27 AM, Brian Nitz wrote:
Lori,
In my case what may have caused the problem is that after a previous
upgrade failed, I used this zfs send/recv procedure to give me (what I
thought was) a sane rpool:
http://blogs.sun.com/migi/entry/broken_opensolaris_never
Is it possible that a
On 24/06/2010 20:52, Arne Jansen wrote:
Ross Walker wrote:
Raidz is definitely made for sequential IO patterns not random. To
get good random IO with raidz you need a zpool with X raidz vdevs
where X = desired IOPS/IOPS of single drive.
I have seen statements like this repeated several tim
On Tue, 22 Jun 2010, Arne Jansen wrote:
> We found that the zfs utility is very inefficient as it does a lot of
> unnecessary and costly checks.
Hmm, presumably somebody at Sun doesn't agree with that assessment or you'd
think they'd take them out :).
Mounting/sharing by hand outside of the zfs
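For anyone wondering what mounting and sharing by hand looks like, the usual route is legacy mountpoints plus the base OS tools; a sketch with placeholder names:

  # take the dataset out of zfs's automatic mount/share management
  zfs set mountpoint=legacy tank/home/user1
  zfs set sharenfs=off tank/home/user1

  # then mount and share it with the ordinary system utilities
  mount -F zfs tank/home/user1 /export/home/user1
  share -F nfs -o rw /export/home/user1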