On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
> UTSL. compressratio is the ratio of uncompressed bytes to compressed bytes.
> http://cvs.opensolaris.org/source/search?q=ZFS_PROP_COMPRESSRATIO&defs=&refs=&path=zfs&hist=&project=%2Fonnv
>
> IMHO, you will (almost) never get the same number looking at bytes as you
> get from counting blocks.
Hi Jorgen,
Jorgen Lundman wrote:
> Although, I'm having some issues finding the exact process to go from
> fmdump's "HD_ID_9" to zpool offline "c0t?d?" style input.
>
> I cannot run "zpool status" or the "format" command, as they hang. All the
> Sun documentation already assumes you know the c?t?d? disk name.
Although, I'm having some issues finding the exact process to go from
fmdump's "HD_ID_9" to zpool offline "c0t?d?" style input.
I cannot run "zpool status" or the "format" command, as they hang. All the
Sun documentation already assumes you know the c?t?d? disk name. Today,
it is easier to just p
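A rough sketch of one way to get from the FMA fault record to the c#t#d# name, assuming the fault UUID and device path fragment below are placeholders you fill in from your own fmdump output:

  # Show the full fault record; it names the faulted resource as a /devices path
  fmdump -v -u <fault-uuid>
  fmadm faulty
  # /dev/rdsk names are symlinks to those /devices paths, so match the fragment
  ls -l /dev/rdsk/*s0 | grep '<devices-path-fragment>'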
Today, we found the x4500's NFS stopped responding for about 1 minute, but
the server itself was idle. After a little looking around, we found this:
Apr 16 09:16:00 x4500-01.unix fmd: [ID 441519 daemon.error] SUNW-MSG-ID:
DISK-8000-0X, TYPE: Fault, VER: 1, SEVERITY: Major
Apr 16 09:16:00 x4500-01
UTSL. compressratio is the ratio of uncompressed bytes to compressed bytes.
http://cvs.opensolaris.org/source/search?q=ZFS_PROP_COMPRESSRATIO&defs=&refs=&path=zfs&hist=&project=%2Fonnv
IMHO, you will (almost) never get the same number looking at bytes as you
get from counting blocks.
-- richard
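For what it's worth, a small sketch of the two different calculations being compared in this thread (the 2 GB and 90.4 MB figures are the ones quoted elsewhere in the thread, nothing authoritative):

  # Byte-level ratio computed from file size vs. USED:
  echo 'scale=2; 2048 / 90.4' | bc        # 2 GB / 90.4 MB = 22.65
  # compressratio is accumulated per block as uncompressed vs. compressed
  # allocated bytes; blocks that compress away entirely (e.g. all zeroes)
  # become holes and, as far as I understand, never enter that accounting,
  # so the two numbers rarely agree:
  zfs get compressratio export-cit/compress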
On Tue, Apr 15, 2008 at 12:12 PM, Bob Friesenhahn
<[EMAIL PROTECTED]> wrote:
> On Tue, 15 Apr 2008, Brandon High wrote:
> > I think RAID-Z is different, since the stripe needs to spread across
> > all devices for protection. I'm not sure how it's done.
>
> My understanding is that RAID-Z is indeed different and does NOT have
> to spread across all devices for protection.
Bob Friesenhahn wrote:
> On Tue, 15 Apr 2008, Maurice Volaski wrote:
>
>> 4 drive failures over 5 years. Of course, YMMV, especially if you
>> drive drunk :-)
>>
>
> Note that there is a difference between drive failure and media data
> loss. In a system which has been running fine for a w
On Fri, Apr 11, 2008 at 06:39:00PM -0700, George Wilson wrote:
> Solaris Installation Guide
> System Administration Guide: Basic
> ZFS Administration Guide
> System Administration Guide: Devices and File Systems
Where can I get the updated guides?
> For further inf
Marco Sommella wrote:
> Hi everyone, I'm new.
>
> I need to create a ZFS filesystem in my home directory as a normal user. I
> added the sys_mount and sys_config privileges to my user account in
> /etc/user_attr. I executed:
Instead of using RBAC for this it is much easier and much more flexible
to use th
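Assuming the truncated suggestion above refers to ZFS delegated administration rather than RBAC, a minimal sketch would look like this (the pool, dataset, and user names are only illustrative):

  # Delegate create/mount/snapshot on the user's home dataset to that user
  zfs allow -u marco create,mount,snapshot tank/home/marco
  # Show what has been delegated
  zfs allow tank/home/marco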
On Tue, 15 Apr 2008, Mark Maybee wrote:
> going to take 12sec to get this data onto the disk. This "impedance
> mis-match" is going to manifest as pauses: the application fills
> the pipe, then waits for the pipe to empty, then starts writing again.
> Note that this won't be smooth, since we need
I have some code that implements background media scanning so I am able to
detect bad blocks well before zfs encounters them. I need a script or
something that will map the known bad block(s) to a logical block so I can
force zfs to repair the bad block from redundant/parity data.
I can't fin
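One low-tech way to get the same repair effect, assuming the pool is redundant (mirror or raidz): any read that fails its checksum is rewritten from the good copy, and a scrub reads everything. Pool and file names are illustrative:

  # Repair every bad block that redundancy can reconstruct
  zpool scrub tank
  zpool status -v tank        # progress plus any files with unrecoverable errors
  # Or, to target one suspect file, simply read it end to end
  dd if=/tank/data/suspect.file of=/dev/null bs=1024k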
Does anybody have a guess how long it takes to
import one zpool with lots of LUNs?
Our plan is to use ~ 400 LUNs in raidz 6+1 configuration
in one large pool.
Regards,
Ulrich
--
| Ulrich Graef, Senior System Engineer, OS Ambassador\
| Operating Systems, Performance \ Platform Techn
ZFS has always done a certain amount of "write throttling". In the past
(or the present, for those of you running S10 or pre build 87 bits) this
throttling was controlled by a timer and the size of the ARC: we would
"cut" a transaction group every 5 seconds based off of our timer, and
we would als
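The "impedance mis-match" described here is easy to put rough numbers on; a back-of-the-envelope sketch with made-up figures:

  # All numbers are illustrative: how long writers stall while a txg syncs
  awk 'BEGIN {
    dirty_mb   = 600;    # data accepted into one transaction group
    drain_mb_s = 50;     # sustained write bandwidth of the pool
    printf "syncing %d MB at %d MB/s takes ~%.0f s; new writes wait meanwhile\n",
           dirty_mb, drain_mb_s, dirty_mb / drain_mb_s
  }'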
Truly :)
I was planning something like 3 pools concatenated. But we are only populating
12 bays at the moment.
Blake
Tim wrote:
> I'm sure you're already aware, but if not, 22 drives in a raid-6 is
> absolutely SUICIDE when using SATA disks. 12 disks is the upper end of
> what you want even with raid-6. The odds of you losing data in a 22
> disk raid-6 is far too great to be worth it if you care about your
On Tue, 15 Apr 2008, Brandon High wrote:
>
> I think RAID-Z is different, since the stripe needs to spread across
> all devices for protection. I'm not sure how it's done.
My understanding is that RAID-Z is indeed different and does NOT have
to spread across all devices for protection. It can us
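A simplified model of RAID-Z dynamic stripe width, just to illustrate the point; real allocation also rounds and pads, so treat this strictly as a sketch:

  awk 'BEGIN {
    ndisks = 7; nparity = 1; sector = 512;
    blocksize = 2048;                      # a small 4-sector block
    cols = blocksize / sector + nparity;   # data sectors plus parity
    if (cols > ndisks) cols = ndisks;      # never wider than the vdev
    printf "a %d-byte block occupies %d of %d disks\n", blocksize, cols, ndisks
  }'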
On Tue, Apr 15, 2008 at 2:44 AM, Jayaraman, Bhaskar
<[EMAIL PROTECTED]> wrote:
> Thanks Brandon, so basically there is no way of knowing: -
> 1] How your file will be distributed across the disks
> 2] What will be the stripe size
You could look at the source to try to determine it. I'm not sure
Luke Scharf wrote:
> Maurice Volaski wrote:
>
>>> Perhaps providing the computations rather than the conclusions would
>>> be more persuasive on a technical list ;>
>>>
>>>
>> 2 16-disk SATA arrays in RAID 5
>> 2 16-disk SATA arrays in RAID 6
>> 1 9-disk SATA array in RAID 5.
>>
>
On Apr 15, 2008, at 11:18 AM, Bob Friesenhahn wrote:
> On Tue, 15 Apr 2008, Keith Bierman wrote:
>>
>> Perhaps providing the computations rather than the conclusions
>> would be more persuasive on a technical list ;>
>
> No doubt. The computations depend considerably on the size of the
> dis
On Tue, Apr 15, 2008 at 01:37:43PM -0400, Luke Scharf wrote:
>
> >>>zfs list /export/compress
> >>>
> >>>
> >> NAME                 USED   AVAIL  REFER  MOUNTPOINT
> >> export-cit/compress  90.4M  1.17T  90.4M  /export/compress
> >>
> >>is 2GB/90.4M = 2048 / 90.4 = 22.65
> >>
> >>
> >>That
Maurice Volaski wrote:
>> Perhaps providing the computations rather than the conclusions would
>> be more persuasive on a technical list ;>
>>
>
> 2 16-disk SATA arrays in RAID 5
> 2 16-disk SATA arrays in RAID 6
> 1 9-disk SATA array in RAID 5.
>
> 4 drive failures over 5 years. Of course, YMMV, especially if you
> drive drunk :-)
>>> zfs list /export/compress
>>>
>>>
>> NAME                 USED   AVAIL  REFER  MOUNTPOINT
>> export-cit/compress  90.4M  1.17T  90.4M  /export/compress
>>
>> is 2GB/90.4M = 2048 / 90.4 = 22.65
>>
>>
>> That still leaves me puzzled: what is the precise definition of compressratio?
>>
On Tue, 15 Apr 2008, Maurice Volaski wrote:
> 4 drive failures over 5 years. Of course, YMMV, especially if you
> drive drunk :-)
Note that there is a difference between drive failure and media data
loss. In a system which has been running fine for a while, the chance
of a second drive failing d
>Perhaps providing the computations rather than the conclusions would
>be more persuasive on a technical list ;>
2 16-disk SATA arrays in RAID 5
2 16-disk SATA arrays in RAID 6
1 9-disk SATA array in RAID 5.
4 drive failures over 5 years. Of course, YMMV, especially if you
drive drunk :-)
--
On Tue, Apr 15, 2008 at 12:03 PM, Keith Bierman <[EMAIL PROTECTED]> wrote:
>
> On Apr 15, 2008, at 10:58 AM, Tim wrote:
>
>
>
> On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski <[EMAIL PROTECTED]>
> wrote:
>
> > I have 16 disks in RAID 5 and I'm not worried.
> >
> > >I'm sure you're already aware
On Tue, 15 Apr 2008, Keith Bierman wrote:
>
> Perhaps providing the computations rather than the conclusions would be more
> persuasive on a technical list ;>
No doubt. The computations depend considerably on the size of the
disk drives involved. The odds of experiencing media failure on a
s
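For anyone who wants to plug in their own numbers, a back-of-the-envelope sketch; the unrecoverable-read-error rate and the amount of data read are assumed, datasheet-style values, not measurements:

  awk 'BEGIN {
    tb_read = 14;                     # data read to rebuild the array, in TB
    ure     = 1e-14;                  # assumed errors per bit read (SATA class)
    bits    = tb_read * 8 * 1e12;
    p       = 1 - exp(-bits * ure);   # Poisson approximation
    printf "P(at least one unrecoverable read error) ~ %.0f%%\n", p * 100
  }'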
Right, a nice depiction of the failure modes involved and their
probabilities, based on typical published MTBF figures for the components,
along with the usual arguments and caveats, please? Does anyone have the
cycles to actually illustrate this, or URLs to such studies?
On Tue, Apr 15, 2008 at 1:03 PM, Keith Bierman <[E
On Apr 15, 2008, at 10:58 AM, Tim wrote:
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski
<[EMAIL PROTECTED]> wrote:
I have 16 disks in RAID 5 and I'm not worried.
>I'm sure you're already aware, but if not, 22 drives in a raid-6 is
>absolutely SUICIDE when using SATA disks. 12 disks is
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski <[EMAIL PROTECTED]>
wrote:
> I have 16 disks in RAID 5 and I'm not worried.
>
> >I'm sure you're already aware, but if not, 22 drives in a raid-6 is
> >absolutely SUICIDE when using SATA disks. 12 disks is the upper end of what
> >you want even
On Tue, 15 Apr 2008, Luke Scharf wrote:
>
> AFAIK, ext3 supports sparse files just like it should -- but it doesn't
> dynamically figure out what to write based on the contents of the file.
Since zfs inspects all data anyway in order to compute the block
checksum, it can easily know if a block is
You can fill up an ext3 filesystem with the following command:
dd if=/dev/zero of=delme.dat
You can't really fill up a ZFS filesystem that way. I guess you could,
but I've never had the patience -- when several GB worth of zeroes takes
1kb worth of data, then it would take a very long time.
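A quick way to see this for yourself; the dataset name is illustrative, and it assumes compression is enabled so that runs of zeroes are stored as holes:

  zfs create -o compression=on tank/ztest
  dd if=/dev/zero of=/tank/ztest/zeroes bs=1024k count=1024
  sync
  zfs list tank/ztest          # USED stays tiny even though the file is 1 GB
  ls -ls /tank/ztest/zeroes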
This may be my ignorance, but I thought all modern unix filesystems created
sparse files in this way?
-Original Message-
From: Stuart Anderson <[EMAIL PROTECTED]>
Date: Mon, 14 Apr 2008 15:45:03
To: Luke Scharf <[EMAIL PROTECTED]>
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss
I have 16 disks in RAID 5 and I'm not worried.
>I'm sure you're already aware, but if not, 22 drives in a raid-6 is
>absolutely SUICIDE when using SATA disks. 12 disks is the upper end of what
>you want even with raid-6. The odds of you losing data in a 22 disk raid-6
>is far too great to be wor
On Tue, Apr 15, 2008 at 9:22 AM, David Collier-Brown <[EMAIL PROTECTED]> wrote:
> We've discussed this in considerable detail, but the original
> question remains unanswered: if an organization *must* use
> multiple pools, is there an upper bound to avoid or a rate
> of degradation to be cons
We've discussed this in considerable detail, but the original
question remains unanswered: if an organization *must* use
multiple pools, is there an upper bound to avoid or a rate
of degradation to be considered?
--dave
--
David Collier-Brown| Always do right. This will gratify
Sun
Hi everyone, I'm new.
I need to create a ZFS filesystem in my home directory as a normal user. I
added the sys_mount and sys_config privileges to my user account in
/etc/user_attr. I executed:
/usr/sbin/mkfile 100M tank_file
/sbin/zpool create -R /export/home/marco/tank tank
/export/home/marco/tank_file
On Mon, Apr 14, 2008 at 9:41 PM, Tim <[EMAIL PROTECTED]> wrote:
> I'm sure you're already aware, but if not, 22 drives in a raid-6 is
> absolutely SUICIDE when using SATA disks. 12 disks is the upper end of what
> you want even with raid-6. The odds of you losing data in a 22 disk raid-6
> is far