Hello Luke,
Tuesday, April 15, 2008, 4:50:17 PM, you wrote:
LS> You can fill up an ext3 filesystem with the following command:
LS> dd if=/dev/zero of=delme.dat
LS> You can't really fill up a ZFS filesystem that way. I guess you could,
LS> but I've never had the patience -- when several GB wo
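For comparison, a rough sketch of the same fill test against a ZFS dataset (pool and dataset names here are made up); note that if compression is enabled on the dataset, a stream from /dev/zero compresses away to almost nothing, so /dev/urandom is the more honest filler:
# zfs create tank/fill                                  # hypothetical scratch dataset
# dd if=/dev/urandom of=/tank/fill/delme.dat bs=1M      # runs until the pool reports ENOSPC
# zfs list -o name,used,avail tank/fill                 # watch used grow and avail shrink
# zfs destroy tank/fill                                 # reclaim the space afterwards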
(I tried to post this yesterday, but I haven't seen it come through the list
yet. I apologize if this is a duplicate posting. I added some updated
information regarding a Sun bug ID below.)
We're in the process of setting up a Sun Cluster on two M5000s attached to a
DMX1000 array. The M5000s
Greetings,
snv_79a
AMD 64x2 in 64-bit kernel mode.
I'm in the middle of migrating a large zfs set from a pair of 1TB mirrors
to a 1.3TB RAIDz.
I decided to use zfs send | zfs receive, so the first order of business
was to snap the entire source filesystem.
# zfs snapshot -r [EMAIL PROTECTED
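For anyone following along, the rest of the migration might look roughly like the sketch below; pool and snapshot names are invented, and zfs send -R assumes a build recent enough to support recursive replication streams:
# zfs snapshot -r oldpool@migrate                            # recursive snapshot of every dataset
# zfs send -R oldpool@migrate | zfs receive -d -F newpool    # replicate the whole hierarchy, snapshots included
# zfs list -r newpool                                        # verify the datasets and snapshots arrived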
Dear All,
I've just joined this list, and am trying to understand the state of
play with using free backup solutions for ZFS, specifically on a Sun
x4500.
The x4500 we have is used as a file store, serving clients using NFS
only.
I'm handling the issue of recovery of accidentally deleted f
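One common approach here is simple scheduled snapshots: accidentally deleted files can then be copied straight back out of the hidden .zfs/snapshot directory, which can also be made visible to NFS clients. A minimal sketch with made-up dataset and snapshot names:
# zfs snapshot tank/export/home@hourly-2008-04-16-12        # typically driven from cron; name is arbitrary
# zfs set snapdir=visible tank/export/home                  # expose .zfs/snapshot to NFS clients
$ cp /tank/export/home/.zfs/snapshot/hourly-2008-04-16-12/lost.doc ~/    # restore a deleted file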
Hi folks,
We are scratching our heads over this.
We have a Solaris box with 10 disks mounted over 2 Linux iSCSI target hosts.
Across this we run a zfs pool tank like so:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          raidz1    ONLINE
I hate it when I find a problem in a forum that matches my problem but has no
replies. So I will update this with the information I have found.
First of all, the BeleniX LiveCD uses a different version of ZFS, which is why it
was having so much trouble trying to import the zpool from the other machine.
I'm having a serious problem with a customer running a T2000 with ZFS
configured as raidz1 with 4 disks, no spare.
The machine is mostly a Cyrus IMAP server and web application server running the
Ajax webmail app.
Yesterday we had a severe slowdown.
Tomcat runs smoothly, but the IMAP access is ve
Hello.
A video of the ZFS survivability demonstration is available on
YouTube. This was a live demonstration of ZFS reliability in St. Petersburg, Russia, in
front of ~2000 people, taking a sledgehammer to disk drives in a running ZFS RAID Z2 group.
http://www.youtube.com/watch?v=CN6
Sam Nicholson wrote:
Greetings,
snv_79a
AMD 64x2 in 64-bit kernel mode.
I'm in the middle of migrating a large zfs set from a pair of 1TB mirrors
to a 1.3TB RAIDz.
I decided to use zfs send | zfs receive, so the first order of business
was to snap the entire source filesystem.
# zfs sna
Jacob Ritorto wrote:
> Right, a nice depiction of the failure modes involved and their
> probabilities based on typical published mtbf of components and other
> arguments/caveats, please? Does anyone have the cycles to actually
> illustrate this or have urls to such studies?
>
Yes, this is wha
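As a hedged back-of-the-envelope only (not a substitute for a real study), the usual single-parity MTTDL approximation with made-up round numbers gives a feel for the scale:

    MTTDL(raidz1) ~= MTBF^2 / (N * (N-1) * MTTR)

    e.g. MTBF = 1,000,000 hours, N = 8 disks, MTTR = 24 hours:
    MTTDL ~= 10^12 / (8 * 7 * 24) ~= 7.4 x 10^8 hours, on the order of 85,000 years

This ignores unrecoverable read errors during reconstruction, which for large SATA drives is usually the dominant failure mode.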
Stuart Anderson wrote:
> On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
>
>> UTSL. compressratio is the ratio of uncompressed bytes to compressed bytes.
>> http://cvs.opensolaris.org/source/search?q=ZFS_PROP_COMPRESSRATIO&defs=&refs=&path=zfs&hist=&project=%2Fonnv
>>
>> IMHO, y
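For reference, the property can be checked per dataset; a minimal sketch with a made-up dataset name:
# zfs get compression,compressratio,used tank/data     # compressratio = uncompressed bytes / compressed bytes
# zfs get -r compressratio tank                        # the same, for every dataset in the pool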
David wrote:
> I have some code that implements background media scanning so I am able to
> detect bad blocks well before zfs encounters them. I need a script or
> something that will map the known bad block(s) to a logical block so I can
> force zfs to repair the bad block from redundant/pari
On Wed, Apr 16, 2008 at 10:09:00AM -0700, Richard Elling wrote:
> Stuart Anderson wrote:
> >On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
> >
> >>UTSL. compressratio is the ratio of uncompressed bytes to compressed
> >>bytes.
> >>http://cvs.opensolaris.org/source/search?q=ZFS_
I see that 6528296 was fixed recently, and the bug report says that the fix is
available in b87. Since the latest SXCE available is build 86, I just installed SXCE 86,
and then did a BFU using
http://dlc.sun.com/osol/on/downloads/b87/on-bfu-nightly-osol-nd.i386.tar.bz2 on
April 15.
Can someone comment on wh
Stuart Anderson wrote:
> On Wed, Apr 16, 2008 at 10:09:00AM -0700, Richard Elling wrote:
>
>> Stuart Anderson wrote:
>>
>>> On Tue, Apr 15, 2008 at 03:51:17PM -0700, Richard Elling wrote:
>>>
>>>
UTSL. compressratio is the ratio of uncompressed bytes to compressed
bytes
David Lethe wrote:
> Read ... What? All I have is block x on physical device y. Granted zfs
> recalculates parity, but zfs won't do this unless I can read the appropriate
> storage pool and offset.
>
Read the data (file) which is using the block.
When you scrub a ZFS file system, if a b
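In other words, something along these lines (pool and file names made up; mapping an arbitrary physical block back to its file generally needs zdb and is not shown here):
# zpool status -v tank                                # once ZFS has seen the error, -v lists the affected files
# dd if=/tank/data/suspect-file of=/dev/null bs=128k  # force a full read; a checksum mismatch is repaired from redundancy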
New problem. We have patched the system and it has fixed the error creating
dirs/files on the ZFS filesystem. Now I am getting permission errors with mv/cp
from one of these ZFS areas to a regular FreeBSD server using UFS. Thoughts?
Stuart Anderson <[EMAIL PROTECTED]> wrote:
> They report the exact same number as far as I can tell. With the caveat
> that Solaris ls -s returns the number of 512-byte blocks, whereas
> GNU ls -s returns the number of 1024-byte blocks by default.
IIRC, this may be controlled by environment variab
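If memory serves, GNU ls honours BLOCK_SIZE and POSIXLY_CORRECT, and comparing against du -k side-steps the unit ambiguity entirely; a small sketch (file name made up):
$ /usr/bin/ls -s bigfile                 # Solaris ls: size in 512-byte blocks
$ du -k bigfile                          # kilobytes, the same on Solaris and GNU userlands
$ POSIXLY_CORRECT=1 ls -s bigfile        # GNU ls: falls back to 512-byte blocks instead of its 1024-byte default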
Brandon High wrote:
> On Tue, Apr 15, 2008 at 2:44 AM, Jayaraman, Bhaskar
> <[EMAIL PROTECTED]> wrote:
>
>> Thanks Brandon, so basically there is no way of knowing:
>> 1] How your file will be distributed across the disks
>> 2] What will be the stripe size
>>
>
> You could look at the s
On Tue, Apr 15, 2008 at 12:54 PM, David <[EMAIL PROTECTED]> wrote:
> I have some code that implements background media scanning so I am able to
> detect bad blocks well before zfs encounters them. I need a script or
> something that will map the known bad block(s) to a logical block so I can
>
On Wed, Apr 16, 2008 at 3:19 PM, Richard Elling <[EMAIL PROTECTED]> wrote:
> Brandon High wrote:
> > The stripe will be spread across all vdevs that have free space. For each
> > stripe written, more data will land on the empty vdev. Once the
> > previously existing vdevs fill up, writes will go to the ne
> In a case where a new vdev is added to an almost full zpool, more of
> the writes should land on the empty device though, right? So maybe 2
> slabs will land on the new vdev for every one that goes to a
> previously existing vdev.
(Un)Available disk space influences vdev selection. New writes
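One way to watch this in practice is zpool iostat -v, which breaks out capacity and write activity per top-level vdev; device names below are made up:
# zpool add tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0    # hypothetical new top-level raidz vdev
# zpool iostat -v tank 5                              # per-vdev alloc/free and write bandwidth, sampled every 5 seconds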
On Apr 15, 2008, at 13:18, Bob Friesenhahn wrote:
> ZFS raidz1 and raidz2 are NOT directly equivalent to RAID5 and RAID6,
> so the failure statistics would be different. Regardless, single disk
> failure in a raidz1 substantially increases the risk that something
> won't be recoverable if there is
On Tue, 15 Apr 2008, Darren J Moffat wrote:
> Instead of using RBAC for this it is much easier and much more flexible
> to use the ZFS delegated admin.
>
> # zfs allow -u marco create,mount tank/home/marco
>
> This then allows:
>
> marco$ zfs create tank/home/marco/Documents
>
> See the zfs(1) man
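Two related commands worth knowing, using the same made-up dataset as the example above:
# zfs allow tank/home/marco                            # list what has been delegated on the dataset
# zfs unallow -u marco create,mount tank/home/marco    # revoke the delegation again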
Richard Elling wrote:
> David wrote:
>> I have some code that implements background media scanning so I am able to
>> detect bad blocks well before zfs encounters them. I need a script or
>> something that will map the known bad block(s) to a logical block so I can
>> force zfs to repair the b
On Wed, 16 Apr 2008, David Magda wrote:
>> RAID5 and RAID6 rebuild the entire disk while raidz1 and raidz2 only
>> rebuild existing data blocks, so raidz1 and raidz2 are less likely to
>> experience media failure if the pool is not full.
>
> While the failure statistics may be different, I think an
On Wed, Apr 16, 2008 at 02:07:53PM -0700, Richard Elling wrote:
>
> >>Personally, I'd estimate using du rather than ls.
> >>
> >
> >They report the exact same number as far as I can tell. With the caveat
> >that Solaris ls -s returns the number of 512-byte blocks, whereas
> >GNU ls -s returns
If you don't do background scrubbing, you don't know about bad blocks in
advance. If you're running RAID-Z, this means you'll lose data if a block is
unreadable and another device goes bad. This is the point of scrubbing, it lets
you repair the problem while you still have redundancy. :-)
Wheth
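For the archives, kicking off and checking a scrub looks roughly like this (pool name made up):
# zpool scrub tank        # read and verify every allocated block, repairing from redundancy where possible
# zpool status -v tank    # shows scrub progress and any files with unrecoverable errors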