-Original Message-
From: richard.ell...@sun.com [mailto:richard.ell...@sun.com]
Sent: Tuesday, December 16, 2008 8:04 PM
To: Jonathan
Cc: Glaser, David; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Drive Checksum error
Glaser, David wrote:
> Hi all,
>
> A few weeks ago I was inquiring of the group on how often to do zfs
> scrubs of pools on our x4500's.
Hi all,
A few weeks ago I was inquiring of the group on how often to do zfs scrubs of
pools on our x4500's. Figures that the first time I try a monthly scrub of our
pools, one of the three machines throws an error. On one of the machines,
there's one disk that has registered one checksum error. The pool and its
filesystems and snapshots look fine, but I'll keep an eye on it.
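For the archives, the check amounts to something like this (pool name 'backup'
is mine; the device name below is a made-up example):

   # zpool scrub backup
   # zpool status -v backup     (per-device READ/WRITE/CKSUM counters)
   # zpool clear backup c4t2d0  (resets the counters once you trust the disk)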
Thanks all.
Dave
-Original Message-
From: Paul Weaver [mailto:[EMAIL PROTECTED]
Sent: Tuesday, December 02, 2008 8:11 AM
To: Glaser, David; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] How often to scrub?
> I have a Thumper (ok, actually 3), each with one large pool, multiple
> filesystems and many snapshots.
Hi all,
I have a Thumper (ok, actually 3), each with one large pool, multiple
filesystems and many snapshots. They hold rsync copies of multiple clients,
synced every night (using snapshots to keep 'incremental' backups).
I'm wondering how often (if ever) I should do scrubs on these pools.
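My tentative plan is a monthly cron entry along these lines (pool name is
mine; adjust the schedule to a quiet window, since a scrub competes with
normal I/O):

   0 2 1 * * /usr/sbin/zpool scrub backup

(that is, 02:00 on the first of each month).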
Are you asking whether you can snapshot a 100GB zvol at all, or whether the
snapshot can grow to over 100GB? I have an x4500 with nightly snapshots being
taken of a 7 terabyte filesystem (each nightly snapshot is about 20GB).
I don't believe there is a functional limit on the size of a snapshot beyond
the free space in the pool.
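You can watch how much space a snapshot is pinning with something like this
(dataset names made up):

   # zfs snapshot backup/somevol@nightly
   # zfs list -t snapshot -o name,used,referenced

A snapshot starts out near zero, and its USED grows as blocks in the live
dataset are overwritten, so in the worst case it approaches the amount of
data referenced when the snapshot was taken.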
As shipped, our x4500s have 8 raidz pools with 6 disks each in them. If spaced
right, you can lose 6(?) disks without a pool dying. The root disk is
mirrored, so if one dies it's not the end of the world. With the exception
that grub is thoroughly fraked up, in that if the 0 disk dies, you can't boot
from the other half of the mirror until grub is reinstalled on it.
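The workaround is to install grub on the second half of the root mirror
yourself; a sketch for Solaris x86, with a made-up device name for the other
mirror disk:

   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0

After that, the BIOS can boot from either disk if disk 0 dies.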
I have a disk that went 'bad' on a x4500. It came up with UNAVAIL in a zpool
status and was 'unconfigured' in cfgadm. The x4500 has a cute little blue light
that tells you when it's able to be removed. With it on, I replaced the disk
and reconfigured it with cfgadm.
Now cfgadm lists it as configured.
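For the record, the sequence was roughly the following (the attachment point
and device names are examples, not necessarily my exact ones):

   # cfgadm | grep sata               (find the failed port, e.g. sata1/3)
   # cfgadm -c unconfigure sata1/3    (the blue ready-to-remove LED comes on)
     ...swap the drive...
   # cfgadm -c configure sata1/3
   # zpool replace backup c1t3d0      (kicks off the resilver)
   # zpool status backup              (watch resilver progress)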
When you say 'removing a disk' from a zpool, do you mean shrinking a zpool by
logically taking disks away from it, or just removing a failing disk from a
zpool?
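The distinction matters because, as of current builds, you can't shrink a
pool: zpool remove only works on hot spares and cache devices, not on data
vdevs. Swapping out a failing disk is fine, though; a sketch with made-up
names:

   # zpool replace backup c1t3d0   (resilver onto a new disk in the same slot)
   # zpool detach backup c1t3d0    (only valid for one side of a mirror)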
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Daniel Polombo
Sent: Wednesday, August 20, 2008
We had the same problem; at least a good chunk of the zfs volumes died when
the drive failed. Granted, I don't think the drive actually failed; it was
more likely a driver issue/lockup. A reboot 2 weeks ago brought the machine
back up and the drive hasn't had a problem since. I was behind on two patches
that may have been related.
Could I trouble you for the x86 package? I don't seem to have much in the way
of software on this try-n-buy system...
Thanks,
Dave
-Original Message-
From: Will Murnane [mailto:[EMAIL PROTECTED]
Sent: Thursday, July 10, 2008 12:58 PM
To: Glaser, David
Cc: zfs-discuss@opensolaris.org
Sent: Thursday, July 10, 2008 12:50 PM
To: Glaser, David
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions
Glaser, David wrote:
> I guess what I was wondering was whether there was a direct method rather
> than the overhead of ssh.
As others have suggested, use netcat (/usr/bin/nc).
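The usual pattern looks something like this (hostname, port, and dataset are
placeholders, and the listen-flag spelling differs between netcat builds).
On the receiving box:

   # nc -l -p 9999 | zfs receive -F backup/clients

and on the sending box:

   # zfs send backup/clients@snap | nc -w 5 desthost 9999

There's no encryption, so only do this on a trusted network.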
I guess what I was wondering was whether there was a direct method rather than
the overhead of ssh.
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 11:40 AM
To: Glaser, David
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions
Is that faster than blowfish?
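If the cipher being suggested is arcfour (my assumption), the pipeline would
look something like this (hostname and dataset made up):

   # zfs send backup/clients@snap | ssh -c arcfour newthumper zfs receive backup/clients

arcfour is generally faster than blowfish-cbc, but it's a weak cipher, so it
only makes sense on a trusted LAN.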
Dave
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Darren J Moffat
Sent: Thursday, July 10, 2008 12:27 PM
To: Florin Iucha
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS send/receive questions
Florin Iucha wrote:
Hi all,
I'm a little (ok, a lot) confused by the whole zfs send/receive business. I've
seen mention of using zfs send between two different machines, but no good
howto on making it work. I have one try-n-buy x4500 that we are trying to move
data from onto a new x4500 that we've purchased.
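From the docs, the basic recipe seems to be something like this (hostnames and
snapshot names invented):

   # zfs snapshot backup/clients@migrate
   # zfs send backup/clients@migrate | ssh newthumper /usr/sbin/zfs receive backup/clients

with later incrementals sent via zfs send -i migrate backup/clients@migrate2
piped the same way. Is that about right?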
Figures... I just bought 3 x4500s.
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Tim
Sent: Wednesday, July 09, 2008 3:59 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] X4540
So, I see Sun finally updated the Thumper, and it appears they're now using a
PCI-E backplane.
Hi all, I'm new to the list and I thought I'd start out on the right foot. ZFS
is great, but I have a couple of questions.
I have a try-n-buy x4500 with one large zfs pool with 40 1TB drives in it. The
pool is named backup.
In this pool I have a number of volumes:
backup/clients
backup/clien