Hi Eric,
Thank you for your help. At least one part is clear now.
I am still confused about how the system remains functional after one disk
fails.
Consider my earlier example of a 3-disk zpool configured for raidz1. To keep it
simple let's not consider block sizes.
Let's say I send a write
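A hedged illustration, not from the thread: raidz1 stays functional after one disk fails because parity is computed across each stripe, so the surviving data plus the parity recover the missing piece. For the simple single-parity case that is just XOR (hypothetical byte values):
d0=0x5a; d1=0xc3                             # data written to disk 0 and disk 1
p=$(( d0 ^ d1 ))                             # parity written to disk 2
printf 'rebuilt d0 = 0x%x\n' $(( p ^ d1 ))   # disk 0 fails: parity XOR d1 gives back 0x5a
The same idea covers any single missing disk in the stripe, including the parity disk itself, which is simply recomputed from the data.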
're 2hr snapshots, some of which get deleted every 2 hours). There
are also errors relating to "incremental streams", which is strange
since I'm not using -I or -i at all.
Here are the commands again, and all the output.
+ zfs create -p bup-wrack/fsfs/zp1
+ zfs send -Rp z...@bup-20100810-154542gmt
+ zfs recv -
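One hedged observation, not from the thread: zfs send -R builds a replication package that itself carries incremental streams between the snapshots it sends, so "incremental stream" messages can appear even though -i/-I were never given on the command line. Roughly (snapshot names hypothetical):
zfs send -R zp1@latest | zfs recv -Fud bup-wrack/fsfs/zp1          # full stream plus per-snapshot incrementals
zfs send -I zp1@older zp1@newer | zfs recv -d bup-wrack/fsfs/zp1   # an explicit incremental package, by contrast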
First off, double thanks for replying to my post. I tried your advice but
something is way wrong. I have all the 2TB drives disconnected, and the 7 500GB
drives connected. All 7 show up in the BIOS and in format. Here are all the drives,
the original 7 500GB drives:
# format
Searching for disk
For those who missed it, here is the Oracle/Sun announcement on Solaris 11:
Solaris 11 will be based on technologies currently available for
preview in OpenSolaris including:
* Image packaging system
* Crossbow network virtualization
* ZFS de-duplication
* CIFS file servic
On Tue, Aug 10 at 15:40, Peter Taps wrote:
Hi,
First, I don't understand why parity takes so much space. From what
I know about parity, there is typically one parity bit per
byte. Therefore, the parity should be taking 1/8 of storage, not 1/3
of storage. What am I missing?
Think of it as 1 bit
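The missing piece may be that raidz parity is allocated per stripe, not per byte: with 3 disks and single parity, every full stripe is 2 data sectors plus 1 parity sector, so a third of the raw space goes to parity regardless of how many bits are involved. A rough sketch of the arithmetic (hypothetical stripe geometry):
data_cols=2; parity_cols=1                                         # 3-disk raidz1 stripe: 2 data + 1 parity
echo "scale=2; $parity_cols / ($data_cols + $parity_cols)" | bc    # -> .33
Widening the pool (more data columns per stripe) is what shrinks that fraction, not using fewer parity bits.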
Hi,
I am working through the fundamentals of raidz. From the man pages,
a raidz configuration of P disks with N parity provides (P-N)*X storage space,
where X is the size of each disk. For example, if I have 3 disks of 10G each and
I configure it with raidz1, I will have 20G of usable
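A quick way to check that arithmetic on a live system (device names hypothetical; zpool list reports raw capacity including parity, while zfs list shows roughly the usable figure):
zpool create tank raidz1 c1t1d0 c1t2d0 c1t3d0   # three 10G disks
zpool list tank                                 # SIZE is about 30G raw
zfs list tank                                   # AVAIL is about (3-1)*10 = 20G, minus a little metadata overhead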
On 08/11/10 05:16 AM, Terry Hull wrote:
So do I understand correctly that really the "Right" thing to do is to build
a pool not only with a consistent stripe width, but also to build it with
drives of only one size? It also sounds like from a practical point of
view that building the pool full-s
On 08/10/10 10:09 PM, Phil Harman wrote:
On 10 Aug 2010, at 10:22, Ian Collins wrote:
On 08/10/10 09:12 PM, Andrew Gabriel wrote:
Another option - use the new 2TB drives to swap out the existing 1TB drives.
If you can find another use for the swapped out drives, this works well, and
avoi
On Aug 10, 2010, at 4:07 PM, Cindy Swearingen wrote:
> Hi Brian,
>
> Is the pool exported before the update/upgrade of PowerPath software?
Yes, that's the standard procedure.
> This recommended practice might help the resulting devices to be more
> coherent.
>
> If the format utility sees the
The ZFS best practices guide is here:
http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide
Run zpool scrub on a regular basis to identify data integrity problems.
If you have consumer-quality drives, consider a weekly scrubbing
schedule. If you have datacenter-quality drives, consid
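For example, a root crontab entry along these lines would give a weekly scrub (pool name hypothetical):
# every Sunday at 03:00; check the result afterwards with: zpool status -v tank
0 3 * * 0 /usr/sbin/zpool scrub tank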
Hi Brian,
Is the pool exported before the update/upgrade of PowerPath software?
This recommended practice might help the resulting devices to be more
coherent.
If the format utility sees the devices the same way as ZFS, then I don't
see how ZFS can rename the devices.
If the format utility s
First off, I don't have the exact failure messages here, and I did not take good
notes of the failures, so I will do the best I can. Please try and give me
advice anyway.
I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them
all with 2TB drives. Immediately I ran into trou
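For reference, the usual one-at-a-time sequence looks roughly like this (device names hypothetical; each resilver must finish before the next replace, and the pool only grows once every member has been swapped and the larger size is picked up, e.g. via export/import or the autoexpand property where available):
zpool replace tank c2t0d0 c4t0d0   # swap one 500GB drive for its 2TB replacement
zpool status tank                  # wait here until the resilver completes, then repeat for the next drive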
Yes, as long as the pools are on the same system, you can share
a spare between two pools, but we are not recommending sharing
spares at this time.
We'll keep you posted.
Thanks,
Cindy
On 08/10/10 07:39, Tony MacDoodle wrote:
I have 2 ZFS pools all using the same drive type and size. The quest
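In command terms that looks something like this (pool and device names hypothetical):
zpool add pool1 spare c3t5d0
zpool add pool2 spare c3t5d0       # same disk added as a spare to the second pool
zpool status pool1 pool2           # the disk is listed under "spares" in both pools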
David Dyer-Bennet wrote:
My full backup still doesn't complete. However, instead of hanging the
entire disk subsystem as it did on 111b, it now issues error messages.
Errors at the end.
[...]
cannot receive incremental stream: most recent snapshot of
bup-wrack/fsfs/zp1/ddb does not
match incr
You would look for the device name that might be a problem, like this:
# fmdump -eV | grep c2t4d0
vdev_path = /dev/dsk/c2t4d0s0
vdev_path = /dev/dsk/c2t4d0s0
vdev_path = /dev/dsk/c2t4d0s0
vdev_path = /dev/dsk/c2t4d0s0
Then, review the file more closely for the details of these errors,
such as th
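One hedged way to dig into those details is to save the full error log and read around the matching records (device name as in the example above):
fmdump -eV > /tmp/fmdump-errors.txt
grep -n c2t4d0 /tmp/fmdump-errors.txt   # note the line numbers, then read the surrounding ereport records for the error class and counts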
this new run, and we'll see what happens at the end of this run.
(These are from a bash trace as produced by "set -x")
+ zfs create -p bup-wrack/fsfs/zp1
+ zfs send -Rp z...@bup-20100810-154542gmt
+ zfs recv -Fud bup-wrack/fsfs/zp1
(The send and the receive are source and sink in a
My full backup still doesn't complete. However, instead of hanging the
entire disk subsystem as it did on 111b, it now issues error messages.
Errors at the end.
sending from @bup-daily-20100726-10CDT to
zp1/d...@bup-daily-20100727-10cdt
received 3.80GB stream in 136 seconds (28.6MB/sec)
Tony MacDoodle wrote:
I have 2 ZFS pools all using the same drive type and size. The
question is can I have 1 global hot spare for both of those pools?
Yes. A hot spare disk can be added to more than one pool at the same time.
--
Andrew Gabriel
I have 2 ZFS pools all using the same drive type and size. The question is
can I have 1 global hot spare for both of those pools?
Thanks
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Terry Hull
>
> I am wanting to build a server with 16 1TB drives, with two 8-drive
> RAID Z2 arrays striped together. However, I would like the capability
> of adding additional stripes of 2
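Mechanically, growing the pool later is just another top-level raidz2 vdev; a hedged sketch with hypothetical device names (new writes are striped across the old and new vdevs, even when the drive sizes differ):
zpool add tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0 c5t6d0 c5t7d0
zpool status tank                  # shows the original raidz2 vdevs plus the newly added one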
If I create a ZFS mirrored zpool on FreeBSD (zfs v14) will I be able
to boot off an OpenSolaris-b131 CD and copy my data off (another) ZFS
mirror created by OpenSolaris (ZFS v22)? A simple question, but my data
is precious, so I ask beforehand. ;-)
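Newer ZFS code can generally import pools created at an older on-disk version, so a hedged way to check (pool name hypothetical; avoid zpool upgrade if the pool ever has to go back to FreeBSD):
zpool upgrade -v    # lists the pool versions this build understands
zpool import        # scans attached disks and lists importable pools
zpool import tank   # imports the v14 pool; its version is left unchanged unless you upgrade it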
On 1-8-2010 19:57, David Dyer-Bennet wrote:
I've kind of given up on that. This is a home "production" server;
it's got all my photos on it.
The uncertainty around OpenSolaris made me drop it. I'm very sorry to
say, because I loved the system. I do not want to worry all the time
though, so
On 08/10/10 06:21 PM, Terry Hull wrote:
I am wanting to build a server with 16 1TB drives with two 8-drive
RAID Z2 arrays striped together. However, I would like the capability
of adding additional stripes of 2TB drives in the future. Will this be
a problem? I thought I read it is best to kee