Ian,
It looks like the error message is wrong - slice 7 overlaps slice 4 - note
that slice 4 ends at c6404, but slice 7 starts at c6394.
Slice 6 is also completely contained within slice 4's range of cylinders, but
that won't matter unless you attempt to use it.
Trev
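(A quick way to double-check this sort of overlap yourself is to print the
VTOC and compare the slice ranges; the device name below is only a
placeholder for the disk in question:

   prtvtoc /dev/rdsk/c2t1d0s2

Any slice whose "First Sector" falls inside another slice's First
Sector/Sector Count range overlaps it.)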
Ian Collins wrote:
Hello Matthew,
Friday, May 11, 2007, 7:04:06 AM, you wrote:
Check in your script (df -h?) if s6 isn't mounted anyway...
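(A minimal check along those lines, with a placeholder device name:

   df -h | grep c2t1d0s6
   mount | grep c2t1d0s6

Either command will show whether slice 6 is currently mounted somewhere.)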
--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hello Gino,
Monday, May 14, 2007, 4:07:31 PM, you wrote:
G> We are using a lot of EMC DAE2. Works well with ZFS.
Without head units?
Dual-pathed connections to hosts + MPxIO?
--
Best regards,
Robert  mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Trevor Watson wrote:
> Ian,
>
> It looks like the error message is wrong - slice 7 overlaps slice 4 -
> note that slice 4 ends at c6404, but slice 7 starts at c6394.
>
> Slice 6 is also completely contained within slice 4's range of
> cylinders, but that won't matter unless you attempt to use it.
>
Ian Collins wrote:
> Trevor Watson wrote:
>
>> Ian,
>>
>> It looks like the error message is wrong - slice 7 overlaps slice 4 -
>> note that slice 4 ends at c6404, but slice 7 starts at c6394.
>>
>> Slice 6 is also completely contained within slice 4's range of
>> cylinders, but that won't matter unless you attempt to use it.
I don't suppose that it has anything to do with the flag being "wm" instead of
"wu" on your second drive, does it? Maybe if the driver thinks slice 2 is
writable, it treats it as a valid slice?
Trev
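(The flags can be inspected without going into format(1M); prtvtoc prints
them too. The device name below is a placeholder, and note that prtvtoc shows
the flags in octal -- 00 is roughly "wm", 01 is unmountable like "wu", 10 is
read-only:

   prtvtoc /dev/rdsk/c2t1d0s2

Changing the flag itself is easiest from the partition menu in format(1M).)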
Ian Collins wrote:
Ian Collins wrote:
Trevor Watson wrote:
Ian,
It looks like the error
Hello Robert,
> G> We are using a lot of EMC DAE2. Works well with
> ZFS.
>
> Without head units?
Yes. Just make sure to format the disks to 512 bytes per sector if they are
from EMC.
> Dual-pathed connections to hosts + MPxIO?
Sure. We are also using some Xyratex JBOD boxes.
gino
Hello James,
Thursday, May 10, 2007, 11:12:57 PM, you wrote:
>
ZFS will interpret zeroed sectors as holes, so it won't really write them to disk; it just adjusts the file size accordingly.
It does that only with compression turned on.
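(Easy to verify; the pool/filesystem names below are made up:

   zfs create tank/nocomp
   zfs create tank/comp
   zfs set compression=on tank/comp
   dd if=/dev/zero of=/tank/nocomp/zeros bs=1024k count=100
   dd if=/dev/zero of=/tank/comp/zeros bs=1024k count=100
   sync
   ls -l /tank/nocomp/zeros /tank/comp/zeros    # same logical size
   du -k /tank/nocomp/zeros /tank/comp/zeros    # compressed copy occupies almost nothing

With compression off the zero-filled blocks are written out; with compression
on they become holes.)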
--
Best regards,
Robert
Hello Pal,
Friday, May 11, 2007, 6:41:41 PM, you wrote:
PB> Note! You can't even undo what you have added to a pool. Being
PB> able to evacuate a vdev and replace it with a bigger one would have
PB> helped. But this isn't possible either (currently).
Actually you can. See 'zpool replace'.
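(Sketch with made-up pool/device names -- replace resilvers onto the new disk
and then detaches the old one:

   zpool replace tank c1t2d0 c1t5d0
   zpool status tank          # shows resilver progress)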
So yo
Yes I have tested this virtually with vmware.
Replacing disks by bigger ones works great.
But the new space becomes usable only after replacing *all* disks.
I had hoped that the new space would be usable after replacing 3 or 4 disks.
I think the best strategy for me now is buying
2 x 750 GB Disks and usi
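(Roughly what the whole exercise looks like, with made-up device names; on
some builds an export/import is needed before the extra capacity is reported:

   zpool replace tank c1t1d0 c2t1d0   # wait for the resilver to complete
   zpool replace tank c1t2d0 c2t2d0   # repeat for every disk in the vdev
   zpool list tank                    # the size grows only after the last disk is swapped)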
> I have no idea what to make of all
> this, except that ZFS has a problem with this
> hardware/drivers that UFS and other traditional file
> systems don't. Is it a bug in the driver that
> ZFS is inadvertently exposing? A specific feature
> that ZFS assumes the hardware to have, but it
> doesn
Hi,
I have a problem with a ZFS filesystem on an array. The pool was created
under Solaris 10 U2. Some glitches with the array made Solaris panic
on boot. I've installed snv63 (as snv60 contains some
important fixes); the system boots, but the kernel panics when
I try to import the pool. This is with zfs_recover=1.
Config
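(For reference, zfs_recover is normally set in /etc/system before boot; these
are undocumented, unsupported tunables, so take this only as a sketch:

   * carry on past failed ASSERTs instead of panicking:
   set aok=1
   set zfs:zfs_recover=1

followed by a reboot.)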
On Tue, 15 May 2007, Trevor Watson wrote:
> I don't suppose that it has anything to do with the flag being "wm"
> instead of "wu" on your second drive does it? Maybe if the driver thinks
> slice 2 is writeable, it treats it as a valid slice?
If the slice doesn't take up the *entire* disk, then it
Has anyone else run into this situation? Does anyone have any solutions other
than removing snapshots or increasing the quota? I'd like to put in an RFE to
reserve some space so files can be removed when users are at their quota. Any
thoughts from the ZFS team?
Ben
> We have around 1000 use
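(A quick way to see how much of a user's quota is pinned by snapshots rather
than by live data -- the dataset name is a placeholder:

   zfs get quota,used,available tank/home/someuser
   zfs list -r -t snapshot tank/home/someuser

Space that is only referenced by snapshots still counts against the quota,
which is why deleting files doesn't free anything for the user.)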
[EMAIL PROTECTED] wrote on 05/15/2007 09:01:00 AM:
> Has anyone else run into this situation? Does anyone have any
> solutions other than removing snapshots or increasing the quota?
> I'd like to put in an RFE to reserve some space so files can be
> removed when users are at their quota. Any
I would use rsync; over NFS if possible, otherwise over ssh.
(NFS performs significantly better on read than write, so preferably share from
the old host and mount on the new one.)
old# share -F nfs -o [EMAIL PROTECTED],[EMAIL PROTECTED] /my/data
(or edit /etc/dfs/dfstab and shareall)
new# mount -r old:/my/
Sorry I realize I was a bit misleading in the path handling and need to correct
this part:
new# mount -r old:/my/data /mnt
new# mkdir -p /my/data
new# cd /mnt ; rsync -aRHDn --delete ./ /my/data/
new# cd /mnt ; rsync -aRHD --delete ./ /my/data/
new# umount /mnt
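(If NFS isn't an option, the same copy can run over ssh from the old host --
paths as above, hostname made up:

   old# cd /my/data ; rsync -aRHD --delete -e ssh ./ new:/my/data/

-a, -R, -H and -D are the same archive, relative-path, hard-link and
device/special flags used in the NFS variant above.)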
On Tue, May 15, 2007 at 09:36:35AM -0500, [EMAIL PROTECTED] wrote:
>
> * Ignore snapshot reservations when calculating quota -- Don't punish users
> for administratively driven snap policy.
See:
6431277 want filesystem-only quotas
> * Ignore COW overhead for quotas (allow unlink anytime) -- from
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted data in a
pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore
> Would you mind also doing:
>
> ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
>
> to see the raw performance of underlying hardware.
This dd command is reading from the block device,
which might cache data and probably splits requests
into "maxphys" pieces (which happens to be 56K o
On May 15, 2007, at 13:13, Jürgen Keil wrote:
Would you mind also doing:
ptime dd if=/dev/dsk/c2t1d0 of=/dev/null bs=128k count=1
to see the raw performance of underlying hardware.
This dd command is reading from the block device,
which might cache data and probably splits requests
into
On May 15, 2007, at 9:37 AM, XIU wrote:
Hey,
I'm currently running on Nexenta alpha 6 and I have some corrupted
data in a pool.
The output from sudo zpool status -v data is:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.
With what Edward suggested, I got rid of the ldi_get_size() error by defining
the prop_op entry point appropriately.
However, the zpool create still fails - with zio_wait() returning 22.
bash-3.00# dtrace -n 'fbt::ldi_get_size:entry{self->t=1;}
fbt::ldi_get_size:entry/self->t/{}
fbt::ldi_get_s
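(errno 22 is EINVAL. A follow-up sketch -- assuming the fbt probes are
available for these functions -- to see what ldi_get_size() and zio_wait()
actually return during the zpool create:

   dtrace -n 'fbt::ldi_get_size:return { printf("ldi_get_size -> %d", arg1); }
      fbt::zio_wait:return /arg1 != 0/ { printf("zio_wait -> %d", arg1); }')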
Hey,
Using the steps on
http://www.opensolaris.org/jive/thread.jspa?messageID=39450&tstart=0 confirms
that it's the iso file.
Removing the file does work, I'll just download the file again and let a
scrub clean up the error message.
Steve
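(For the archives, the cleanup steps would be along these lines, assuming the
pool really is named "data":

   zpool scrub data
   zpool status -v data     # the error list should clear once the scrub completes
   zpool clear data         # resets the error counters if anything is left over)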
On 5/15/07, eric kustarz <[EMAIL PROTECTED]> wrote:
On
> Each drive is freshly formatted with one 2G file copied to it.
How are you creating each of these files?
Also, would you please include the output from the isalist(1) command?
> These are snapshots of iostat -xnczpm 3 captured somewhere in the
> middle of the operation.
Have you double-che
I seem to have got the same core dump, in a different way.
I had a zpool setup on a iscsi 'disk'. For details see:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-May/001162.html
But after a reboot the iscsi target was no longer available, so the iscsi
initiator could not provide the d
On 5/15/07, Matthew Flanagan <[EMAIL PROTECTED]> wrote:
On 5/15/07, eric kustarz <[EMAIL PROTECTED]> wrote:
>
> On May 12, 2007, at 2:12 AM, Matthew Flanagan wrote:
>
> >>
> >> On May 10, 2007, at 10:04 PM, Matthew Flanagan wrote:
> >>
> >>> Hi,
> >>>
> >>> I have a test server that I use for tes
Marko Milisavljevic wrote:
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can
deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives
me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only
35MB/s!?
Our experience is that ZFS gets very clos
I have an opensolaris server running with a raidz zfs pool with almost 1TB of
storage. This is intended to be a central fileserver via samba and ftp for
all sorts of purposes. I also want to use it to backup my XP laptop. I am
having trouble finding out how I can set up Solaris to allow my XP m
On May 15, 2007, at 9:32 PM, Hazvinei Mugwagwa wrote:
I have an opensolaris server running with a raidz zfs pool with
almost 1TB of storage. This is intended to be a central
fileserver via samba and ftp for all sorts of purposes. I also want
to use it to backup my XP laptop. I am having
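(The usual route for this is Samba on the Solaris side. A minimal sketch --
share name, path and user are made up, and the smb.conf location varies by
distribution:

   zfs create tank/backup

and in smb.conf:

   [xpbackup]
      path = /tank/backup
      read only = no
      valid users = youruser

Then map \\server\xpbackup from the XP laptop and point the backup job at it.)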
Hello Matthew,
Yes, my machine is 32-bit, with 1.5G of RAM.
-bash-3.00# echo ::memstat | mdb -k
Page Summary            Pages        MB    %Tot
Kernel                  123249       481     32%
Anon
I tried as you suggested, but I notice that output from iostat while
doing dd if=/dev/dsk/... still shows that reading is done in 56k
chunks. I haven't seen any change in performance. Perhaps iostat
doesn't say what I think it does. Using dd if=/dev/rdsk/.. gives 256k,
and dd if=zfsfile gives 128k
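(One way to watch the actual request sizes while a read runs -- device name is
a placeholder; kr/s divided by r/s gives the average size per read:

   iostat -xnz 3                                              # in one terminal
   dd if=/dev/rdsk/c2t1d0 of=/dev/null bs=1024k count=2000    # in another)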
On 5/15/07, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> Each drive is freshly formatted with one 2G file copied to it.
How are you creating each of these files?
zpool create tank c0d0 c0d1; zfs create tank/test; cp ~/bigfile /tank/test/
Actual content of the file is random junk from /dev/ra
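(If anyone wants to reproduce the file itself, something along these lines --
the size and source are guesses, not necessarily the exact command used:

   cd ~ ; dd if=/dev/urandom of=bigfile bs=1024k count=2048

followed by the zpool/zfs/cp steps quoted above.)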