I've run into this too... I believe the issue is that the block
size/allocation unit size in ZFS is much larger than the default size
on older filesystems (ufs, ext2, ext3).
The result is that if you have lots of files smaller than the block size, they
take up more total space on the filesystem...
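As a rough illustration (the dataset and file names here are hypothetical), you can see the effect by comparing a small file's logical size with the space it is actually charged, and by checking the dataset's recordsize:
zfs get recordsize raid/www     (maximum block size for the dataset)
ls -l somefile                  (logical size in bytes)
du -k somefile                  (space actually allocated on disk, in KB)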
> Are you sure this isn't a case of CR 6433264, which was fixed long ago
> but arrived in patch 118833-36 to Solaris 10?
It certainly looks similar, but this system already had 118833-36 when the
error occurred, so if this bug is truly fixed, it must be something else. Then
again, I wasn't...
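For anyone who wants to check the same thing on their own box, Solaris 10 lists installed patches with showrev; something like this (output will obviously differ per system):
showrev -p | grep 118833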
Problem solved... after the resilvers completed, zpool status reported that the
pool needed an upgrade.
I ran zpool upgrade -a, and after that completed and there was no resilvering
going on, the zpool add ran successfully.
I would like to suggest, however, that the behavior be fixed --
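For anyone hitting the same assertion, the sequence that worked here was roughly the following (pool name and devices taken from the original report; this is a recap, not a guaranteed fix):
zpool status raid     (wait until any resilver has finished)
zpool upgrade -a      (bring the pool up to the current on-disk version)
zpool add raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0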
I'm trying to add some additional devices to my existing pool, but it's not
working. I'm adding a raidz group of five 300 GB drives, but the command always
fails:
r...@kronos:/ # zpool add raid raidz c8t8d0 c8t13d0 c7t8d0 c3t8d0 c5t8d0
Assertion failed: nvlist_lookup_string(cnv, "path", &path) ==
> OK, you asked for "creative" workarounds... here's one (though it requires
> that the filesystem be briefly unmounted, which may be deal-killing):
That is, indeed, creative. :) And yes, the unmount makes it
impractical in my environment.
I ended up going back to rsync, because we had more...
Just wanted to voice another request for this feature.
On a previous Solaris 10/ZFS system, I was forced to rsync whole filesystems and
snapshot the backup copy to prevent the snapshots from negatively impacting
users. This obviously has the effect of reducing available space on the system
by o...
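A sketch of that workaround, with made-up pool and path names: copy the live filesystem to a second dataset with rsync and snapshot only the copy, so the snapshots never hold space against the live data:
rsync -a --delete /export/home/ /backup/home/
zfs snapshot backup/home@`date +%Y%m%d`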
> > At the moment, I'm hearing that using h/w raid under my zfs may be
> > better for some workloads and the h/w hot spare would be nice to
> > have across multiple raid groups, but the checksum capabilities in
> > zfs are basically nullified with single/multiple h/w lun's
> > resulting in "reduced pro...
The write cache was enabled on all the ZFS drives, but disabling it gave a
negligible speed improvement (FWIW, the pool has 50 drives):
(write cache on)
/bin/time tar xf /tmp/vbulletin_3-6-4.tar
real       51.6
user        0.0
sys         1.0
(write cache off)
/bin/time tar xf /tmp/vbulletin_
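In case it helps anyone reproduce this: the per-disk write cache can usually be inspected and toggled from format's expert mode. The menu entries below are from memory and may vary with the disk and driver, so treat this as a sketch:
format -e             (then select the disk in question)
format> cache
cache> write_cache
write_cache> display
write_cache> disable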
Ah, thanks -- reading that thread did a good job of explaining what I was
seeing. I was going nuts trying to isolate the problem.
Is work being done to improve this performance? 100% of my users are coming in
over NFS, and that's a huge hit. Even on single large files, writes are slower
by a...
Over the weekend, I had a user report extreme slowness on a ZFS filesystem
mounted over NFS.
After some extensive testing, the extreme slowness appears to occur only when a
ZFS filesystem is mounted over NFS.
One example is doing a 'gtar xzvf php-5.2.0.tar.gz'... over NFS onto a ZFS
filesystem...
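A simple way to show the difference (paths here are hypothetical) is to time the same extract directly on the server and again from an NFS client against the same dataset:
/bin/time gtar xzf php-5.2.0.tar.gz -C /raid/scratch              (local ZFS)
/bin/time gtar xzf php-5.2.0.tar.gz -C /net/server/raid/scratch   (same dataset over NFS)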
Saw this while writing a script today -- while debugging the script, I was
ctrl-c-ing it a lot rather than waiting for the zfs create / zfs set commands
to complete. After doing so, my cleanup script failed to zfs destroy the new
filesystem:
[EMAIL PROTECTED]:/ # zfs destroy -f raid/www/user-test
> Brad,
>
> I have a suspicion about what you might be seeing and I want to confirm
> it. If it locks up again you can also collect a threadlist:
>
> "echo $
> Send me the output and that will be a good starting point.
I tried popping out a disk again, but for whatever reason, the system just...
Just a data point -- our NetApp filer actually creates additional raid groups
that are added to the greater pool when you "add disks", much as zfs does now.
They aren't simply used to expand the one large raid group of the volume.
I've been meaning to rebuild the whole thing to get use of...
Just wanted to point this out --
I have a large web tree that used to have UFS user quotas on it. I converted
to ZFS using the model where each user gets their own ZFS filesystem with a
quota instead. I worked around some NFS/automounter issues, and it now seems
to be working fine.
Except now I have...
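For reference, the per-user model described above boils down to something like this for each user (names and quota values are just examples):
zfs create raid/www/alice
zfs set quota=2g raid/www/alice
zfs set sharenfs=on raid/www/alice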
The core dump timed out (related to the SCSI bus reset?), so I don't
have one. I can try it again, though; it's easy enough to reproduce.
I was seeing errors on the fibre channel disks as well, so it's possible
the whole thing was locked up.
BP
--
[EMAIL PROTECTED]
I have similar problems ... I have a bunch of D1000 disk shelves attached via
SCSI HBAs to a V880. If I do something as simple as unplug a drive in a raidz
vdev, it generates SCSI errors that eventually freeze the entire system. I can
access the filesystem okay for a couple of minutes until the SCSI...
> Yeah, I ran into that in my testing, too. I suspect it's something
> that will come up in testing a LOT more than in real production use.
I disagree. I can see lots of situations where you want to attach new storage
and remove or retire old storage from an existing pool. It would be great...
> First, ZFS allows one to take advantage of large, inexpensive Serial ATA
> disk drives. Paraphrased: "ZFS loves large, cheap SATA disk drives". So
> the first part of the solution looks (to me) as simple as adding some
> cheap SATA disk drives.
>
> Next, after extra storage space has been added...
I've run into this myself (I am in a university setting). After reading bug
ID 6431277 (URL below for noobs like myself who didn't know what "see 6431277"
meant):
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6431277
...it's not clear to me how this will be resolved. What I'd r...