>
> So either we're hitting a pretty serious ZFS bug, or they're purposely
> holding back performance in Solaris 10 so that we all have a good
> reason to upgrade to 11. ;)
In general, for ZFS we try to push all changes from Nevada back to
s10 updates.
In particular, "6535160 Lock contenti
Okay, I've done a little more reading on ZFS and I see that metadata is already
stored redundantly. This means something had to go really wrong for me to lose
a bunch of directories, as I did. Okay, that's another piece of information
I'm not sure what to make of, but I'll add it to the pile.
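(For what it's worth, assuming the pool is still importable, 'zpool status -v'
lists any files or objects ZFS has flagged with permanent errors; <poolname>
below is just a placeholder for your pool name:)
# zpool status -v <poolname>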
If you have a large pool (# of devices), you are hitting:
6632372 ZFS label writing/checksumming hurts scalability of large configs
Which was introduced in build 77 and is fixed in build 81. If you have
a small pool, then I'm not sure what it could be.
- Eric
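(If it helps to rule 6632372 in or out, a quick way to see how many devices a
pool has; 'test' is just the pool name from the transcript below, substitute
your own:)
# zpool list
# zpool status test
The config section of 'zpool status' shows every vdev and device in the pool.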
On Fri, Jan 04, 2008 at 04:07:08PM
Just installed b77 and I'm having an issue with 'zfs create':
[EMAIL PROTECTED]:~] # time zfs create test/bo
zfs create test/bo 157.21s user 8.21s system 99% cpu 2:46.47 total
[EMAIL PROTECTED]:~] #
Th
Oooh, I see build 74 suffered from
http://bugs.opensolaris.org/view_bug.do?bug_id=6603147
There's not much info on that bug page, and I certainly don't recall seeing the
blown assertion message. Is it possible, nonetheless, that this is the cause
of some of my problems? I will Live Upgrade AS
Heya,
> I have/had a zpool containing one filesystem.
Cool, simple scenario.
> I had to change my hostid and needed to import my pool (I've done this
> OK in the past).
> After the import the mount of my filesystem failed.
I take it you did the 'export' part with the other hostid? Wondering if
On Jan 4, 2008 2:42 PM, George Shepherd - Sun Microsystems Home system
<[EMAIL PROTECTED]> wrote:
> Hi Folks..
>
> I have/had a zpool containing one filesystem.
>
> I had to change my hostid and needed to import my pool (I've done this
> OK in the past).
> After the import the mount of my filesyste
Hi Folks..
I have/had a zpool containing one filesystem.
I had to change my hostid and needed to import my pool (I've done this
OK in the past).
After the import the mount of my filesystem failed.
# zpool import homespool
cannot mount 'homespool/homes': mountpoint or dataset is busy
The data
[EMAIL PROTECTED] said:
> When I modify ZFS FS properties I get "device busy":
> -bash-3.00# zfs set mountpoint=/mnt1 pool/zfs1
> cannot unmount '/mnt': Device busy
> Do you know how to identify the process accessing this FS? fuser doesn't
> work with zfs!
Actually, fuser works fine with ZFS here.
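(For example, to list the processes with files open on the filesystem mounted
at /mnt; '-c' treats the argument as a mount point and '-u' adds the owning
user:)
# fuser -cu /mnt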
Hi,
Does copy-on-write happen every time any data block of ZFS is modified, or
does one need to enable COW for ZFS at creation time? Also, where exactly is
COWed data written if my storage pool is a single physical device, or even if
multiple devices are there but used
Carol,
Probably "/mnt" is already in use ie. some other filesystem is mounted
there.
Can you please verify ?
What is the original mountpoint of pool/zfs1?
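(For example, these should show what is mounted at /mnt and what mountpoint
the dataset normally uses; pool/zfs1 and /mnt are taken from Carol's mail
quoted below:)
# df -h /mnt
# zfs get mountpoint pool/zfs1
# fuser -c /mnt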
Regards,
Sanjeev.
Caroline Carol wrote:
> Hi all,
>
> When I modify ZFS FS properties I get "device busy"
>
> -bash-3.00# zfs set moun
Hi all,
When I modify ZFS FS properties I get "device busy":
-bash-3.00# zfs set mountpoint=/mnt1 pool/zfs1
cannot unmount '/mnt': Device busy
Do you know how to identify the process accessing this FS?
fuser doesn't work with zfs!
Thanks a lot
regards
Carol