On 12/12/2006, at 8:48 AM, Richard Elling wrote:
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors. Should I file this as a bug, or
should I just "not do that" :->
Robert Milkowski wrote:
Hello Richard,
Tuesday, December 5, 2006, 7:01:17 AM, you wrote:
RE> Dale Ghent wrote:
Similar to UFS's onerror mount option, I take it?
RE> Actually, it would be interesting to see how many customers change the
RE> onerror setting. We have some data, just need more days in the hour.
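For anyone curious, changing that setting is just a UFS mount option; a minimal sketch with a hypothetical device and mount point (valid values are panic, the default, plus lock and umount):
# mount -F ufs -o onerror=umount /dev/dsk/c0t0d0s7 /export/home
The same option can also go in the options column of the filesystem's /etc/vfstab entry.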
Hi Everybody,
I have some problems with a Solaris 10 installation.
After installing the first CD, I removed the CD from the CD-ROM drive; after that the
machine keeps rebooting again and again. It does not ask for the second CD to
continue the installation.
If you have any idea, please tell me.
Thanks &
BR> Yes, absolutely. Set the var in /etc/system, reboot, and the system comes up. That
BR> happened almost 2 months ago, long before this lock insanity problem
BR> popped up.
For the archives, a high level of lock activity can always be a problem.
The worst cases I've experienced were with record locking ove
> DD> To reduce the chance of it affecting the integrity of the filesystem,
> DD> there are multiple copies of the UB written, each with a checksum and a
> DD> generation number. When starting up a pool, the newest generation copy
> DD> that checks properly will be used. If the import can't find
Hello Darren,
Tuesday, December 12, 2006, 2:10:30 AM, you wrote:
>> A while back we had a Sun engineer come to our office and talk about
>> the benefits of ZFS. I asked him the question "Can the uber block
>> become corrupt and what happens if it does?", to which he did not
>> have the answer but swore to me that he would get it to me. I still
>> haven't gotten that answer.
Hello Ben,
Monday, December 11, 2006, 9:34:18 PM, you wrote:
BR> Robert Milkowski wrote:
>> Hello eric,
>>
>> Saturday, December 9, 2006, 7:07:49 PM, you wrote:
>>
>> ek> Jim Mauro wrote:
>>
> Could be NFS synchronous semantics on file create (followed by
> repeated flushing of the write cache). What kind of storage are you
> using (feel free to send privately if you need to)
Hello Richard,
Tuesday, December 5, 2006, 7:01:17 AM, you wrote:
RE> Dale Ghent wrote:
>>
>> Similar to UFS's onerror mount option, I take it?
RE> Actually, it would be interesting to see how many customers change the
RE> onerror setting. We have some data, just need more days in the hour.
Som
> A while back we had a Sun engineer come to our office and talk about
> the benefits of ZFS. I asked him the question "Can the uber block
> become corrupt and what happens if it does?", to which he did not
> have the answer but swore to me that he would get it to me. I still
> haven't gotten that answer.
IANA ZFS guru, but I have read explanations like this:
When ZFS reads in the uberblock, it computes the uberblock's checksum and compares
it against the stored checksum for that block. If they don't match, it uses
another copy of the uberblock.
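As a rough illustration (the pool name 'tank' is hypothetical), zdb can print the active uberblock that was selected this way, including the transaction-group number it carries:
# zdb -u tank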
Ross Hosman wrote:
A while back we had a Sun engineer
A while back we had a Sun engineer come to our office and talk about the
benefits of ZFS. I asked him the question "Can the uber block become corrupt
and what happens if it does?", to which he did not have the answer but swore
to me that he would get it to me. I still haven't gotten that answer.
Gino Ruopolo wrote:
Hi All,
we have some ZFS pools in production with more than a hundred filesystems and more
than a thousand snapshots on them. Right now we do backups with zfs send/receive
and some scripting, but I'm searching for a way to mirror each zpool
to another one for backup purposes (so including all snapshots).
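One hedged sketch, assuming a source pool 'tank', a backup pool 'backup', and a build whose zfs send supports recursive replication (-R):
# zfs snapshot -r tank@backup1
# zfs send -R tank@backup1 | zfs receive -d -F backup
Later runs can use zfs send -R -I tank@backup1 tank@backup2 to ship only the snapshots taken since the previous pass.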
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just "not do that" :->
Don't do that. The same should happen if you umount
Jim Hranicky wrote:
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
So you had a pool and were sharing filesystems over NFS, NFS clients had
active mounts, you removed /etc/zfs/zpool.cache
By mistake, I just exported my test filesystem while it was up
and being served via NFS, causing my tar over NFS to start
throwing stale file handle errors.
Should I file this as a bug, or should I just "not do that" :->
Ko,
This worked.
I've restarted my testing but I've been fdisking each drive before I
add it to the pool, and so far the system is behaving as expected
when I spin a drive down, i.e., the hot spare gets automatically used.
This makes me wonder if it's possible to ensure that the forced
addition of
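For the archives, a hedged sketch of that kind of test (pool and device names hypothetical): relabel the disk, add it as a spare, and confirm it takes over after a drive is spun down:
# fdisk -B /dev/rdsk/c1t3d0p0
# zpool add zmir spare c1t3d0
# zpool status zmir
zpool status should show the spare as INUSE once it has replaced the failed device.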
Robert Milkowski wrote:
Hello eric,
Saturday, December 9, 2006, 7:07:49 PM, you wrote:
ek> Jim Mauro wrote:
Could be NFS synchronous semantics on file create (followed by
repeated flushing of the write cache). What kind of storage are you
using (feel free to send privately if you need to)
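As a diagnosis-only sketch from that era (it throws away synchronous-write guarantees, so never leave it on for data you care about), the usual way to check whether NFS sync semantics were the bottleneck was to disable the ZIL via /etc/system and reboot:
set zfs:zil_disable = 1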
> You are likely hitting:
>
> 6397052 unmounting datasets should process
> /etc/mnttab instead of traverse DSL
>
> Which was fixed in build 46 of Nevada. In the
> meantime, you can remove
> /etc/zfs/zpool.cache manually and reboot, which will
> remove all your
> pools (which you can then re-import
You are likely hitting:
6397052 unmounting datasets should process /etc/mnttab instead of traverse DSL
Which was fixed in build 46 of Nevada. In the meantime, you can remove
/etc/zfs/zpool.cache manually and reboot, which will remove all your
pools (which you can then re-import on an individual
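A minimal sketch of that workaround, using the 'zmir' pool from this thread as the example:
# rm /etc/zfs/zpool.cache
# reboot
After the reboot the pools are gone from zpool list; running zpool import with no arguments shows what is available on disk, and each pool can then be brought back individually:
# zpool import
# zpool import zmir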
Nevermind:
# zfs destroy [EMAIL PROTECTED]:28
cannot open '[EMAIL PROTECTED]:28': I/O error
Jim
Here's the truss output:
402:ioctl(3, ZFS_IOC_POOL_LOG_HISTORY, 0x080427B8) = 0
402:ioctl(3, ZFS_IOC_OBJSET_STATS, 0x0804192C) = 0
402:ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C) = 0
402:ioctl(3, ZFS_IOC_DATASET_LIST_NEXT, 0x0804243C) Err#3 ESRCH
402:ioctl(3, ZFS_IOC_
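For reference, a trace like that is typically captured with something along the lines of the following (the traced command here is an assumption):
# truss -f -o /tmp/zfs.truss zpool destroy -f zmir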
Hello James,
Saturday, November 18, 2006, 11:34:52 AM, you wrote:
JM> as far as I can see, your setup does not meet the minimum
JM> redundancy requirements for a Raid-Z, which is 3 devices.
JM> Since you only have 2 devices you are out on a limb.
Actually only two disks for raid-z is fine and you
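A hedged example with hypothetical devices: a two-disk raidz vdev is accepted by zpool create; it gives single-parity protection with one disk's worth of usable space, much like a mirror:
# zpool create tank raidz c1t0d0 c1t1d0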
BTW, I'm also unable to export the pool -- same error.
Jim
Jakob Praher wrote:
hi all,
I'd like to build a solid storage server using zfs and opensolaris. The
server more or less should have a NAS role, thus using nfsv4 to export
the data to other nodes.
...
what would be your reasonable advice?
First of all, figure out what you need in terms of capacity
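Once the sizing is settled, the ZFS side of such a NAS is short; a rough sketch with hypothetical devices and dataset names:
# zpool create tank raidz c1t1d0 c1t2d0 c1t3d0
# zfs create tank/export
# zfs set sharenfs=rw tank/export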
on a blade 1500...
bash-3.00# zfs set sharenfs=rw pool
cannot set sharenfs for 'pool': out of space
bash-3.00# zpool iostat pool
             capacity     operations    bandwidth
pool       used  avail   read  write   read  write
---------  -----  -----  -----  -----  -----  -----
pool
Ok, so I'm planning on wiping my test pool that seems to have problems
with non-spare disks being marked as spares, but I can't destroy it:
# zpool destroy -f zmir
cannot iterate filesystems: I/O error
Anyone know how I can nuke this for good?
Jim
> What happens is that /home/thomas/zfs gets mounted
> and then the
> automounter starts. (Or /home/thomas is found
> missing and then
> the zfs mount is not completed)
>
> Probably requires legacy mount point.
>
>
> Casper
I'm experiencing this
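For reference, a hedged sketch of the legacy-mount-point suggestion above (dataset name and path are hypothetical):
# zfs set mountpoint=legacy pool/home/thomas
plus a matching /etc/vfstab line so the filesystem is mounted before the automounter starts:
pool/home/thomas  -  /home/thomas  zfs  -  yes  -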
Anton B. Rang writes:
> If your database performance is dominated by sequential reads, ZFS may
> not be the best solution from a performance perspective. Because ZFS
> uses a write-anywhere layout, any database table which is being
> updated will quickly become scattered on the disk, so that s
dudekula mastan wrote:
Hi ALL,
Is it possible to install Solaris 10 on an HP VISUALIZE XL-Class server?
The ZFS discussion alias is probably not the best place to ask this.
In general, the way to find out about Solaris support on a particular
hardware platform is to look at the hardware