Jonathan Loran wrote:
> Since no one has responded to my thread, I have a question: Is zdb
> suitable to run on a live pool? Or should it only be run on an exported
> or destroyed pool? In fact, I see that it has been asked before on this
> forum, but is there a users guide to zdb?
>
>
Is it also true that ZFS can't be re-implemented in GPLv2 code because then the
CDDL-based patent protections don't apply?
> cannot share 'tank/software': smb add share failed
You meant to post this in storage-discuss, but try:
chmod 777 /tank/software
zfs set sharesmb=name=software tank/software
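If the share still fails after that, it can help to confirm the property
actually took (a minimal sketch, reusing the dataset above; sharemgr output
format may vary by build):
zfs get sharesmb tank/software
sharemgr show -vp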
This is really strange. Check out this error:
[EMAIL PROTECTED] ~]# zfs get sharesmb tank/software tank/music
NAME           PROPERTY  VALUE  SOURCE
tank/music     sharesmb  off    default
tank/software  sharesmb  off    default
(same starting state for both filesystems)
[E
On May 5, 2008, at 4:43 PM, Bob Friesenhahn wrote:
> On Mon, 5 May 2008, eric kustarz wrote:
>>
>> That's not true:
>> http://blogs.sun.com/erickustarz/entry/zil_disable
>>
>> Perhaps people are using "consistency" to mean different things
>> here...
>
> Consistency means that fsync() assures t
On Mon, 5 May 2008, [EMAIL PROTECTED] wrote:
> The problem is that NFS mounts cannot cross filesystem boundaries as
> implemented with ZFS and Solaris 10. For example, we have
> client machines mounting to /groups/accounting... but we also have
> clients mounting to /groups directly.
On my
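For illustration, the two client-side mounts described above might look like
this (a hypothetical sketch; the server name 'fileserver' is invented):
mount -F nfs fileserver:/groups /groups
mount -F nfs fileserver:/groups/accounting /groups/accounting
Each ZFS filesystem is a separate export, so the client has to mount every
child explicitly; mounting /groups alone will not descend into the children.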
On Mon, 5 May 2008, Marcelo Leal wrote:
> I'm calling consistency "a coherent local view"...
> I think that was one option to debug (if not an NFS server) without
> generating a corrupted filesystem.
In other words, your flight reservation will not be lost if the system
crashes.
Bob
On Mon, 5 May 2008, eric kustarz wrote:
>
> That's not true:
> http://blogs.sun.com/erickustarz/entry/zil_disable
>
> Perhaps people are using "consistency" to mean different things here...
Consistency means that fsync() assures that the data will be written
to disk so no data is lost. It is not
After struggling for some time to wedge a ZFS file server into
our environment, I have come to the conclusion that I'm simply going to
have to live without quotas. They have been immensely useful in the past
5 years or so in allowing us to keep track of which groups are hogging
disk space
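For the archives: the usual approximation at the time was one ZFS filesystem
per group plus a filesystem-level quota (a sketch; the pool, names and size
are invented):
zfs create tank/groups/accounting
zfs set quota=50G tank/groups/accounting
zfs list -o name,used,quota -r tank/groups    # per-group usage report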
Since no one has responded to my thread, I have a question: Is zdb
suitable to run on a live pool? Or should it only be run on an exported
or destroyed pool? In fact, I see that it has been asked before on this
forum, but is there a users guide to zdb?
Thanks,
Jon
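For what it's worth: zdb reads the on-disk state directly and is read-only,
so it can be pointed at a live pool, but its output may be inconsistent (or
it may abort) while the pool is changing; the closest thing to a users guide
is the zdb(1M) man page. A minimal sketch, assuming a pool named 'tank':
zdb -C tank    # print the pool configuration
zdb -b tank    # traverse all blocks and verify space accounting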
On May 5, 2008, at 1:43 PM, Bob Friesenhahn wrote:
> On Mon, 5 May 2008, Marcelo Leal wrote:
>
>> Hello, If you believe that the problem can be related to ZIL code,
>> you can try to disable it to debug (isolate) the problem. If it is
>> not a fileserver (NFS), disabling the zil should not impact
On Mon, 5 May 2008, Marcelo Leal wrote:
> Hello, If you believe that the problem can be related to ZIL code,
> you can try to disable it to debug (isolate) the problem. If it is
> not a fileserver (NFS), disabling the zil should not impact
> consistency.
In what way is NFS special when it come
Hello Leal,
I've already been warned
(http://www.opensolaris.org/jive/message.jspa?messageID=231349) that ZIL could
be the cause, and I ran tests with zil_disabled. I ran scrub and the system
crashed after exactly the same period with the same error. ZIL is known to
cause some problems on writes, whi
I have a Solaris 10u3 x86 system patched up with the important kernel/zfs/fs
patches (now running kernel 120012-14).
After executing a 'zpool scrub' on one of my pools, I see I/O read errors:
# zpool status | grep ONLINE | grep -v '0 0 0'
state: ONLINE
c2t1d0 ONLINE 9
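A common next step is to identify the affected files and, once the underlying
cause is fixed, reset the counters (a sketch; substitute your pool's name for
'mypool'):
zpool status -v mypool    # -v lists files with unrecoverable errors
zpool clear mypool        # clear the error counters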
Hello,
If you believe that the problem can be related to ZIL code, you can try to
disable it to debug (isolate) the problem. If it is not a fileserver (NFS),
disabling the zil should not impact consistency.
Leal.
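For reference, the two test-only ways to disable the ZIL on builds of that
era (per the zil_disable blog entry cited elsewhere in this thread; never use
this in production, and filesystems must be remounted for it to take effect):
echo 'set zfs:zil_disable = 1' >> /etc/system    # persistent, needs a reboot
echo zil_disable/W0t1 | mdb -kw                  # live, current boot only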
[Jeff Bonwick:]
| That said, I suspect I know the reason for the particular problem
| you're seeing: we currently do a bit too much vdev-level caching.
| Each vdev can have up to 10MB of cache. With 132 pools, even if
| each pool is just a single iSCSI device, that's 1.32GB of cache.
|
| We need
Rustam wrote:
> Hello Robert,
>
>> Which would happen if you have a problem with HW and you're getting
>> wrong checksums on both sides of your mirrors. Maybe the PS?
>>
>> Try memtest anyway or sunvts
>>
> Unfortunately, SunVTS doesn't run on non-Sun/OEM hardware, and memtest
> requires too much
Hello Robert,
> Which would happen if you have a problem with HW and you're getting
> wrong checksums on both sides of your mirrors. Maybe the PS?
>
> Try memtest anyway or sunvts
Unfortunately, SunVTS doesn't run on non-Sun/OEM hardware, and memtest
requires too much downtime, which I cannot afford right