I wrote a simple script to dump zfs snapshots in a similar way to
what I did with ufsdump. In particular, it supports dump levels.
This is still very crude, but I'd appreciate comments. Does what
I'm doing make sense, or do I misunderstand zfs?
Thanks.
-
#!/bin/ksh
#
# XXX - real option parsing
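For anyone reading along: the script above got cut off. A minimal sketch of the level-based approach it describes, mapping ufsdump-style dump levels onto incremental zfs sends (the snapshot naming scheme and argument handling here are my own illustration, not the poster's actual code):

#!/bin/ksh
# Sketch: ufsdump-style dump levels on top of zfs send.
# Usage: zfsdump <filesystem> <level> <dumpfile>
# Assumes a snapshot named level.N was left behind by each completed level N.
fs=$1
level=$2
dumpfile=$3

snap="$fs@level.$level"
zfs snapshot "$snap" || exit 1

if [ "$level" -eq 0 ]; then
    # Level 0: full stream of the new snapshot.
    zfs send "$snap" > "$dumpfile"
else
    # Level N: incremental stream from the previous level's snapshot.
    prev=$((level - 1))
    zfs send -i "$fs@level.$prev" "$snap" > "$dumpfile"
fi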
>>
>> CIFS uses TCP. NFS uses either TCP or UDP, and usually UDP by default.
>
> For Sun systems, NFSv3 using 32kByte [rw]size over TCP has been
> the default configuration for 10+ years. Do you still see clients running
> NFSv2 over UDP?
Yes, I see that TCP is the default in Solaris 9. Is it
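For what it's worth, on a Solaris client "nfsstat -m" prints the negotiated vers= and proto= flags for each NFS mount, and you can force the transport explicitly at mount time (server and paths below are placeholders):

# nfsstat -m
# mount -o vers=3,proto=tcp server:/export /mnt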
Bob Friesenhahn wrote:
> On Fri, 28 Mar 2008, abs wrote:
>
>
>> Sorry for being vague but I actually tried it with the cifs in zfs
>> option, but I think I will try the samba option now that you mention
>> it. Also is there a way to actually improve the nfs performance
>> specifically?
>>
Fred Oliver wrote:
> I am having trouble destroying a zfs file system (device busy) and fuser
> isn't telling me who has the file open:
>
> # zfs destroy files/custfs/cust12/2053699a
> cannot unmount '/files/custfs/cust12/2053699a': Device busy
>
> # zfs unmount files/custfs/cust12/2053699a
> cannot unmount '/files/custfs/cust12/2053699a': Device busy
Is there a zfs recv-like command that will list a table of contents
for what's in a stream?
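The closest thing I'm aware of is the receive dry-run. Assuming the stream was saved to /tmp/backup.zfs and using a scratch dataset name (illustrative only):

# zfs receive -nv pool/restored < /tmp/backup.zfs

-n makes no changes on disk and -v prints what the stream would create, which for a recursive (zfs send -R) stream amounts to a table of contents.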
On Dec 7, 2007, at 1:05 PM, Karl Pielorz wrote:
>
>
> --On 07 December 2007 11:18 -0600 Jason Morton
> <[EMAIL PROTECTED]> wrote:
>
>> I am using ZFS on FreeBSD 7.0_beta3. This is the first time I have used
>> ZFS and I have run into something that I am not sure if this is normal, bu
Marion Hakanson wrote:
> [EMAIL PROTECTED] said:
>> I am having trouble destroying a zfs file system (device busy) and fuser
>> isn't telling me who has the file open:
>> . . .
>> This situation appears to occur every night during a system test. The only
>> peculiar operation on the errant file system is that another system
[EMAIL PROTECTED] said:
> I am having trouble destroying a zfs file system (device busy) and fuser
> isn't telling me who has the file open:
> . . .
> This situation appears to occur every night during a system test. The only
> peculiar operation on the errant file system is that another system
I am having trouble destroying a zfs file system (device busy) and fuser
isn't telling me who has the file open:
# zfs destroy files/custfs/cust12/2053699a
cannot unmount '/files/custfs/cust12/2053699a': Device busy
# zfs unmount files/custfs/cust12/2053699a
cannot unmount '/files/custfs/cust12/2053699a': Device busy
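The usual sequence when fuser on the mountpoint comes up empty (paths copied from the example above):

# fuser -cu /files/custfs/cust12/2053699a      # anything open or cwd'd there?
# zfs unmount -f files/custfs/cust12/2053699a  # last resort: force the unmount
# zfs destroy files/custfs/cust12/2053699a

Keep in mind fuser only sees local opens; a dataset shared over NFS or CIFS can stay busy without any local process showing up.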
On Fri, 28 Mar 2008, abs wrote:
> Sorry for being vague but I actually tried it with the cifs in zfs
> option, but I think I will try the samba option now that you mention
> it. Also is there a way to actually improve the nfs performance
> specifically?
CIFS uses TCP. NFS uses either TCP or UDP, and usually UDP by default.
Trevor Watson wrote:
> I don't suppose that there's any chance it could be caused by the
> disks being powered down, could it?
Note that the default retry interval for disks is 60 seconds... sounds like
a plausible explanation.
-- richard
>
> Neal Pollack wrote:
>> For the last few builds of Nevada, if I come back to my workstation after
>> long idle periods such as overnight, and try any command that would touch
>> the zfs filesystem, it hangs
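If spun-down disks turn out to be the cause, the place to look is /etc/power.conf; something along these lines keeps a disk from being power-managed (the device path is illustrative; see power.conf(4) for the exact syntax on your release):

device-thresholds /dev/dsk/c0t0d0 always-on

followed by running pmconfig to make it take effect.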
abs wrote:
> Sorry for being vague but I actually tried it with the cifs in zfs
> option, but I think I will try the samba option now that you mention
> it. Also is there a way to actually improve the nfs performance
> specifically?
We have some recommendations for improving NFS with ZFS on th
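Two of the simpler ones, purely as illustration (the dataset name is made up):

# zfs set atime=off tank/export        # skip access-time updates on reads
# zfs set sharenfs=on tank/export

Beyond that, for synchronous-write-heavy NFS loads the usual advice is a separate intent-log device (slog) on fast storage.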
Sorry for being vague but I actually tried it with the cifs in zfs option, but
I think I will try the samba option now that you mention it. Also is there a
way to actually improve the nfs performance specifically?
cheers,
abs
"Peter Brouwer, Principal Storage Architect, Office of the Chief Tec
That is the first thing I checked. Prior to that I was getting somewhere
around 1-5 MB/sec. Thank you though.
Dale Ghent <[EMAIL PROTECTED]> wrote:
Have you turned on the "Ignore cache flush commands" option on the
xraids? You should ensure this is on when using ZFS on them.
/dale
On Ma
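For reference, the host-side counterpart to that array setting on recent Nevada/S10 builds is the /etc/system tunable below; it is only safe when the storage has battery-backed cache, since it tells ZFS to stop issuing cache flushes entirely:

set zfs:zfs_nocacheflush = 1

(requires a reboot to take effect).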
[EMAIL PROTECTED]:~ # mdb -k
Loading modules: [ unix krtld genunix specfs dtrace uppc pcplusmp ufs md mpt ip
hook neti sctp arp usba uhci fcp fctl qlc nca lofs zfs random fcip crypto
logindmux ptm nfs ]
::memstat
Page Summary                Pages                MB  %Tot
------------     ----------------  ----------------  ----
Hi There,
Is there any chance you could go into a little more detail, perhaps even
document the procedure, for the benefit of others experiencing a similar
problem?
We had a mirrored array which, after a power cut, shows the zpool as faulted,
and we are keen to find a way to recover the zpool.
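In the meantime, the usual first steps before any metadata surgery, with the pool name assumed to be tank:

# zpool import                 # does the pool show up as importable at all?
# zpool import -f tank         # force the import if it looks in use elsewhere
# zpool status -v tank         # see which devices are reported as damaged

If the metadata itself is corrupt these can still fail, which is where the manual uberblock rollback discussed in this thread comes in.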
Hi There,
Were you able to fix this problem in the end?
Hi Lukas,
I've encountered a problem similar to yours where a zfs pool became
inaccessible after a reboot with the error "The pool metadata is corrupted."
In my case I'm running on Solaris 10 8/07 127112-11. Can you explain how you
determined the offsets for modifying vdev_uberblock_compare,
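In case it helps others waiting on that explanation: the offsets for a particular kernel can in principle be read out of the live kernel by disassembling the function with mdb (illustrative, not a recipe):

# echo 'vdev_uberblock_compare::dis' | mdb -k

which shows the instruction stream so the comparison logic can be located for your exact patch level.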
I don't suppose that there's any chance it could be caused by the disks being
powered down, could it?
Neal Pollack wrote:
For the last few builds of Nevada, if I come back to my workstation after
long idle periods such as overnight, and try any command that would touch
the zfs filesystem, it hangs
Hello eric,
Thursday, March 27, 2008, 9:36:42 PM, you wrote:
ek> On Mar 27, 2008, at 9:24 AM, Bob Friesenhahn wrote:
>> On Thu, 27 Mar 2008, Neelakanth Nadgir wrote:
>>>
>>> This causes the sync to happen much faster, but as you say,
>>> suboptimal.
>>> Haven't had the time to go through the bu