Hi guys,
I have added devid support for EFI (not putback yet) and tested it with a ZFS mirror. The mirror can now recover even when a USB hard disk is unplugged and replugged into a different USB port.
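Roughly, the test was along these lines (the device names below are just examples):

    zpool create usbmirror mirror c2t0d0 c3t0d0
    # write some data, unplug one USB disk, replug it into a different port
    zpool status usbmirror
    # with the devid support the pool finds the moved disk again and resilvers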
But there is still something that needs improving. I'm far from a ZFS expert, so correct me if I'm wrong.
Fir
On Wed, May 24, 2006 at 11:55:28AM -0400, Matthew B Sweeney - Sun Microsystems
Inc. wrote:
> Do we have an FAQ regarding ZFS and removable media? IHAC who's looking
> to know if a single ZFS can span several removable devices.
Yes, that works fine.
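For example, one pool built across two removable disks (hypothetical device names):

    zpool create rmpool c2t0d0 c3t0d0
    zfs create rmpool/data
    zpool status rmpool    # shows both devices backing the single pool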
--matt
> I previously wrote about my scepticism on the claims that zfs selectively
> enables and disables write cache, to improve throughput over the usual
> solaris defaults prior to this point.
I have snv_38 here, with a zpool thus:
bash-3.1# zpool status
  pool: zfs0
 state: ONLINE
 scrub: scrub c
On Tue, May 30, 2006 at 12:30:50PM +0200, Constantin Gonzalez Schmitz wrote:
> Hi,
>
> >>Yes, a trivial wrapper could:
> >>1. Store all property values in a file in the fs
> >>2. zfs send...
> >>3. zfs receive...
> >>4. Set all the properties stored in that file
> >
> >IMHO 3. and 4. need to be sw
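A rough sketch of what such a wrapper could look like, with placeholder dataset names and a temporary file rather than a file inside the fs:

    SRC=tank/fs
    DST=tank/fs_copy
    PROPS=/tmp/zfsprops.$$

    # 1. save the locally set properties of the source filesystem
    zfs get -H -s local -o property,value all $SRC > $PROPS

    # 2. and 3. send a snapshot and receive it into the new dataset
    zfs snapshot $SRC@xfer
    zfs send $SRC@xfer | zfs receive $DST

    # 4. re-apply the saved properties on the receiving side
    while read prop value; do
            zfs set "$prop=$value" $DST
    done < $PROPS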
I previously wrote about my scepticism on the claims that zfs selectively
enables and disables write cache, to improve throughput over the usual
solaris defaults prior to this point.
I posted my observations that this did not seem to be happening in any
meaningful way, for my zfs, on build nv3
Do you want the vmcore file from /var/crash or something else? Where can I
upload it to, supportfiles.sun.com? The bzip'd vmcore file is ~35MB.
Thanks,
Nate
Jason Williams wrote:
Hi James,
Thanks for the quick response! Please find the requested info in the attached log file. Also, thank you for giving me the exact commands you needed run; I'm not very adept at Solaris debugging yet. :-) If you need anything else, please let me know.
ZFS's output aggregation mechanisms seem entirely adequate in terms of
throughput, given that the ZIL should mask what would otherwise be poor disk
utilization in the event of many small, synchronous writes. The problems are
purely on the input side (just as they are with RAID-Z).
The rea
Thanks, that's exactly what I was looking for.
Ed Plese
On Wed, Jun 14, 2006 at 10:09:35AM -0700, Eric Schrock wrote:
> No, but this is a known issue. See:
>
> 6431277 want filesystem-only quotas
>
> - Eric
>
> On Wed, Jun 14, 2006 at 11:58:25AM -0500, Ed Plese wrote:
> > It seems by design
Nathanael,
This looks like a bug. We are trying to clean up after an error in
zfs_getpage() when we trigger this panic. Can you make a core file
available? I'd like to take a closer look.
I've filed a bug to track this:
6438702 error handling in zfs_getpage() can trigger "page not lo
No, but this is a known issue. See:
6431277 want filesystem-only quotas
- Eric
On Wed, Jun 14, 2006 at 11:58:25AM -0500, Ed Plese wrote:
> It seems by design that ZFS counts the space used by snapshots towards
> the filesystem quotas. I can see many cases where this would be the
> desired beha
Ed Plese wrote:
It seems by design that ZFS counts the space used by snapshots towards
the filesystem quotas. I can see many cases where this would be the
desired behavior, but for the common case of using one filesystem per
user home directory in combination with quotas and snapshots, this
does
It seems by design that ZFS counts the space used by snapshots towards
the filesystem quotas. I can see many cases where this would be the
desired behavior, but for the common case of using one filesystem per
user home directory in combination with quotas and snapshots, this
doesn't seem to work t
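A quick way to see the effect (pool name, paths, and sizes here are made up):

    zfs create -o quota=1g tank/home/alice
    mkfile 600m /tank/home/alice/data
    zfs snapshot tank/home/alice@monday
    rm /tank/home/alice/data
    zfs list -o name,used,available tank/home/alice
    # 'used' stays around 600M because the snapshot still references the
    # blocks, and that space keeps counting against the 1 GB quota.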
For output ops, ZFS could set up a 10 MB I/O transfer to disk starting at sector X, or chunk that up into 128K pieces while still assigning the same range of disk blocks to the operations. Yes, there will be a little more control information going around and a little more CPU consumed, but the disk w
billtodd wrote:
I do want to comment on the observation that "enough concurrent 128K I/O can
saturate a disk" - the apparent implication being that one could therefore do
no better with larger accesses, an incorrect conclusion. Current disks can
stream out 128 KB in 1.5 - 3 ms., while taking 5
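To put rough numbers on that (the access time and media rate here are assumptions consistent with the figures above):

    # effective throughput = transfer size / (access time + transfer time)
    for kb in 128 1024 8192; do
            nawk -v kb=$kb 'BEGIN {
                    seek_ms = 5       # assumed average access time
                    media   = 64      # assumed media rate, MB/s (~128 KB in 2 ms)
                    xfer_ms = kb / 1024 / media * 1000
                    mbs     = (kb / 1024) / ((seek_ms + xfer_ms) / 1000)
                    printf "%5d KB per I/O -> ~%2.0f MB/s effective\n", kb, mbs
            }'
    done

Going from 128 KB to 8 MB per random access roughly triples the throughput you actually get out of the spindle, which is the point.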
Hi,
On Wednesday, 14 June 2006, at 15:38, David Blacklock wrote:
> -thanks,
> Dave Blacklock
You can subscribe yourself here:
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
--
Jan Spitalnik
JSC-QA
-thanks,
Dave Blacklock
It's easy to reproduce seconds-long mkdir (and other ops too) with a side load of dd, even on local ZFS. The ZFS team has started investigating this:
6429205 each zpool needs to monitor its throughput and throttle heavy writers
which will help bound the time for such operations.
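For reference, a trivial way to reproduce it (the pool and paths are placeholders):

    # side load: a big sequential writer into the same pool
    dd if=/dev/zero of=/tank/fs/bigfile bs=1024k count=4096 &

    # while that runs, time a small metadata operation
    ptime mkdir /tank/fs/newdir
    # the mkdir can take seconds instead of milliseconds under the write load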
Jason Williams wrote:
Setup:
- T2000 running Solaris Express Build 41.
- Qlogic 2342 HBA (using both ports, multipathed via MPxIO).
- StorageTek FLX210 (Engenio 2882) FC array sliced into two 6-disk RAID-1 volumes (multipathed via MPxIO).
- Brocade SilkWorm 3850 running FabricOS 4.2.0.
- Created a str