Wee Yeh Tan wrote:
On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
bash-3.00# mdb -k
Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md
ip sctp usba fcp fctl qlc ssd crypto lofs zfs random ptm cpc nfs ]
> segmap_percent/D
segmap_percent:
segmap_percent: 12
(it's static IIRC)
Mario Goebbels wrote:
With "one disk" I basically mean pools consisting of a single toplevel vdev.
The current documentation states this restriction: either a single disk or a mirror.
The thing I have in mind is the ability to create a single pool of all disks in
a system as top level devices …
I tried your kit, specifically the "Detailed Steps for the Install"
Unfortunately, it didn't work. I copy/pasted the profile from the README
file, then ran pfinstall /tmp/profile. The error was:
Error: Field 1 - Keyword "pool" is invalid.
Perhaps I messed up?
This error is what I would expect …
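A hedged sketch of the kind of profile involved (the "pool" keyword is only recognized once the install image has been patched with the kit, and the exact field layout below is an assumption based on the kit's conventions, not copied from the README):

  # /tmp/profile -- hypothetical sketch for a zfs-root install
  install_type  initial_install
  cluster       SUNWCuser
  # assumed form: pool <name> <poolsize> <swapsize> <dumpsize> <devices>
  pool          rootpool  auto  auto  auto  c0t0d0s0

Running pfinstall against an unpatched image would produce exactly this "Keyword "pool" is invalid" failure.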
On 4/23/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
bash-3.00# mdb -k
Loading modules: [ unix krtld genunix dtrace specfs ufs sd pcisch md ip sctp
usba fcp fctl qlc ssd crypto lofs zfs random ptm cpc nfs ]
> segmap_percent/D
segmap_percent:
segmap_percent: 12
(it's static IIRC)
segmap_per
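For reference, the same value can be read non-interactively, and since segmap_percent is evaluated at boot, a persistent change would go through /etc/system (the value below is only an example):

  # read the current value from the live kernel, printed as a decimal
  echo "segmap_percent/D" | mdb -k
  # /etc/system fragment to change it at the next boot (example value):
  #   set segmap_percent=20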
Hello zfs-discuss,
Traffic to the pool is relatively low, but sync takes too long to complete,
and other operations are not that fast either.
Disks are on a 3510 array. zil_disable=1.
bash-3.00# ptime sync
real 1:21.569
user 0.001
sys 0.027
During sync zpool iostat and vmstat loo
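One way to correlate the slow sync with what the pool is actually doing is to time it against per-second pool statistics (standard commands, nothing thread-specific assumed):

  # terminal 1: per-vdev bandwidth and ops, once per second
  zpool iostat -v 1
  # terminal 2: wall/user/sys time of the sync itself
  ptime sync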
Eric Schrock wrote:
This is:
6538017 ZFS boot to support gzip decompression
This should be fixed in the near future. In the meantime, lzjb should
work just fine (albeit with a lower compression ratio).
Unfortunately, lzjb is not working either and needs to be fixed as well; see:
6541114 GRUB
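Until both bugs are fixed, the workaround amounts to keeping the boot dataset on lzjb. A minimal sketch (the pool/dataset names are examples, not taken from this thread):

  # compression only affects newly written blocks; existing gzip blocks
  # stay gzip-compressed until the files are rewritten
  zfs set compression=lzjb rootpool/rootfs
  zfs get compression rootpool/rootfs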
Hello Robert,
Friday, April 13, 2007, 2:07:11 AM, you wrote:
RM> Hello Enda,
RM> Thursday, April 12, 2007, 2:36:39 PM, you wrote:
EOCSMSI>> Robert Milkowski wrote:
>>> Hello Enda,
>>>
>>> Wednesday, April 11, 2007, 4:21:35 PM, you wrote:
>>>
>>> EOCSMSI> Robert Milkowski wrote:
> Hello zf
Mario Goebbels wrote:
With "one disk" I basically mean pools consisting of a single toplevel vdev.
The current documentation states this restriction: either a single disk or a mirror.
Yes, it is still the case that the root pool has to be either a single
vdev pool or a mirror.
Currently, we
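As a sketch of what that restriction allows (device names are examples):

  # single top-level vdev -- allowed for a root pool
  zpool create rootpool c0t0d0s0
  # mirrored top-level vdev -- also allowed
  zpool create rootpool mirror c0t0d0s0 c0t1d0s0
  # two top-level vdevs (a stripe) -- not bootable under this restriction
  #   zpool create rootpool c0t0d0s0 c0t1d0s0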
Hello Wee,
Sunday, April 22, 2007, 11:25:23 AM, you wrote:
WYT> On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> Hello Wee,
>>
>> Friday, April 20, 2007, 5:20:00 AM, you wrote:
>>
>> WYT> On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> >> You can limit how much memory zfs can
Hello zfs-discuss,
bash-3.00# uname -a
SunOS nfs-10-1.srv 5.10 Generic_125100-04 sun4u sparc SUNW,Sun-Fire-V440
zil_disable set to 1
Disks are over FCAL from a 3510.
bash-3.00# dtrace -n 'fbt::*SYNCHRONIZE*:entry {printf("%Y", walltimestamp);}'
dtrace: description 'fbt::*SYNCHRONIZE*:entry' matched
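A slightly extended variant of the same probe, counting which SYNCHRONIZE functions fire per second instead of printing a timestamp per call (a sketch, not from the thread):

  dtrace -n 'fbt::*SYNCHRONIZE*:entry { @[probefunc] = count(); }
             tick-1s { printa(@); trunc(@); }'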
> The kit that I promised for patching an install image
> to support the profile-based install of systems with
> zfs root file systems has been posted. It's at:
>
> http://www.opensolaris.org/os/community/install/files/zfsboot-kit-20060418.i386.tar.bz2
>
> Unpack it and see the README file for …
On Sun, Apr 22, 2007 at 08:53:04PM +0300, Cyril Plisko wrote:
> Hi,
>
> I am having a problem booting from the zfs filesystem with compression
> set to gzip. I netinstalled the machine and switched the compression to
> gzip during the early installation stages. After the installation I am
> getting straight to the GRUB prompt instead of the normal menu.
On Sun, Apr 22, 2007 at 01:57:50PM -0700, Andrew wrote:
> eschrock wrote:
> > Unfortunately, there is one exception to this rule. ZFS currently does
> > not handle write failure in an unreplicated pool. As part of writing
> > out data, it is sometimes necessary to read in space map data. If this
>
eschrock wrote:
> Unfortunately, there is one exception to this rule. ZFS currently does
> not handle write failure in an unreplicated pool. As part of writing
> out data, it is sometimes necessary to read in space map data. If this
> fails, then we can panic due to write failure. This is a known bug …
With "one disk" I basically mean pools consisting of a single toplevel vdev.
The current documentation states this restriction: either a single disk or a
mirror.
The thing I have in mind is the ability to create a single pool of all disks in
a system as top level devices, basically a JBOD or even …
Hi,
I am having a problem booting from the zfs filesystem with compression
set to gzip. I netinstalled the machine and switched the compression to
gzip during the early installation stages. After the installation I am
getting straight to the GRUB prompt instead of the normal menu. The
attempt to manually e…
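One plausible recovery path, heavily hedged since the pool and dataset names below are assumptions: boot the netinstall/failsafe media, import the pool, and turn gzip off so newly written boot files become readable again.

  zpool import -f rootpool
  zfs set compression=lzjb rootpool/rootfs
  # existing blocks remain gzip-compressed; the boot files must be
  # rewritten (e.g. by rebuilding the boot archive) before GRUB can
  # read them and show the menu again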
Mario Goebbels wrote:
I was wondering if it's planned to put some control over metaslab
allocation into the hands of the user. What I have in mind is an
attribute on a ZFS filesystem that acts as a modifier to the allocator.
Scenarios for this would be directly controlling performance
characteristics, e.g. having …
Mario Goebbels wrote:
I just perused through the ZFS Best Practices wiki entry on
solarisinternals.com and it says that for ZFS boot, the pool
is restricted to one disk and mirror pools. Is this still
applicable to build 62 and the mentioned "new code"?
I'm not sure what you mean by "one disk" …
I was wondering if it's planned to put some control over metaslab
allocation into the hands of the user. What I have in mind is an attribute on a
ZFS filesystem that acts as a modifier to the allocator. Scenarios for this would
be directly controlling performance characteristics, e.g. having …
I just perused through the ZFS Best Practices wiki entry on
solarisinternals.com and it says that for ZFS boot, the pool is restricted to
one disk and mirror pools. Is this still applicable to build 62 and the
mentioned "new code"? The availability of the "zpool set bootfs" command
suggests that …
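For reference, the command in question sets a pool property naming the dataset GRUB boots from (names below are examples):

  zpool set bootfs=rootpool/rootfs rootpool
  zpool get bootfs rootpool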
On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
Hello Wee,
Friday, April 20, 2007, 5:20:00 AM, you wrote:
WYT> On 4/20/07, Robert Milkowski <[EMAIL PROTECTED]> wrote:
>> You can limit how much memory zfs can use for its caching.
>>
WYT> Indeed, but that memory will still be locked. Ho
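The usual way to put a hard cap on the ARC is an /etc/system tunable, on builds that have it (reboot required; the value is only an example):

  # /etc/system fragment: cap the ARC at 512 MB
  set zfs:zfs_arc_max = 0x20000000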