Ivan Wang wrote:
So, is there any date yet for when the install utility will support a fresh
install onto a ZFS root? I almost can't wait for that.
Hi Ivan,
there's no firm date for this yet, though the install team are
working *really* hard at getting this to happen as soon as humanly
possible.
James C. McPherson
--
After replacing a bad disk and waiting for the resilver to complete, I
started a scrub of the pool. Currently, I have the pool mounted
readonly, yet almost a quarter of the I/O is writes to the new disk.
In fact, it looks like there are so many checksum errors that zpool
doesn't even list them p
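For anyone trying to reproduce this, a rough way to watch the read/write
split and the per-device error counters during a scrub (pool name here is
hypothetical):

  # zpool scrub tank
  # zpool iostat -v tank 5   # per-vdev read/write ops and bandwidth every 5s
  # zpool status -v tank     # per-device read/write/checksum error counts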
Lori Alt wrote:
Lori Alt wrote:
Can the write cache not be turned on manually, when the user is sure that
only ZFS is using the entire disk?
Yes, it can be turned on. But I don't know whether ZFS would then know about
it. I'd still feel more comfortable with it being turned off unless ZFS itself
does it.
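For the record, a sketch of enabling the write cache by hand through the
expert mode of format(1M); the exact menu entries vary by disk driver, so
treat this as an outline rather than a recipe:

  # format -e
  (select the disk from the menu)
  format> cache
  cache> write_cache
  write_cache> display
  write_cache> enable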
Matthew Ahrens wrote:
Miroslav Pendev wrote:
I did some more testing; here is what I found:
- I can destroy older and newer snapshots, just not that particular
snapshot
- I added some more memory, 1GB total; now, after I start the destroy
command, ~500MB of RAM is taken right away, there is sti
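One way to see where that memory goes while the destroy runs; the snapshot
name here is hypothetical:

  # zfs destroy tank/fs@stuck-snapshot &
  # echo ::memstat | mdb -k   # kernel memory breakdown during the destroy
  # vmstat 5                  # watch the free-memory column over time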
On Wed, Apr 04, 2007 at 11:04:06PM +0200, Robert Milkowski wrote:
> If I stop all activity to x4500 with a pool made of several raidz2 and
> then I issue spare attach I get really poor performance (1-2MB/s) on a
> pool with lot of relatively small files.
Does that mean the spare is resilvering whe
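zpool status should answer that: while a resilver is running it says so in
the scrub line. Output shape from memory, pool name hypothetical:

  # zpool status tank
    ...
    scrub: resilver in progress, 12.34% done, 2h31m to go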
Hello Adam,
Wednesday, April 4, 2007, 7:08:07 PM, you wrote:
AL> On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
>> - RAID-Z is _very_ slow when one disk is broken.
AL> Do you have data on this? The reconstruction should be relatively cheap
AL> especially when compared with
It has been pointed out to me that if you have set up
a zfs boot configuration using the old-style prototype
code (where you had to have a ufs boot slice), and you
BFU that system with a version of the Solaris archives
that contain the new zfsboot support, your system will
panic. So until we figu
Resent, for Fred...
Hi gurus,
When creating some small files in a ZFS directory, the number of used blocks
is not what one would expect:
hinano# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool2       702K  16.5G  26.5K  /pool2
pool2/new   604K  16.5G    34K  /po
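Part of the difference is usually per-file metadata and block rounding. A few
things worth checking, using the dataset from the listing above:

  # zfs get recordsize,compression,copies pool2/new
  # ls -ls /pool2/new   # first column: blocks actually allocated per file
  # du -k /pool2/new    # allocated space, vs. the logical sizes from ls -l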
On Wed, Apr 04, 2007 at 10:08:07AM -0700, Adam Leventhal wrote:
> On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
> > - RAID-Z is _very_ slow when one disk is broken.
>
> Do you have data on this? The reconstruction should be relatively cheap
> especially when compared with th
On Wed, Apr 04, 2007 at 07:57:21PM +1000, Darren Reed wrote:
> From: "Darren J Moffat" <[EMAIL PROTECTED]>
> ...
> >The other problem is that you basically need a global unique registry
> >anyway so that compress algorithm 1 is always lzjb, 2 is gzip, 3 is
> >etc etc. Similarly for crypto a
On Wed, Apr 04, 2007 at 03:34:13PM +0200, Constantin Gonzalez wrote:
> - RAID-Z is _very_ slow when one disk is broken.
Do you have data on this? The reconstruction should be relatively cheap
especially when compared with the initial disk access.
Adam
--
Adam Leventhal, Solaris Kernel Developme
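One way to put numbers on it, sketched only (device and file names are
hypothetical, and the test file should be much larger than RAM so the ARC
doesn't mask the difference):

  # dd if=/tank/bigfile of=/dev/null bs=1024k   # baseline, healthy pool
  # zpool offline tank c1t3d0                   # simulate the broken disk
  # dd if=/tank/bigfile of=/dev/null bs=1024k   # degraded: reads reconstruct
  # zpool online tank c1t3d0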
> Gino,
> I just had a similar experience and was able to import the pool when I
> added the readonly option (zpool import -f -o ro)
>
no way ... We still get a panic :(
gino
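For anyone else hitting this, the readonly import being suggested above looks
like this (pool name is hypothetical):

  # zpool import -f -o ro tank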
Hello Constantin,
Wednesday, April 4, 2007, 3:34:13 PM, you wrote:
CG> - RAID-Z is slow when writing; you basically get only one disk's bandwidth.
CG> (Yes, with variable block sizes this might be slightly better...)
No, it's not.
It's actually very fast for writing, in many cases it would be
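An easy way to check that claim on a given pool, as a rough sketch only (pool
and file names are hypothetical; writing zeros is only a fair test if
compression is off on the target filesystem):

  # zpool iostat -v tank 5        # in one terminal: per-disk write bandwidth
  # dd if=/dev/zero of=/tank/testfile bs=1024k count=4096   # in another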
Hi,
Manoj Joseph wrote:
> Can the write cache not be turned on manually, when the user is sure that
> only ZFS is using the entire disk?
Yes, it can be turned on. But I don't know whether ZFS would then know about
it. I'd still feel more comfortable with it being turned off unless ZFS itself
does
Hi Matt,
I have tried both and every time the server panics with the same error message.
I guess my pool is foobar
Thanks a lot for the help,
Bertrand.
Frederic Payet - Availability Services wrote:
Hi gurus,
When creating some small files in a ZFS directory, the number of used blocks
is not what one would expect:
hinano# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool2       702K  16.5G  26.5K  /pool2
pool2/new
Constantin Gonzalez wrote:
Do I still have the advantages of having the whole disk
'owned' by zfs, even though it's split into two parts?
I'm pretty sure that this is not the case:
- ZFS has no guarantee that no one else will do something with that other
partition, so it can't assume the r
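The distinction shows up at pool creation time; a sketch with hypothetical
device names:

  # zpool create tank c1t0d0     # whole disk: ZFS labels it and can safely
                                 # enable the disk's write cache
  # zpool create tank c1t0d0s0   # single slice: ZFS must assume the rest of
                                 # the disk belongs to someone else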
Hi,
> Now that zfsboot is becoming available, I'm wondering how to put it to
> use. Imagine a system with 4 identical disks. Of course I'd like to use
you lucky one :).
> raidz, but zfsboot doesn't do raidz. What if I were to partition the
> drives, such that I have 4 small partitions that make
Hello Viktor,
Wednesday, April 4, 2007, 1:17:58 PM, you wrote:
VT> I can get USED in bytes for the file systems in a ZFS pool, but I do not
VT> know how to get USED in bytes for the pool itself.
VT> I need the exact used size on the pool to measure the system
VT> overhead of using snapshots. Any information
I can get USED in bytes for the file systems in a ZFS pool, but I do not know
how to get USED in bytes for the pool itself.
I need the exact used size on the pool to measure the system overhead of using
snapshots. Any information about this?
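If I remember right, zfs get has a scripted, parseable mode that prints exact
byte values, which should cover the pool's top-level dataset (pool name and
output are hypothetical):

  # zfs get -Hp used tank
  tank    used    735232  -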
From: "Darren J Moffat" <[EMAIL PROTECTED]>
...
The other problem is that you basically need a global unique registry
anyway so that compress algorithm 1 is always lzjb, 2 is gzip, 3 is
etc etc. Similarly for crypto and any other transform.
I've two thoughts on that:
1) if there is to be
Hi everyone,
Now that zfsboot is becoming available, I'm wondering how to put it to
use. Imagine a system with 4 identical disks. Of course I'd like to use
raidz, but zfsboot doesn't do raidz. What if I were to partition the
drives, such that I have 4 small partitions that make up a zfsboot
partit
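One layout that seems to fit what zfsboot can do today: a small slice on each
disk, all four mirrored as the root pool, and the remainder of each disk in a
raidz pool. Only a sketch, with hypothetical device names, since zfsboot
currently handles plain disks and mirrors but not raidz:

  # zpool create rootpool mirror c0t0d0s0 c0t1d0s0 c0t2d0s0 c0t3d0s0
  # zpool create datapool raidz c0t0d0s1 c0t1d0s1 c0t2d0s1 c0t3d0s1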