We are trying to quantify the amount of physical memory consumed by Solaris
as a function of the number of file systems mounted within a ZFS pool.
This is for a situation where there would be 15,000 to 20,000 file systems.
Has anyone measured this? I'm assuming U2 or U3 of Solaris 10.
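One rough way to put a number on this yourself (a sketch only, assuming a Solaris 10 test box; the commands are standard, the per-filesystem arithmetic is just an estimate) is to compare kernel memory before and after mounting the filesystems:

  # echo ::memstat | mdb -k
  # kstat -m zfs -n arcstats -s size

The growth in the Kernel bucket of ::memstat divided by the number of mounted filesystems gives a rough per-filesystem figure, and the arcstats size shows how much of that is just ARC cache rather than per-mount overhead.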
Good timing; I'd like some feedback on some work I'm doing below...
Matt B wrote:
I am trying to determine the best way to move forward with about 35 x86 X4200s.
Each box has 4x 73GB internal drives.
Cool. Nice box.
All the boxes will be built using Solaris 10 11/06. Additionally, these boxes
I executed sync just before this happened
ultra:ultra# mdb -k unix.0 vmcore.0
Loading modules: [ unix krtld genunix specfs dtrace ufs sd pcipsy md ip sctp
usba fctl nca crypto zfs random nfs ptm cpc fcip sppp lofs ]
$c
vpanic(7b653bd8, 7036fca0, 7036fc70, 7b652990, 0, 60002d0b480)
zio_d
I am curious as to what people are using in both test and production
environments WRT large numbers of ZFS filesystems. Tens of thousands,
hundreds? Does anyone have numbers around boot times, shutdown times, or
system performance with LARGE numbers of fs's? How about sharing many
filesystems via NFS
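On the NFS side, the usual ZFS approach (sketched below with hypothetical pool and filesystem names) is to set sharenfs once on a parent filesystem and let the thousands of children inherit it, rather than maintaining dfstab entries:

  # zfs create tank/home
  # zfs set sharenfs=on tank/home
  # zfs create tank/home/user1      (inherits sharenfs=on and is shared automatically)
  # zfs get -r sharenfs tank/home

With tens of thousands of filesystems, the time spent in 'zfs share -a' at boot is exactly the kind of number the question above is after.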
On 06 March, 2007 - Matt B sent me these 2.5K bytes:
> I am trying to determine the best way to move forward with about 35 x86
> X4200s
> Each box has 4x 73GB internal drives.
>
> This would leave each disk with 64GB of free space, totaling 256GB. I
> would then create a single ZFS pool of all
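If the 64GB left on each disk sits in its own slice, the pool quoted above is a one-liner; the slice names below are placeholders, not taken from Matt's actual layout:

  # zpool create datapool c0t0d0s7 c0t1d0s7 c0t2d0s7 c0t3d0s7

A plain stripe like that gives the full 256GB but no redundancy; 'zpool create datapool raidz c0t0d0s7 ...' or two mirrored pairs would trade roughly a quarter or half of the capacity for protection against a disk failure.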
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 02/16 - 02/28
=
Size of all threads during period
I am trying to determine the best way to move forward with about 35 x86 X4200s.
Each box has 4x 73GB internal drives.
All the boxes will be built using Solaris 10 11/06. Additionally, these boxes
are part of a highly available production environment with an uptime
expectation of 6 9's (just a f
Brian Hechinger wrote on 03/06/07 14:52:
On Tue, Mar 06, 2007 at 02:49:35PM -0700, Lori Alt wrote:
The latest on when the updated zfsboot support will
go into Nevada is either build 61 or 62. We are
making some final fixes and getting tests run. We
are aiming for 61, but we might just miss it.
On Tue, Mar 06, 2007 at 02:49:35PM -0700, Lori Alt wrote:
> The latest on when the updated zfsboot support will
> go into Nevada is either build 61 or 62. We are
> making some final fixes and getting tests run. We
> are aiming for 61, but we might just miss it. In
> that case, we should be putting back into 62.
The latest on when the updated zfsboot support will
go into Nevada is either build 61 or 62. We are
making some final fixes and getting tests run. We
are aiming for 61, but we might just miss it. In
that case, we should be putting back into 62.
Lori
Michael Lee wrote:
Hi,
I need to copy files from an old ZFS pool on an old hard drive to a new one on
a new HD.
With UFS, you can just mount a partition from an old drive to copy files to a new drive.
What's the equivalent process to do that with ZFS?
Use 'zpool import' to make the old pool available on the new system.
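A minimal sketch of that route, assuming the old disk is attached to the new system and using placeholder pool names:

  # zpool import                    (scans attached devices and lists importable pools)
  # zpool import oldpool            (add -f if it was never exported from the old host)
  # zfs snapshot oldpool/data@move
  # zfs send oldpool/data@move | zfs recv newpool/data

A plain cp or rsync out of /oldpool also works once the pool is imported; send/recv is just another way to move a whole filesystem over.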
On March 6, 2007 11:23:26 AM -0800 Brian Gao <[EMAIL PROTECTED]> wrote:
ZFS claims that it can recover from user errors such as accidental deletion
of files. How does it work? Does it only work for mirrored or RAID-Z
pools? What is the command to perform the task?
zfs snapshot
Also for COW, I understand
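The one-word answer above deserves a little expansion; a minimal sketch, with hypothetical filesystem and snapshot names:

  # zfs snapshot tank/home@monday       (works on any pool: mirrored, RAID-Z or single disk)
  # zfs list -t snapshot
  # zfs rollback tank/home@monday       (reverts the whole filesystem to that point in time)

The snapshot has to exist before the accidental delete; ZFS does not keep deleted files around on its own.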
Hi Brian,
Brian Gao wrote:
ZFS claims that it can recover from user errors such as accidental
deletion of files.
Can you show us where you read that? At the moment, the only way this
is possible is by taking regular snapshots of your ZFS filesystems,
allowing users to go back to a previous snapshot
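For a single lost file there is no need to roll anything back; each snapshot is exposed read-only under the filesystem's .zfs directory (names below are placeholders):

  # ls /tank/home/.zfs/snapshot/monday/
  # cp /tank/home/.zfs/snapshot/monday/important.doc /tank/home/

The .zfs directory is hidden from a plain ls by default, but it is always reachable by explicit path; 'zfs set snapdir=visible tank/home' makes it show up if you prefer.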
ZFS claims that it can recover from user errors such as accidental deletion of
files. How does it work? Does it only work for mirrored or RAID-Z pools? What is
the command to perform the task?
Also for COW, I understand that during the transaction (while data is being
updated), ZFS keeps a copy of t
The pNFS protocol doesn't preclude varying metadata server designs
and their various locking strategies.
As an example, there has been work going on at the University of Michigan/CITI
to extend the Linux/NFSv4 implementation to allow for a pNFS server on
top of the Polyserve solution.
Spencer
On Mon, Mar 05, 2007 at 08:20:33PM -0600, Mike Gerdts wrote:
> On 2/28/07, Dean Roehrich <[EMAIL PROTECTED]> wrote:
> >ASM was StorageTek's rebranding of SAM-QFS. SAM-QFS is already a shared
> >(clustering) filesystem. You need to upgrade :) Look for "Shared QFS".
>
> ASM as Oracle states it i
Jesse, you can change txg_time with mdb:
echo "txg_time/W0t1" | mdb -kw
-r
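For what it's worth, 0t1 is mdb notation for decimal 1, so the write above drops the txg sync interval from the default 5 seconds down to 1. The current value can be read back the same way, and the change only lasts until reboot since it patches the running kernel:

  # echo "txg_time/D" | mdb -k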
> [EMAIL PROTECTED] wrote on 03/05/2007 03:56:28 AM:
>
> > one question,
> > is there a way to stop the default txg push behaviour (push at regular
> > timestep-- default is 5sec) but instead push them "on the fly"...I
> > would imagine this is better in the case of an application doing bi