On Thu, May 20, 2010 at 8:34 PM, Richard Elling wrote:
> On May 20, 2010, at 11:07 AM, Asif Iqbal wrote:
>
>> On Thu, May 20, 2010 at 1:51 PM, Asif Iqbal wrote:
>>> I have a T2000 with a dual-port 4Gb HBA (QLE2462) and a 3510FC with
>>> one 2Gb/s controller attached to it.
>>> I am running Solaris 10
On Jun 3, 2010, at 13:36, Garrett D'Amore wrote:
Perhaps you have been unlucky. Certainly, there is a window with N+1
redundancy where a single failure leaves the system exposed in
the face of a 2nd fault. This is a statistics game...
It doesn't even have to be a drive failure, but an unr
send the output of "zfs list -o space"
-- richard
On Jun 3, 2010, at 1:06 PM, Andres Noriega wrote:
> Hi everyone, I have a question about the zfs list output. I created a large
> zpool and then carved out 1TB volumes (zfs create -V 1T vtl_pool/lun##).
> Looking at the zfs list output, I'm a l
On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
But is having a RAIDZ2 drop to single redundancy, with replacement
starting instantly, actually as good or better than having a RAIDZ3 drop
to double redundancy, with actual replacement happening later? The
"degraded" state of the RAIDZ3 has the same
On Thu, 3 Jun 2010, Garrett D'Amore wrote:
On Thu, 2010-06-03 at 11:36 -0700, Ketan wrote:
Thanks Rick, but this guide does not offer any method to reduce the ARC cache
size on the fly without rebooting the system. And the system's memory
utilization has been running very high for 2 weeks now and
On Jun 3, 2010, at 3:16 AM, Erik Trimble wrote:
> Expanding a RAIDZ (which, really, is the only thing ZFS can't do right now,
> w/r/t adding disks) requires the Block Pointer (BP) Rewrite functionality
> before it can get implemented.
Strictly speaking BP rewrite is not required to expand a RAI
> "cs" == Cindy Swearingen writes:
okay wtf. Why is this thread still alive?
cs> The mirror mount feature
It's unclear to me from this what state the feature's in:
http://hub.opensolaris.org/bin/view/Project+nfs-namespace/
It sounds like mirror mounts are done but referrals are not,
Cassandra Pugh writes:
> I am trying to set this up as an automount.
>
> Currently I am trying to set mounts for each area, but I have a lot to
> mount.
>
> When I run showmount -e nfs_server I do see all of the shared directories.
I ran into this same problem some months ago... I can't remember
On Thu, Jun 3, 2010 at 1:06 PM, Ketan wrote:
> So you want me to run this on a production global zone running 3 other
> production applications .. :-)
It's probably lower impact than a reboot...
-B
--
Brandon High : bh...@freaks.com
On Thu, Jun 3, 2010 at 5:37 AM, Edward Ned Harvey wrote:
> noob says he's planning to partition the OS drive. Last I knew, easier said
> than done. I'm sure it's not impossible, but it might not be
> straightforward either.
I think he was going to partition it to install Win 7 on one partition
and
On Thu, Jun 3, 2010 at 12:50 PM, Cassandra Pugh wrote:
> The special case here is that I am trying to traverse NESTED zfs systems,
> for the purpose of having compressed and uncompressed directories.
Make sure to use "mount -t nfs4" on your linux client. The standard
"nfs" type only supports nfs
If your other single ZFS shares are working, then I think the answer is
that the Linux client version doesn't support the nested access feature,
I'm guessing.
You could also test the nested access between your Solaris 10 10/09
server and a Solaris 10 10/09 client, if possible, to be sure this i
Hi everyone, I have a question about the zfs list output. I created a large
zpool and then carved out 1TB volumes (zfs create -V 1T vtl_pool/lun##).
Looking at the zfs list output, I'm a little thrown off by the AVAIL amount.
Can anyone clarify for me why it is saying 2T?
NAME USED
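For what it's worth, the AVAIL figure for a pool full of 1T zvols usually comes
down to reservation accounting, since each "zfs create -V" volume gets a
refreservation by default. A quick sketch of how to see it (lun01 below is just
a placeholder for one of the lun## volumes):

# zfs list -r -o space vtl_pool
# zfs get volsize,refreservation,usedbyrefreservation vtl_pool/lun01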
So you want me to run this on a production global zone running 3 other production
applications .. :-)
I am trying to set this up as an automount.
Currently I am trying to set mounts for each area, but I have a lot to
mount.
When I run showmount -e nfs_server I do see all of the shared directories.
-
Cassandra
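If the goal is to avoid maintaining one entry per share, a wildcard autofs map
on the Linux client may do it. A rough sketch, with the map name, server and
export path all invented for illustration:

In /etc/auto.master:
/nfs    /etc/auto.nfs

In /etc/auto.nfs:
*    -fstype=nfs4    nfs_server:/export/pool/&

Each directory name looked up under /nfs is then mounted on demand from the
matching path on the server.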
No, usernames are not an issue. I have many shares that work, but they are
single zfs file systems.
The special case here is that I am trying to traverse NESTED zfs systems,
for the purpose of having compressed and uncompressed directories.
-
Cassandra
frank+lists/z...@linetwo.net said:
> I remember (and this was a few years back, but I don't see why it would be any
> different now) we were trying to add drives 1-2 at a time to medium-sized
> arrays (don't buy the disks until we need them, to hold onto cash), and the
> Netapp performance kept goin
On Thu, Jun 03, 2010 at 12:40:34PM -0700, Frank Cusack wrote:
> On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:
> >I think there is a difference. Just quickly checked netapp site:
> >
> >Adding new disks to a RAID group: If a volume has more than one RAID
> >group, you can specify the RAID group to w
frank+lists/z...@linetwo.net said:
> Well in that case it's invalid to compare against Netapp since they can't do
> it either (seems to be the consensus on this list). Neither zfs nor Netapp
> (nor any product) is really designed to handle adding one drive at a time.
> Normally you have to add an
On 6/3/10 12:06 AM -0400 Roman Naumenko wrote:
I think there is a difference. Just quickly checked netapp site:
Adding new disks to a RAID group: If a volume has more than one RAID
group, you can specify the RAID group to which you are adding disks.
hmm that's a surprising feature to me.
I rem
On 6/3/10 8:45 AM +0200 Juergen Nickelsen wrote:
Richard Elling writes:
And some time before, I had suggested zfs to my buddy for his new
home storage server, but he turned it down since there is no
expansion available for a pool.
Heck, let him buy a NetApp :-)
Definitely a possibility, g
On 6/2/10 11:10 PM -0400 Roman Naumenko wrote:
Well, I didn't explain it very clearly. I meant the size of a raidz array
can't be changed.
For sure zpool add can do the job with a pool. Not with a raidz
configuration.
Well in that case it's invalid to compare against Netapp since they
can't do i
On Thu, June 3, 2010 12:03, Bob Friesenhahn wrote:
> On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
>>
>> In an 8-bay chassis, there are other concerns, too. Do I keep space
>> open
>> for a hot spare? There's no real point in a hot spare if you have only
>> one vdev; that is, 8-drive RAIDZ3 is cl
On Thu, June 3, 2010 13:04, Garrett D'Amore wrote:
> On Thu, 2010-06-03 at 11:49 -0500, David Dyer-Bennet wrote:
>> hot spares in place, but I have the bays reserved for that use.
>>
>> In the latest upgrade, I added 4 2.5" hot-swap bays (which got the
>> system
>> disks out of the 3.5" hot-swap b
On Thu, 2010-06-03 at 11:36 -0700, Ketan wrote:
> Thanks Rick, but this guide does not offer any method to reduce the ARC
> cache size on the fly without rebooting the system. And the system's memory
> utilization has been running very high for 2 weeks now and just 5G of memory is
> free. And the
Hi Cassandra,
The mirror mount feature allows the client to access files and dirs that
are newly created on the server, but this doesn't look like your problem
described below.
My guess is that you need to resolve the username/permission issues
before this will work, but some versions of Linux
On Thu, Jun 3, 2010 at 10:53 AM, Cassandra Pugh wrote:
> I have ensured that they all have a sharenfs option, as I have done with
> other shares.
You can verify this from your linux client with:
# showmount -e nfs_server
> My client is linux. I would assume we are using nfs v3.
> I also notice
Thanks Rick, but this guide does not offer any method to reduce the ARC cache
size on the fly without rebooting the system. The system's memory
utilization has been running very high for 2 weeks now and just 5G of memory is
free. The ARC cache is showing 40G of usage, and it's not decreasing
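For reference, the on-the-fly adjustment people usually mean here is writing a
smaller cap into the running kernel with mdb. This is only a sketch, with a
made-up 32 GB value (0x800000000), and it is a live-kernel write, so it is
something to test outside production first:

# mdb -kw
> arc_c_max/Z 0x800000000
> arc_c/Z 0x800000000
> $q

The persistent equivalent, set zfs:zfs_arc_max in /etc/system, only takes
effect after a reboot, which is exactly what is being avoided here.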
On Thu, 2010-06-03 at 11:49 -0500, David Dyer-Bennet wrote:
> hot spares in place, but I have the bays reserved for that use.
>
> In the latest upgrade, I added 4 2.5" hot-swap bays (which got the system
> disks out of the 3.5" hot-swap bays). I have two free, and that's the
> form-factor SSDs co
Thanks for getting back to me!
I am using Solaris 10 10/09 (update 8)
I have created multiple nested zfs directories in order to compress some but
not all sub directories in a directory.
I have ensured that they all have a sharenfs option, as I have done with
other shares.
This is a special case
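On the server side, something like the following should confirm that the nested
filesystems really are shared (the pool/filesystem names are placeholders):

# zfs get -r sharenfs tank/export
# share

The second command lists what the NFS server is actually exporting right now.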
On Jun 3, 2010, at 10:33 AM, Ketan wrote:
> We have a server running zfs root with 64G RAM; the system has 3
> zones running an Oracle Fusion app, and the zfs cache is using 40G of memory as per
>
> kstat zfs:0:arcstats:size. The system shows only 5G of memory free; the rest is
> taken by the kernel and
On Thu, 2010-06-03 at 12:22 -0400, Dennis Clarke wrote:
> > If you're clever, you'll also try to make sure each side of the mirror
> > is on a different controller, and if you have enough controllers
> > available, you'll also try to balance the controllers across stripes.
>
> Something like this
On Thu, 2010-06-03 at 08:50 -0700, Marty Scholes wrote:
> Maybe I have been unlucky too many times doing storage admin in the 90s, but
> simple mirroring still scares me. Even with a hot spare (you do have one,
> right?) the rebuild window leaves the entire pool exposed to a single failure.
>
We have a server running zfs root with 64G RAM; the system has 3 zones
running an Oracle Fusion app, and the zfs cache is using 40G of memory as per
kstat zfs:0:arcstats:size. The system shows only 5G of memory free; the rest is
taken by the kernel and the 2 remaining zones.
Now my problem is that Fusion
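For monitoring, the raw ARC counters are visible without any extra tools, e.g.:

# kstat -p zfs:0:arcstats:size
# kstat -p zfs:0:arcstats:c
# kstat -p zfs:0:arcstats:c_max

size is the current ARC footprint, c the current target, and c_max the ceiling.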
On Thu, 2010-06-03 at 12:03 -0500, Bob Friesenhahn wrote:
> On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
> >
> > In an 8-bay chassis, there are other concerns, too. Do I keep space open
> > for a hot spare? There's no real point in a hot spare if you have only
> > one vdev; that is, 8-drive RAIDZ
On Thu, 3 Jun 2010, David Dyer-Bennet wrote:
In an 8-bay chassis, there are other concerns, too. Do I keep space open
for a hot spare? There's no real point in a hot spare if you have only
one vdev; that is, 8-drive RAIDZ3 is clearly better than 7-drive RAIDZ2
plus a hot spare. And putting ev
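To make the comparison concrete, the two 8-bay layouts being weighed look
roughly like this (device names are invented):

# zpool create tank raidz3 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 spare c0t7d0

Both give five disks of usable space; the first keeps the eighth disk online as
a third parity device instead of sitting idle as a spare.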
On Jun 3, 2010, at 8:36 AM, Freddie Cash wrote:
> On Wed, Jun 2, 2010 at 8:10 PM, Roman Naumenko wrote:
> Well, I didn't explain it very clearly. I meant the size of a raidz array
> can't be changed.
> For sure zpool add can do the job with a pool. Not with a raidz configuration.
>
> You can't i
On Thu, June 3, 2010 10:50, Garrett D'Amore wrote:
> On Thu, 2010-06-03 at 10:35 -0500, David Dyer-Bennet wrote:
>> On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
>> > Using a stripe of mirrors (RAID0) you can get the benefits of multiple
>> > spindle performance, easy expansion support (just
On Thu, June 3, 2010 10:50, Marty Scholes wrote:
> David Dyer-Bennet wrote:
>> My choice of mirrors rather than RAIDZ is based on
>> the fact that I have
>> only 8 hot-swap bays (I still think of this as LARGE
>> for a home server;
>> the competition, things like the Drobo, tends to have
>> 4 or 5
> If you're clever, you'll also try to make sure each side of the mirror
> is on a different controller, and if you have enough controllers
> available, you'll also try to balance the controllers across stripes.
Something like this ?
# zpool status fibre0
pool: fibre0
state: ONLINE
status: Th
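For reference, that sort of layout is set up at creation time along these lines
(pool and device names below are illustrative, not a reconstruction of the
truncated output above):

# zpool create tank mirror c1t0d0 c2t0d0 \
                    mirror c1t1d0 c2t1d0 \
                    mirror c1t2d0 c2t2d0

Each mirror pairs one disk from controller c1 with one from c2, so losing a
controller still leaves every mirror with one working side.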
David Dyer-Bennet wrote:
> My choice of mirrors rather than RAIDZ is based on
> the fact that I have
> only 8 hot-swap bays (I still think of this as LARGE
> for a home server;
> the competition, things like the Drobo, tends to have
> 4 or 5), that I
> don't need really large amounts of storage (af
On Thu, 2010-06-03 at 10:35 -0500, David Dyer-Bennet wrote:
> On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
> > Using a stripe of mirrors (RAID0) you can get the benefits of multiple
> > spindle performance, easy expansion support (just add new mirrors to the
> > end of the raid0 stripe), and
On Wed, Jun 2, 2010 at 8:10 PM, Roman Naumenko wrote:
> Well, I didn't explain it very clearly. I meant the size of a raidz array
> can't be changed.
> For sure zpool add can do the job with a pool. Not with a raidz
> configuration.
>
You can't increase the number of drives in a raidz vdev, no.
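The pool itself grows by adding whole vdevs alongside the existing one; a
sketch with invented device names:

# zpool add tank raidz2 c5t0d0 c5t1d0 c5t2d0 c5t3d0 c5t4d0 c5t5d0

The new raidz2 set becomes another top-level vdev and the pool stripes across
both, while the original raidz vdev keeps its width.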
On Thu, June 3, 2010 10:15, Garrett D'Amore wrote:
> Using a stripe of mirrors (RAID0) you can get the benefits of multiple
> spindle performance, easy expansion support (just add new mirrors to the
> end of the raid0 stripe), and 100% data redundancy. If you can afford
> to pay double for your
Using a stripe of mirrors (RAID0) you can get the benefits of multiple
spindle performance, easy expansion support (just add new mirrors to the
end of the raid0 stripe), and 100% data redundancy. If you can afford
to pay double for your storage (the cost of mirroring), this is IMO the
best soluti
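Concretely, growing such a pool is a one-liner (device names invented):

# zpool add tank mirror c3t0d0 c3t1d0

which appends another two-way mirror to the stripe without touching the
existing vdevs.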
> Expanding a RAIDZ (which, really, is the only thing
> that ZFS can't do right
> now, w/r/t adding disks) requires the Block Pointer
> (BP) Rewrite
> functionality before it can get implemented.
>
> We've been promised BP rewrite for awhile, but I have
> no visibility as
> to where development on
On Wed, June 2, 2010 17:54, Roman Naumenko wrote:
> Recently I talked to a co-worker who manages NetApp storage. We discussed
> size changes for pools in zfs and aggregates in NetApp.
>
> And some time before, I had suggested zfs to my buddy for his new home
> storage server, but he turned it do
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Frank Contrepois
>
> r...@fsbu ~# zfs destroy fsbu01-zp/mailbo...@zfs-auto-snap:daily-2010-
> 04-14-00:00:00
> cannot destroy 'fsbu01-zp/mailbo...@zfs-auto-snap:daily-2010-04-14-
> 00:00:00': d
> From: Cindy Swearingen [mailto:cindy.swearin...@oracle.com]
>
> A temporary clone is created for an incremental receive and
> in some cases, is not removed automatically.
>
> 1. Determine clone names:
> # zdb -d | grep %
>
> 2. Destroy identified clones:
> # zfs destroy
>
> 3. Destroy snaps
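Putting those steps together, with entirely made-up pool, clone and snapshot
names (the grep is looking for the '%' that the temporary receive clones carry
in their names):

# zdb -d tank | grep %
# zfs destroy tank/fs/%recv
# zfs destroy tank/fs@daily-2010-04-14-00:00:00

Whatever name the grep actually reports is what goes into the first zfs destroy.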
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Ross Walker
>
> To get around this create a basic NTFS partition on the new third
> drive, copy the data to that drive and blow away the dynamic mirror.
Better yet, build the opensolaris machi
Erik Trimble said the following, on 06/02/2010 07:16 PM:
Roman Naumenko wrote:
Recently I talked to a co-worker who manages NetApp storage. We
discussed size changes for pools in zfs and aggregates in NetApp.
And some time before, I had suggested zfs to my buddy for his new
home storage serve
Brandon High said the following, on 06/02/2010 11:47 PM:
On Wed, Jun 2, 2010 at 3:54 PM, Roman Naumenko wrote:
And some time before, I had suggested zfs to my buddy for his new home storage
server, but he turned it down since there is no expansion available for a pool.
There's no e
Richard Elling said the following, on 06/02/2010 08:50 PM:
On Jun 2, 2010, at 3:54 PM, Roman Naumenko wrote:
Recently I talked to a co-worker who manages NetApp storage. We discussed size
changes for pools in zfs and aggregates in NetApp.
And some time before, I had suggested zfs to my buddy