These days it's like the dot-com revolution round two. The
only difference is that this time there will be no crash.
Everyone is wiser, and the internet has already proven that
it is THE place to do business.
Acquisitions are happening at a record pace: Google
picking up YouTube, News Corp picki
Hello,
Thanks.
Here is the needed info:
zpool status
pool: tank
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          c1d0s6    ONLINE       0     0     0
errors: No known data errors
"df -h" retu
On Dec 2, 2006, at 12:35 PM, Dick Davies wrote:
On 02/12/06, Chad Leigh -- Shire.Net LLC <[EMAIL PROTECTED]> wrote:
On Dec 2, 2006, at 10:56 AM, Al Hopper wrote:
> On Sat, 2 Dec 2006, Chad Leigh -- Shire.Net LLC wrote:
>> On Dec 2, 2006, at 6:01 AM, [EMAIL PROTECTED] wrote:
>> When you
For background on what this is, see:
http://www.opensolaris.org/jive/message.jspa?messageID=24416#24416
http://www.opensolaris.org/jive/message.jspa?messageID=25200#25200
=
zfs-discuss 11/16 - 11/30
=
Size of all threads during per
Luke Schwab wrote:
I simply created a zpool with an array disk like
hosta# zpool create testpool c6td0    // runs within a second
hosta# zpool export testpool          // runs within a second
hostb# zpool import testpool          // takes 5-7 minutes
If STMS (mpxio) is disabled, it takes 45-60 seconds. I
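To narrow down where the import time goes, a rough sketch of what could be
compared on hostb (just a suggestion, not a diagnosis):

stmsboot -L                  # list MPxIO (STMS) device name mappings, if enabled
time zpool import testpool   # measure the import itself
zpool status testpool        # confirm which device paths the pool ended up on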
On Wed, Dec 06, 2006 at 12:35:58PM -0800, Jim Hranicky wrote:
> > If those are the original path ids, and you didn't
> > move the disks on the bus? Why is the is_spare flag
>
> Well, I'm not sure, but these drives were set as spares in another pool
> I deleted -- should I have done something to
Hold fire on the re-init until one of the devs chips in, maybe I'm barking up
the wrong tree ;)
--a
> If those are the original path ids, and you didn't
> move the disks on the bus? Why is the is_spare flag
Well, I'm not sure, but these drives were set as spares in another pool
I deleted -- should I have done something to the drives (fdisk?) before
rearranging it?
The rest of the options are
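If it helps, a minimal sketch of how the leftover ZFS labels on the old spares
could be inspected before deciding whether they need to be wiped (device name
is a placeholder):

zdb -l /dev/dsk/c1t2d0s0    # dump the ZFS vdev labels still present on the disk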
Hi Luke,
Is the 4884 using two or four ports? Also, how many FSs are involved?
Best Regards,
Jason
On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
I, too, experienced a long delay while importing a zpool on a second machine. I
do not have any filesystems in the pool. Just the Solaris 10 Op
I simply created a zpool with an array disk like
hosta# zpool create testpool c6td0 // runs within a second
hosta# zpool export testpool // runs within a second
hostb# zpool import testpool // takes 5-7 minutes
If STMS (mpxio) is disabled, it takes 45-60 seconds. I tested this with
LUN
I, too, experienced a long delay while importing a zpool on a second machine. I
do not have any filesystems in the pool. Just the Solaris 10 Operating system,
Emulex 10002DC HBA, and a 4884 LSI array (dual attached).
I don't have any file systems created but when STMS(mpxio) is enabled I see
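For what it's worth, a minimal sketch of how MPxIO can be toggled and verified
on Solaris 10 when comparing the two cases (a reconfiguration reboot is needed
after changing it):

stmsboot -e        # enable MPxIO (STMS) for FC HBAs, then reboot
stmsboot -d        # or disable it again, then reboot
mpathadm list lu   # with MPxIO enabled, show each LUN and its path count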
Hi Luke,
That's really strange. We did the exact same thing moving between two
hosts (export/import) and it took maybe 10 secs. How big is your
zpool?
Best Regards,
Jason
On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
Doug,
I should have posted the reason behind this posting.
I have 2 v2
Jim Davis wrote:
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs? Your capacity and performance will go up with each
RAID-Z vdev you add.
Thanks, that's an interesting suggestion.
Have you tried using the automounter as suggested b
Edward Pilatowicz wrote:
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come u
Hi Doug,
Actually, our config is:
3 RAID-5 volume groups on the array (with 2 LUNs each).
1 RAID-Z zpool per physical host composed of one LUN from each of the
3 volume groups.
This allows for loss of 3 drives in a worst case, and 4 drives in a best case.
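To make that concrete, a minimal sketch of how such a pool might be created
(the LUN names are placeholders, one from each of the three volume groups):

zpool create tank raidz c2t0d0 c3t0d0 c4t0d0   # one RAID-Z vdev, one LUN per volume group
zpool status tank                              # verify the layout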
-J
On 12/6/06, Douglas Denny <[EMAIL
On 12/6/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
The configuration is a T2000 connected to a StorageTek FLX210 array
via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z
the LUNs across 3 array volume groups. For performance reasons we're
in the process of changing to
On Wed, Dec 06, 2006 at 07:28:53AM -0700, Jim Davis wrote:
> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach. Two issues have come up in the
> discussion
Hi Jim,
That looks interesting. I'm not a ZFS expert by any means, but look at
some of the properties of the child elements of the mirror:
version=3
name='zmir'
state=0
txg=770
pool_guid=5904723747772934703
vdev_tree
    type='root'
    id=0
    guid=5904723747772934703
    children[0]
        type='mirror'
        id
Hi Doug,
The configuration is a T2000 connected to a StorageTek FLX210 array
via Qlogic QLA2342 HBAs and Brocade 3850 switches. We currently RAID-Z
the LUNs across 3 array volume groups. For performance reasons we're
in the process of changing to striped zpools across RAID-1 volume
groups. The pe
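As a rough illustration of the target layout, a minimal sketch (LUN names are
placeholders, each LUN backed by a hardware RAID-1 volume group on the array):

zpool create fastpool c2t1d0 c3t1d0 c4t1d0   # plain dynamic stripe across the RAID-1 LUNs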
eric kustarz wrote:
What about adding a whole new RAID-Z vdev and dynamically striping across
the RAID-Zs? Your capacity and performance will go up with each RAID-Z
vdev you add.
Thanks, that's an interesting suggestion.
Have you tried using the automounter as suggested by the linux faq?:
h
On 12/6/06, Jason J. W. Williams <[EMAIL PROTECTED]> wrote:
We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is instantaneous when a write operation occurs
after a path is pulled. Our environment is similar to yours, dual-FC
ports on the host, and 4 FC port
One of our file servers internal to Sun reproduces this running nv53;
here is the dtrace output:
unix`mutex_vector_enter+0x120
zfs`metaslab_group_alloc+0x1a0
zfs`metaslab_alloc_dva+0x10c
zfs`metaslab_alloc+0x3c
zfs`zio_dv
Hi Luke,
We've been using MPXIO (STMS) with ZFS quite solidly for the past few
months. Failover is instantaneous when a write operation occurs
after a path is pulled. Our environment is similar to yours, dual-FC
ports on the host, and 4 FC ports on the storage (2 per controller).
Depending on y
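In case it helps with verifying the multipath setup, a quick sketch of what we
check after pulling a path (assumes MPxIO is enabled):

mpathadm list lu   # each MPxIO LUN with its count of operational paths
zpool status -x    # confirm the pool still reports healthy after the pull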
Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in
the discussion
- Adding new disks to a RAID-Z pool (Netapps
You can add more disks to a pool that is in raid-z; you just can't
add disks to the existing raid-z vdev.
cd /usr/tmp
mkfile -n 100m 1 2 3 4 5 6 7 8 9 10
zpool create t raidz /usr/tmp/1 /usr/tmp/2 /usr/tmp/3
zpool status t
zfs list t
zpool add -f t raidz2 /usr/tmp/4 /usr/tmp/5 /usr/tmp/6 /usr
>- The default scheme of one filesystem per user runs into problems with
>linux NFS clients; on one linux system, with 1300 logins, we already have
>to do symlinks with amd because linux systems can't mount more than about
>255 filesystems at once. We can of course just have one filesystem
>e
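A minimal sketch of the automounter approach being suggested (the map entry
and server name are hypothetical; a wildcard key in an indirect autofs map
mounts each user's filesystem on demand rather than all 1300 at once):

# /etc/auto_home on the clients ("zfshost" is a placeholder server name)
*   zfshost:/export/home/&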
Thanks so much. Anyway, resilvering ran its course and I got everything resolved:
zpool status -v
pool: mypool
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        mypool      ONLINE       0     0     0
          mirror    ONLINE       0     0     0
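As a belt-and-braces step after the resilver, a quick sketch of what I usually
run to confirm the mirror really is clean:

zpool scrub mypool        # re-read every block and verify it against its checksum
zpool status -v mypool    # watch scrub progress and check for new errors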
Hi,
I am running Solaris 10 ZFS and I do not have STMS multipathing enabled. I have
dual FC connections to storage using two ports on an Emulex HBA.
The Solaris ZFS Administration Guide says that ZFS tracks disks by both
their path and their device ID. If a disk is switched between co
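Not an authoritative answer, but a minimal sketch of what I would try after
moving a disk between controllers, so ZFS can rediscover it by device ID
(pool name is a placeholder):

zpool export tank    # quiesce the pool before recabling
zpool import tank    # on import, ZFS matches devices by devid even if the path changed
zpool status tank    # confirm the new device path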
Still ... I don't think a core file is appropriate. Sounds like a bug is
in order if one doesn't already exist. ("zpool dumps core when missing
devices are used" perhaps?)
Wee Yeh Tan wrote:
Ian,
The first error is correct in that zpool create will not, unless
forced, create a file system if
On Wed, 6 Dec 2006, Jim Davis wrote:
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in the
discussion
- Adding new disks to a RAI
On Wed, 6 Dec 2006, Jim Davis wrote:
> We have two aging Netapp filers and can't afford to buy new Netapp gear,
> so we've been looking with a lot of interest at building NFS fileservers
> running ZFS as a possible future approach. Two issues have come up in the
> discussion
>
> - Adding new disk
We have two aging Netapp filers and can't afford to buy new Netapp gear,
so we've been looking with a lot of interest at building NFS fileservers
running ZFS as a possible future approach. Two issues have come up in the
discussion
- Adding new disks to a RAID-Z pool (Netapps handle adding new
Ian,
The first error is correct in that zpool create will not, unless
forced, create a file system if it knows that another filesystem
resides in the target vdev.
The second error was caused by your removal of the slice.
What I find disconcerting is that the zpool was created anyway.
Can you provide the res
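For reference, a minimal sketch of the behaviour being described (device name
and wording are illustrative, not an exact transcript of the error):

zpool create zmir c0t1d0s6      # refuses: the slice appears to contain an existing ufs filesystem
zpool create -f zmir c0t1d0s6   # -f overrides the check and creates the pool anyway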
Here's the output of zdb:
zmir
    version=3
    name='zmir'
    state=0
    txg=770
    pool_guid=5904723747772934703
    vdev_tree
        type='root'
        id=0
        guid=5904723747772934703
        children[0]
            type='mirror'
            id=0
            guid=1506718
Hi
We allocate in LUN sizes of 60 GB, so to get to 120 GB, but this is not
limited to 2 disks; it could be hundreds of LUNs. We currently plan on putting
something in the region of 20 to 30 TB in the pools using 60 GB LUN sizes.
But we will need share support at some point due to customer demand of us
On 06/12/2006 at 05:05:55 +0100, Flemming Danielsen wrote:
> Hi
> I have 2 questions on the use of ZFS.
> How do I ensure I have site redundancy using zfs pools? As I see it, it only
> ensures mirrors between 2 disks. I have 2 HDS arrays, one at each site, and I
> want to be able to lose one of them and
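Not a definitive answer, but a minimal sketch of how site redundancy is
usually expressed in a zpool (device names are placeholders; each mirror pairs
one LUN from the HDS at site A with one from the HDS at site B, so either
array can be lost without losing the pool):

zpool create sitepool mirror c2t0d0 c4t0d0 mirror c2t1d0 c4t1d0
zpool status sitepool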
Hello,
I am trying to create a ZFS file system according to the
"Creating a Basic ZFS File System" section of Sun's ZFS documentation.
The problem is that the partition I am trying to work with has a ufs
filesystem on it; it is in fact empty and does not contain any
file