I found a very nice document that describes the steps to create a kernel dump:
"The Solaris Operating System on x86 Platforms - Crashdump Analysis
Operating System Internals"
http://opensolaris.org/os/community/documentation/files/book.pdf
-> 7.2.2 Forcing system crashdumps
Rayson
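For reference, the mechanics in that section come down to a few standard
Solaris commands (a sketch; assumes a dump device is already configured):

  # check the dump device and savecore directory
  dumpadm
  # force a crash dump on the way down (halt -d also works)
  reboot -d
  # or capture a live dump without taking the box down
  savecore -L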
James C. McPherson wrote:
> The T3B with fw v3.x (I think) and the T4 (aka 6020 tray) allow
> more than two volumes, but you're still quite restricted in what
> you can do with them.
You are limited to two raid groups, with slices on top of those raid
groups presented as LUNs. I'd just st
Darren Dunham wrote:
>> My meta* commands all return:
>> "... there are no existing databases"
> Then you're not using SVM volumes.
Correct. No metadb, no SVM.
Running Solaris 10 Update 3 on an X4500, I have found that it is possible
to reproducibly block all writes to a ZFS pool by running "chgrp -R"
on any large filesystem in that pool. As can be seen in the zpool
iostat output below, after about 10 seconds of running the chgrp command all
writes to t
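For anyone trying to reproduce this, the sequence described is roughly
(pool and filesystem names invented here):

  # kick off a recursive group change on a large filesystem in the pool
  chgrp -R staff /tank/bigfs &
  # watch pool throughput at 1-second intervals; writes reportedly
  # stall after about 10 seconds
  zpool iostat tank 1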
Russ Petruzzelli wrote:
> Thanks Darren,
>
> re: "You can use ZFS on that volume, but it will have no
> redundancy at the ZFS level, only at the disk level controlled by
> the T3."
>
> I believe it is an older T3.
Performance-wise, these are pretty wimpy. You should be able t
On Mon, 2007-07-16 at 18:19 -0700, Russ Petruzzelli wrote:
> Or am I just getting myself into shark-infested waters?
Configurations that might be interesting to play with
(emphasis here on "play"...):
1) use the T3's management CLI to reconfigure the T3 into two raid-0
volumes, and mirror them w
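If the T3 is split into two raid-0 LUNs that way, the ZFS half of the
experiment would presumably be (device names invented):

  # mirror the two T3 raid-0 LUNs at the ZFS level
  zpool create t3pool mirror c2t0d0 c2t1d0
  zpool status t3pool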
> I'm using this system in a test lab, so data integrity is not too
> important for me. I mainly want to see what kind of performance I can
> get out of the zfs/T3 setup.
> I do see a note on p. 31 of the zfs admin guide that recommends against
> this configuration (but says it is possible).
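For what it's worth, the configuration the guide warns about is just a
single-LUN pool (device name invented); ZFS then has checksums but no
redundancy of its own:

  zpool create t3pool c2t1d0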
Thanks Darren,
re: "You can use ZFS on that volume, but it will have no redundancy
at the ZFS level, only at the disk level controlled by the T3."
I believe it is an older T3.
I'm using this system in a test lab, so data integrity is not too
important for me. I mainly want to see what
My meta* commands all return:
"... there are no existing databases"
This is the T3 array:
[EMAIL PROTECTED]: format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
.
2. c1t1d0
/[EMAIL PROTECTED],0/[EMAIL PROTECTED],70/SUNW,[EMAIL
PROTECTED]/[EMAIL PROTECTED],0/[EMA
Does anyone have an update on this bugfix?
I'm trying to use some 3124 cards in production, and it's painful!
Thanks,
Murray
I'm brand new to zfs.
I have a system with a T3 array that I want to configure with zfs.
The T3 array had been set up with the Sun Volume Manager in one big
250 GB volume.
I want to remove this volume and set up zfs.
My problem is the system has had a new OS installed (Sol10u3) since the
volume
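One plausible path here (device name invented): the reinstalled OS has
no SVM state databases, so SVM no longer claims the old volume and the
LUN can be used directly:

  # confirm there is no SVM state
  metadb -i
  # create the pool straight on the T3 LUN
  zpool create t3pool c1t1d0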
Darren Dunham wrote:
>>> If it helps at all, we're having a similar problem. Any LUNs
>>> configured with their default owner to be SP B don't get along with
>>> ZFS. We're running on a T2000, with Emulex cards and the ssd driver.
>>> MPxIO seems to work well for most cases, but the SAN guys are not
>>> comf
Carisdad wrote:
> Peter Tribble wrote:
>
>> # powermt display dev=all
>> Pseudo name=emcpower0a
>> CLARiiON ID=APM00043600837 []
>> Logical device ID=600601600C4912003AB4B247BA2BDA11 [LUN 46]
>> state=alive; policy=CLAROpt; priority=0; queued-IOs=0
>> Owner: default=SP B, current=SP B
>>
Hello Magesh,
Monday, July 2, 2007, 4:12:11 PM, you wrote:
MR> We are looking at alternatives to VXVM/VxFS. One of the
MR> features we liked in Veritas, apart from the obvious ones, is
MR> the ability to call the disks by name and group them into a disk
MR> group. Especially in SAN base
Sorry, my question is not clear enough. These pools contain a zone each.
Mike Salehi wrote:
> Greetings,
>
> Given ZFS pools, how does one import these pools to another node in
> the cluster?
zpool export
zpool import
-- richard
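In practice the handoff is just (pool name invented):

  # on the node that currently owns the pool
  zpool export tank
  # on the node taking it over
  zpool import tank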
I had originally considered something similar, but for ZFS snapshot
abilities I am leaning more towards zfs-hosted NFS. Most of the other VMs
(FreeBSD, for example) can install onto NFS, it wouldn't actually be going
over the network, and it would allow file-level restore instead of
drive-le
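A sketch of that layout (names invented; sharenfs is a standard ZFS
property):

  # one filesystem per VM so each can be snapshotted and restored alone
  zfs create tank/vms
  zfs create tank/vms/freebsd01
  zfs set sharenfs=rw tank/vms/freebsd01
  # file-level restore point before risky changes
  zfs snapshot tank/vms/freebsd01@pre-upgrade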
I'm going to be setting up about 6 virtual machines (Windows & Linux) in
either VMware Server or Xen on a CentOS 5 box. I'd like to connect to a ZFS
iSCSI target to store the VM images and be able to use ZFS snapshots for
backup. I have no experience with ZFS, so I have a couple of questions
befor
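On the Solaris side, exporting a zvol as an iSCSI target was a short
exercise at the time (names and size invented):

  # create a 100 GB volume and share it with the built-in iSCSI target
  zfs create -V 100g tank/vmstore
  zfs set shareiscsi=on tank/vmstore
  # point-in-time backup of the whole VM store
  zfs snapshot tank/vmstore@backup1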
Greetings,
Given ZFS pools, how does one import these pools to another node in
the cluster?
Mike
Hi,
Can one increase (or decrease) a ZFS file system like the Veritas one
(vxresize)?
What is the command-line syntax, please?
(You can just make up an example.)
For example, in Veritas:
"/etc/vx/bin/vxresize -x -F vxfs -g DG1 volume_name new_total_size"
will increase "volume_name"
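There is no single ZFS equivalent, because filesystems draw space from
the pool automatically; the closest knobs are (names and sizes invented):

  # cap or guarantee space for a filesystem
  zfs set quota=50g tank/fs
  zfs set reservation=20g tank/fs
  # grow the pool itself by adding a device
  zpool add tank c3t0d0
  # resize an emulated volume (zvol)
  zfs set volsize=20g tank/vol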
Scott Lovenberg wrote:
>> eric kustarz wrote:
>>> On Jul 9, 2007, at 11:21 AM, Scott Lovenberg wrote:
>>>> You sir, are a gentleman and a scholar! Seriously, this is exactly
>>>> the information I was looking for, thank you very much!
>>>> Would you happen to know if this has improved since build 6
Is there any way to fix this? I actually tried to destroy the pool and
create a new one, but it doesn't let me. Whenever I try, I get the
following error:
[EMAIL PROTECTED]:/var/crash# zpool create -f pool c0d0s5
internal error: No such process
Abort (core dumped)
After that zpool list
Lori Alt wrote:
>> Since it seems that we won't be swapping on ZVOLs, I need to find out
>> more about how we will be providing swap and dump space in a root pool.
> The current plan is to provide what we're calling (for lack of a
> better term; I'm open to suggestions) a "pseudo-zvol". It's
> p
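For contrast, ordinary zvol-backed swap and dump (what the pseudo-zvol
is meant to improve on) look like this where supported (pool name
invented):

  zfs create -V 2g rpool/swap
  swap -a /dev/zvol/dsk/rpool/swap
  zfs create -V 2g rpool/dump
  dumpadm -d /dev/zvol/dsk/rpool/dump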