Richard Elling wrote:
Darren J Moffat wrote:
So with that in mind this is my plan so far.
On the target (the V880):
Put all the 12 36G disks into a single zpool (call it iscsitpool).
Use iscsitadm to create 2 targets of 202G each.
On the initiator (the v40z):
Use iscsiadm to discover (import)
Darren J Moffat wrote:
So with that in mind this is my plan so far.
On the target (the V880):
Put all the 12 36G disks into a single zpool (call it iscsitpool).
Use iscsitadm to create 2 targets of 202G each.
On the initiator (the v40z):
Use iscsiadm to discover (import) the 2 202G targets.
Cre
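A rough sketch of the commands that plan maps to, one way of doing it with zvols as the backing stores; every disk name, IP address and target name below is made up, and the final pool layout on the initiator is only a guess at how the truncated step continues:

On the target (V880):
    zpool create iscsitpool c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
        c1t6d0 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0
    zfs create -V 202g iscsitpool/tgt0
    zfs create -V 202g iscsitpool/tgt1
    iscsitadm create target -b /dev/zvol/rdsk/iscsitpool/tgt0 tgt0
    iscsitadm create target -b /dev/zvol/rdsk/iscsitpool/tgt1 tgt1

On the initiator (v40z):
    iscsiadm add discovery-address 192.168.1.10:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi                    # create device nodes for the new LUNs
    zpool create tank c2t1d0 c2t2d0      # stripe the two 202G LUNs (layout assumed)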
I have 12 36G disks (in a single D2 enclosure) connected to a V880 that
I want to "share" to a v40z that is on the same gigabit network switch.
I've already decided that NFS is not the answer - the performance of ON
consolidation builds over NFS just doesn't cut it for me.
[However think about
Torrey,
On 8/1/06 10:30 AM, "Torrey McMahon" <[EMAIL PROTECTED]> wrote:
> http://www.sun.com/storagetek/disk_systems/workgroup/3510/index.xml
>
> Look at the specs page.
I did.
This is 8 trays, each with 14 disks and two active Fibre Channel
attachments.
That means that 14 disks, each with a
On Aug 1, 2006, at 4:25 PM, Eric Ziegast wrote:
Are hot spares implemented yet?
I got Solaris 10 6/06 installed and set up a ZFS pool and ZFS
filesystem.
So I suspect I don't have support for hot spares in Solaris 10
6/06 (aka U2).
Is there a Nevada version which supports zpool spares
Eric Ziegast wrote:
Are hot spares implemented yet?
Yep, they are going to be in U3; they are not in U2 (Solaris 10 6/06).
Hot spares were integrated in Nevada build 42.
check out:
http://www.opensolaris.org/os/community/zfs/version/3/
eric
I got Solaris 10 6/06 installed and set
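For anyone already on build 42 or later (ZFS version 3), spares are just extra arguments to zpool; the pool and disk names here are examples only:

    # add a hot spare to an existing pool
    zpool add data spare c2t3d0

    # or build a pool with a spare from the start
    zpool create data raidz c0t1d0 c0t2d0 c0t3d0 spare c0t4d0

zpool status then shows the device under its own "spares" heading, and it is pulled in when a disk in the pool faults.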
Hello Robert,
Wednesday, August 2, 2006, 12:22:11 AM, you wrote:
RM> Hello Neil,
RM> Tuesday, August 1, 2006, 8:45:02 PM, you wrote:
NP>> Robert Milkowski wrote On 08/01/06 11:41,:
>>> Hello Robert,
>>>
>>> Monday, July 31, 2006, 12:48:30 AM, you wrote:
>>>
>>> RM> Hello ZFS,
>>>
>>> RM>
Are hot spares implemented yet?
I got Solaris 10 6/06 installed and set up a ZFS pool and ZFS filesystem.
# zpool status
  pool: data
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        data
Hello Richard,
Monday, July 31, 2006, 9:23:46 PM, you wrote:
RE> Robert Milkowski wrote:
>> Hello Richard,
>>
>> Monday, July 31, 2006, 6:29:03 PM, you wrote:
>>
>> RE> Malahat Qureshi wrote:
Does anyone have a comparison of zfs vs. vxfs? I'm working on
a presentation for my mana
Hello Neil,
Tuesday, August 1, 2006, 8:45:02 PM, you wrote:
NP> Robert Milkowski wrote On 08/01/06 11:41,:
>> Hello Robert,
>>
>> Monday, July 31, 2006, 12:48:30 AM, you wrote:
>>
>> RM> Hello ZFS,
>>
>> RM>System was rebooted and after reboot server again
>>
>> RM> System is snv_39, SPA
On Aug 1, 2006, at 14:18, Torrey McMahon wrote:
(I hate it when I hit the Send button when trying to change windows)
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
The correct comparison is done when all the factors are taken
into account. Making blank
Neil Perrin wrote:
> I suppose if you know
> the disk only contains zfs slices then write caching could be
> manually enabled using "format -e" -> cache -> write_cache -> enable
When will we have write cache control over ATA/SATA drives? :-).
- --
Je
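For disks that are wholly owned by ZFS, the menu path Neil mentions looks roughly like this (interactive; the disk you pick is whichever one backs the pool):

    # format -e
    (choose the disk from the menu)
    format> cache
    cache> write_cache
    write_cache> display      # report whether the write cache is enabled
    write_cache> enable
    write_cache> quit
    cache> quit
    format> quit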
I've submitted these to Roch and co before on the NFS list and off
list. My favorite case was writing 6250 8k files (randomly generated)
over NFS from a solaris or linux client. We originally were getting
20K/sec when I was using RAIDZ, but between switching to RAID-5 backed
iscsi luns in a zpool
Robert Milkowski wrote On 08/01/06 11:41,:
Hello Robert,
Monday, July 31, 2006, 12:48:30 AM, you wrote:
RM> Hello ZFS,
RM>System was rebooted and after reboot server again
RM> System is snv_39, SPARC, T2000
RM> bash-3.00# ptree
RM> 7 /lib/svc/bin/svc.startd -s
RM> 163 /sbin/sh
(I hate it when I hit the Send button when trying to change windows)
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
The correct comparison is done when all the factors are taken into
account. Making blanket statements like, "ZFS & JBODs are always ideal"
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
The correct comparison is done when all the factors are taken into
account. Making blanket statements like, "ZFS & JBODs are always ideal"
or "ZFS on top of a raid controller is a bad idea" or "SATA drives ar
On Tue, Aug 01, 2006 at 01:31:22PM -0400, Torrey McMahon wrote:
>
> The correct comparison is done when all the factors are taken into
> account. Making blanket statements like, "ZFS & JBODs are always ideal"
> or "ZFS on top of a raid controller is a bad idea" or "SATA drives are
> good enoug
Hello Robert,
Monday, July 31, 2006, 12:48:30 AM, you wrote:
RM> Hello ZFS,
RM>System was rebooted and after reboot server again
RM> System is snv_39, SPARC, T2000
RM> bash-3.00# ptree
RM> 7 /lib/svc/bin/svc.startd -s
RM> 163 /sbin/sh /lib/svc/method/fs-local
RM> 254 /usr/sbi
Frank Cusack wrote:
On July 31, 2006 11:32:15 PM -0400 Torrey McMahon
<[EMAIL PROTECTED]> wrote:
You're comparing apples to a crate of apples. A more useful
comparison would be something along
the lines of a single R0 LUN on a 3510 with controller to a single
3510-JBOD with ZFS across all the
d
Luke Lonergan wrote:
Torrey,
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Monday, July 31, 2006 8:32 PM
You might want to check the specs of the 3510. In some
configs you
only get 2 ports. However, in others you can get 8.
Really? 8 acti
Joe Little wrote:
On 7/31/06, Dale Ghent <[EMAIL PROTECTED]> wrote:
On Jul 31, 2006, at 8:07 PM, eric kustarz wrote:
>
> The 2.6.x Linux client is much nicer... one thing fixed was the
> client doing too many commits (which translates to fsyncs on the
> server). I would still recommend the S
For those outside of Sun you can get the same at
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6gl?a=view
-Angelo
On 1 Aug 2006, at 11:37, Cindy Swearingen wrote:
Hi Patrick,
Here's a pointer to the volume section in the ZFS admin guide:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg
On Tue, Aug 01, 2006 at 05:31:24PM +0200, Thomas Maier-Komor wrote:
>
> Thanks for the pointer. This looks exactly like what I am currently
> missing. The idea of having permission sets looks beneficial, too.
>
> Will this go into the next or a following update release of Solaris 10
> or will it
Sorry, here's the correct URL:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6gl?a=view
Cindy
Al Hopper wrote:
On Tue, 1 Aug 2006, Cindy Swearingen wrote:
Hi Patrick,
Here's a pointer to the volume section in the ZFS admin guide:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6gl?a=vi
Hi Patrick,
Here's a pointer to the volume section in the ZFS admin guide:
http://docsview.sfbay/app/docs/doc/817-2271/6mhupg6gl?a=view
I welcome any comments--
Cindy
Patrick Petit wrote:
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:10:44PM +0200, Patrick Petit wrote:
Hi There,
I l
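The section that URL points at is about emulated volumes (zvols), which is the piece Patrick is asking for: a block device carved out of a pool that a non-ZFS consumer, such as a Xen domU, can use directly. A minimal sketch with made-up names:

    # create a 10 GB volume inside an existing pool
    zfs create -V 10g tank/domu1-root

    # it appears as ordinary block and raw devices
    ls -l /dev/zvol/dsk/tank/domu1-root
    ls -l /dev/zvol/rdsk/tank/domu1-root

The domU (or newfs, or swap) sees a plain disk, while the pool underneath still provides the mirroring, dynamic striping and snapshots.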
Eric Schrock wrote:
On Tue, Aug 01, 2006 at 01:10:44PM +0200, Patrick Petit wrote:
Hi There,
I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS
capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen
domU file systems that are not ZFS. Couldn't find a
Today you can give someone the 'ZFS File System Management' role to
allow them to manipulate ZFS datasets. For finer grained control, we're
planning on building it directly into ZFS:
http://www.opensolaris.org/jive/thread.jspa?threadID=11130&tstart=15
Feel free to comment on the details in the a
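In the meantime, the RBAC route looks roughly like this; the username is an example, and 'ZFS File System Management' is delivered as a rights profile that can be handed to a user or wrapped in a role:

    # grant the profile directly to a user
    usermod -P "ZFS File System Management" alice

    # the user then runs dataset operations through pfexec
    pfexec zfs snapshot tank/home/alice@today
    pfexec zfs create tank/home/alice/scratch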
On Aug 1, 2006, at 03:43, [EMAIL PROTECTED] wrote:
So what does this exercise leave me thinking? Is Linux 2.4.x really
screwed up in NFS-land? This Solaris NFS replaces a Linux-based NFS
server that the clients (linux and IRIX) liked just fine.
Yes; the Linux NFS server and client work tog
On Tue, Aug 01, 2006 at 01:10:44PM +0200, Patrick Petit wrote:
> Hi There,
>
> I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS
> capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen
> domU file systems that are not ZFS. Couldn't find an answer whether
Hi everybody,
this question has probably been asked before, but I couldn't find an answer to
it anywhere...
What privileges are required to be able to do a snapshot as a regular user? Is
it already possible to pass ownership of a ZFS filesystem to a specific user,
so that he's able to do a snap
Bill Moore <[EMAIL PROTECTED]> wrote:
> On Mon, Jul 31, 2006 at 06:08:04PM -0400, Jan Schaumann wrote:
> > # echo '::offsetof vdev_t vdev_nowritecache' | mdb -k
> > offsetof (vdev_t, vdev_nowritecache) = 0x4c0
>
> Ok, then try this:
>
> echo '::spa -v' | mdb -k | awk '/dev.dsk/{print $1"+4c0/
Hi There,
I looked at the ZFS admin guide in an attempt to find a way to leverage ZFS
capabilities (storage pool, mirroring, dynamic striping, etc.) for Xen
domU file systems that are not ZFS. Couldn't find an answer whether ZFS
could be used only as a "regular" volume manager to create logical
>Right, but I never had this speed problem when the NFS server was
>running Linux on hardware that had a quarter of the CPU power and
>half the disk i/o capacity that the new Solaris-based one has.
>So either Linux's NFS client was more compatible with the bugs in
>Linux's NFS server and
>So what does this exercise leave me thinking? Is Linux 2.4.x really
>screwed up in NFS-land? This Solaris NFS replaces a Linux-based NFS
>server that the clients (linux and IRIX) liked just fine.
Yes; the Linux NFS server and client work together just fine but generally
only because the Lin