We have been using some LSI 1068/1078-based cards (RAID: AOC-USAS-H4IR
and JBOD: LSISAS3801E) with b87-b90 and with s10u5 without issue for some
time. Both the downloaded LSI driver and the bundled one have worked
fine for us through roughly six months of moderate use. The LSI JBOD card is
similar to the
LOL, I guess Sun forgot that they had xVM! I wonder if you could use a
converter (VMware Converter) to make it work on VirtualBox, etc.
I would also like to see this available as an upgrade to our 4500s.
Webconsole/ZFS just stinks because it only paints a tiny fraction of the
overall need for a web dr
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Bryan Cantrill
Sent: Tuesday, November 11, 2008 12:39 PM
To: Adam Leventhal
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] OpenStorage GUI
On Tue, Nov 11, 2008 at 09:31:26AM -0800, Adam Leventhal
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Chris Greer
Sent: Wednesday, November 12, 2008 3:20 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] OpenStorage GUI
Do you have any info on this upgrade path?
I can't seem to find anything ab
e out the whole guts in one tray (from
the bottom rear?).
-Andy
-Original Message-
From: Chris Greer [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 3:57 PM
To: Andy Lubel; zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] OpenStorage GUI
I was hoping for a swap out o
AFAIK, the drives are pretty much the same; it's the chipset that
changed, which also meant a change of CPU and memory.
-Andy
From: Tim [mailto:[EMAIL PROTECTED]
Sent: Wednesday, November 12, 2008 7:24 PM
To: Andy Lubel
Cc: Chris Greer; zfs-discuss
e are HP-UX 11i and OS X 10.4.9, and they both
have corresponding performance characteristics.
Any insight would be appreciated - we really like ZFS compared to any
filesystem we have EVER worked on and don't want to revert if at all possible!
TIA,
Andy
Yeah, I saw that post about the other arrays, but none for this EOL'd hunk of
metal. I have some 6130s, but hopefully by the time they are implemented we
will have retired this NFS stuff and stepped into zvol iSCSI targets.
Thanks anyway... back to the drawing board on how to resolve this!
-Andy
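For context, a minimal sketch of the zvol-as-iSCSI-target approach mentioned
above, using the shareiscsi property available on OpenSolaris builds of that
era (pool, volume name, and size are made up):

    # create a 100 GB zvol and advertise it as an iSCSI target
    zfs create -V 100g tank/iscsi/vol0
    zfs set shareiscsi=on tank/iscsi/vol0
    # confirm what the iscsitgt daemon is now exporting
    iscsitadm list target -v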
cache memsize : 1024 MBytes
fc_topology: auto
fc_speed : 2Gb
disk_scrubber : on
ondg : befit
Am I missing something? As far as the R/W test, I will tinker some more and
paste the results soonish.
Thanks in advance,
Andy Lubel
-Original Message-
So what you are saying is that if we were using NFSv4, things should be
dramatically better?
Do you think this applies to any NFSv4 client, or only Sun's?
-Original Message-
From: [EMAIL PROTECTED] on behalf of Erblichs
Sent: Sun 4/22/2007 4:50 AM
To: Leon Koll
Cc: zfs-discuss@opensolar
What I'm saying is that ZFS doesn't play nice with NFS in all the scenarios I could
think of:
- A single second disk in a V210 (sun72g), write cache on and off: ~1/3 the
performance of UFS when writing files using dd over an NFS mount to the same
disk.
- Two RAID 5 volumes composed of 6 spindles ea
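The dd write test referenced above was presumably something along these lines
(a sketch; block size, count, and mount point are assumptions, not from the
original post):

    # on the NFS client, stream a 1 GB file through the mount
    dd if=/dev/zero of=/mnt/nfs/ddtest bs=128k count=8192
    sync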
They do need to start on the "next" filesystem, and it seems very ideal
for Apple. If they don't, then Apple will be making a huge mistake,
because whatever filesystems exist now, ZFS has already pretty much trumped them
on almost every level except maturity.
I'm expecting ZFS and iSCSI (initiator and ta
Anyone who has an Xraid should have one (or two) of these BBC modules.
Good mojo.
http://store.apple.com/1-800-MY-APPLE/WebObjects/AppleStore.woa/wa/RSLID?mco=6C04E0D7&nplm=M8941G/B
Can you tell I <3 Apple?
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of
I think it will be in the next-next (10.6) OS X; we just need to get Apple to
stop playing with their silly cell phone (which I can't help but want, damn
them!).
I have a similar situation at home, but what I do is use Solaris 10 on a
cheapish x86 box with 6 400 GB IDE/SATA disks; I then make them into
I'm using:
set zfs:zil_disable = 1
on my SE6130 with ZFS accessed over NFS, and write performance almost
doubled. Since you have BBC, why not just set that?
-Andy
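For anyone following along, a sketch of how zil_disable was typically applied
on builds of that vintage. It is a global tunable that disables the intent log
for every pool, so treat it as a benchmarking knob rather than a safe default:

    # persistent: add to /etc/system and reboot
    set zfs:zil_disable = 1

    # or flip it live with mdb as root (takes effect as filesystems are mounted)
    echo zil_disable/W0t1 | mdb -kw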
On 5/24/07 4:16 PM, "Albert Chin"
<[EMAIL PROTECTED]> wrote:
> On Thu, May 24, 2007 at 11:55:58AM -0700, Grant Kelly wrote:
>>
Andy Lubel
Application Administrator / IT Department
my data on the RAIDZ and remount the ZFS after
> reinstall, or am I screwed?
>
> Please help ...
On 9/4/07 4:34 PM, "Richard Elling" <[EMAIL PROTECTED]> wrote:
> Hi Andy,
> my comments below...
> note that I didn't see zfs-discuss@opensolaris.org in the CC for the
> original...
>
> Andy Lubel wrote:
>> Hi All,
>>
>> I have been as
On 9/6/07 2:51 PM, "Joe S" <[EMAIL PROTECTED]> wrote:
> Has anyone here attempted to store their MS Exchange data store on a
> ZFS pool? If so, could you please tell me about your setup? A friend
> is looking for a NAS solution, and may be interested in a ZFS box
> instead of a netapp or something
guess we can probably be OK using SXCE (as Joyent did).
Thanks,
Andy Lubel
On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>
> Hey Andy,
>
> On Tue, Sep 18, 2007 at 12:59:02PM -0400, Andy Lubel wrote:
>> I think we are very close to using zfs in our production environment.. Now
>> that I have snv_72 installed an
On 9/18/07 2:26 PM, "Neil Perrin" <[EMAIL PROTECTED]> wrote:
>
>
> Andy Lubel wrote:
>> On 9/18/07 1:02 PM, "Bryan Cantrill" <[EMAIL PROTECTED]> wrote:
>>
>>> Hey Andy,
>>>
>>> On Tue, Sep 18, 2007 at 12:59:
corrupted data.
>
> That would also be my preference, but if I were forced to use hardware
> RAID, the additional loss of storage for ZFS redundancy would be painful.
>
> Would anyone happen to have any good recommendations for an enterprise
> scale storage subsystem suitab
On 9/20/07 7:31 PM, "Paul B. Henson" <[EMAIL PROTECTED]> wrote:
> On Thu, 20 Sep 2007, Tim Spriggs wrote:
>
>> It's an IBM re-branded NetApp which we are using for NFS and
>> iSCSI.
Yeah, it's fun to see IBM compete with its OEM provider NetApp.
>
> Ah, I see.
>
> Is it comparable s
On 9/25/07 3:37 AM, "Sergiy Kolodka" <[EMAIL PROTECTED]>
wrote:
> Hi Guys,
>
> I'm playing with Blade 6300 to check performance of compressed ZFS with Oracle
> database.
> After some really simple tests I noticed that default (well, not really
> default, some patches applied, but definitely noo
I gave up.
The 6120 I just ended up not doing ZFS on. And for our 6130, since we don't
have Santricity or the sscs command to set it, I just decided to export each
disk and create an array with ZFS (and a RAMSAN ZIL), which made performance
acceptable for us.
I wish there was a firmware that just ma
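A sketch of the layout described above - per-disk LUNs from the array with
ZFS providing the redundancy, plus a separate log device on the SSD (device
names and vdev width are made up):

    # redundancy in ZFS across the individual 6130 LUNs,
    # intent log on the RAMSAN LUN
    zpool create tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 c4t5d0 log c5t0d0
    zpool status tank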
Yeah, I'm pumped about this new release today... such harmony in my
storage to be had. Now if only OS X had a native iSCSI target/initiator!
-Andy Lubel
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Peter Woodman
Sent: Friday, October 26, 2007 8:14
Jumpering drives by removing the cover? Do you mean opening the chassis
because they aren't removable from the outside?
Your cable is longer than 1 meter inside of a chassis??
I think SATA I is 2 meters and SATA II is 1 meter.
As far as a system setting for demoting these to SATA I, I don't know, b
Marvell controllers work great with Solaris.
The Supermicro AOC-SAT2-MV8 is what I currently use. I bought it on a
recommendation from this list, actually. I think I paid $110 for mine.
-Andy
On 11/2/07 4:10 PM, "Peter Schuller" <[EMAIL PROTECTED]> wrote:
> Hello,
>
> Short version: Can anyone reco
On 11/15/07 9:05 AM, "Robert Milkowski" <[EMAIL PROTECTED]> wrote:
> Hello can,
>
> Thursday, November 15, 2007, 2:54:21 AM, you wrote:
>
> cyg> The major difference between ZFS and WAFL in this regard is that
> cyg> ZFS batch-writes-back its data to disk without first aggregating
> cyg> it in N
Areca, nice!
Any word on whether 3ware has come around yet? I've been bugging them for
months to do something to get a driver made for Solaris.
-Andy
From: [EMAIL PROTECTED] on behalf of James C. McPherson
Sent: Thu 11/22/2007 5:06 PM
To: mike
Cc: zfs-discu
With my (COTS) LSI 1068 and 1078 based controllers I get consistently
better performance when I export all disks as jbod (MegaCli -
CfgEachDskRaid0).
I even went through all the loops and hoops with 6120's, 6130's and
even some SGI storage and the result was always the same; better
performa
> With my (COTS) LSI 1068 and 1078 based controllers I get consistently
> better performance when I export all disks as jbod (MegaCli -
> CfgEachDskRaid0).
>
>
>> Is that really 'all disks as JBOD'? or is it 'each disk as a single
>> drive RAID0'?
Single-disk RAID0:
./MegaCli -CfgEachDskRaid0
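For reference, a sketch of that workflow, under the assumption that the
redundancy then moves into ZFS (the adapter index and device names are made
up):

    # one single-drive RAID0 volume per physical disk on adapter 0
    ./MegaCli -CfgEachDskRaid0 -a0
    # then let ZFS own the redundancy across the exposed volumes
    zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0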
On Feb 26, 2008, at 10:23 AM, Rich Teer wrote:
> On Tue, 26 Feb 2008, Joerg Schilling wrote:
>
>> Hi Rich, I asked you a question that you did not yet answer:
>
> Hi Jörg,
>
>> Are you interested only in full backups and in the ability to
>> restore single
>> files from that type of backups?
>>
On Mar 11, 2008, at 4:58 PM, Bart Smaalders wrote:
> Frank Bottone wrote:
>> I'm using the latest build of opensolaris express available from
>> opensolaris.org.
>>
>> I had no problems with the install (its an AMD64 x2 3800+, 1gb
>> physical ram, 1 ide drive for the os and 4*250GB sata drives at
Paul B. Henson wrote:
>> On Thu, 8 May 2008, Mark Shellenbaum wrote:
>>
>>> we already have the ability to allow users to create/destroy snapshots
>>> over NFS. Look at the ZFS delegated administration model. If all you
>>> want is snapshot creation/destruction then you will need to grant
>>> "s
filesystem
creation upon connection to an AD-joined CIFS server? Samba had some cool
stuff with preexec, and I just wonder if something like that is available for
the kernel-mode CIFS driver.
-Andy
-Original Message-
From: [EMAIL PROTECTED] on behalf of Andy Lubel
Sent: Sun 5/11/2008 2:24 A
On May 14, 2008, at 10:39 AM, Chris Siebenmann wrote:
> | Think what you are looking for would be a combination of a snapshot
> | and zfs send/receive, that would give you an archive that you can
> use
> | to recreate your zfs filesystems on your zpool at will at later
> time.
>
> Talking of
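Since the quoted discussion is about using a snapshot plus zfs send/receive as
an archive, here is a minimal sketch of that pattern (dataset and file names
are placeholders):

    # snapshot, serialize to a file, and later recreate the filesystem
    zfs snapshot tank/home@archive
    zfs send tank/home@archive > /backup/home-archive.zfs
    zfs receive tank/home_restored < /backup/home-archive.zfs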
On May 16, 2008, at 10:04 AM, Robert Milkowski wrote:
> Hello James,
>
>
>>> 2) Does anyone have experiance with the 2540?
>
> JCM> Kinda. I worked on adding MPxIO support to the mpt driver so
> JCM> we could support the SAS version of this unit - the ST2530.
>
> JCM> What sort of experience are
The limitation existed in every Sun-branded Engenio array we tested -
2510, 2530, 2540, 6130, 6540. The limitation is on volumes: you will not be able
to present a LUN larger than that magical 1.998 TB. I think it is a limitation in
both CAM and the firmware. Can't do it with sscs either...
On May 21, 2008, at 11:15 AM, Bob Friesenhahn wrote:
> I encountered an issue that people using OS-X systems as NFS clients
> need to be aware of. While not strictly a ZFS issue, it may be
> encounted most often by ZFS users since ZFS makes it easy to support
> and export per-user filesystems.
On May 27, 2008, at 1:44 PM, Rob Logan wrote:
>
>> There is something more to consider with SSDs uses as a cache device.
> why use SATA as the interface? perhaps
> http://www.tgdaily.com/content/view/34065/135/
> would be better? (no experience)
We are pretty happy with RAMSAN SSD's (ours is RAM
Did you try mounting with NFS version 3?
mount -o vers=3
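Spelled out, with server and mount point as placeholders:

    # force NFSv3 on the client side
    mount -o vers=3 nfsserver:/export/data /mnt/data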
On May 28, 2008, at 10:38 AM, kevin kramer wrote:
> that is my thread and I'm still having issues even after applying
> that patch. It just came up again this week.
>
> [locahost] uname -a
> Linux dv-121-25.centtech.com 2.6.18-53.1.14.el
On May 29, 2008, at 9:52 AM, Jim Klimov wrote:
> I've installed SXDE (snv_89) and found that the web console only
> listens on https://localhost:6789/ now, and the module for ZFS admin
> doesn't work.
It works for me out of the box without any special mojo. In order to get
the webconsole to list
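The reply is cut off, but the usual recipe for making the Java web console
listen on more than localhost looks roughly like this (a sketch; verify the
SMF property names on your build):

    # allow the web console to listen on all interfaces, then restart it
    svccfg -s svc:/system/webconsole setprop options/tcp_listen=true
    svcadm refresh svc:/system/webconsole
    smcwebserver restart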
Hello,
I've got a real doozie... We recently implemented a b89 box as a ZFS/NFS/
CIFS server. The NFS client is HP-UX (11.23).
What's happening is that when our DBA edits a file on the NFS mount with
vi, it will not save.
I removed vi from the mix by doing 'touch /nfs/file1' and then
'echo abc > /nfs/file1'
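A capture like the GETATTR/SETATTR exchange quoted further down can be taken
on the server with snoop (interface and client name are placeholders):

    # watch NFS traffic between this server and the HP-UX client
    snoop -d bge0 host hpux-client and port 2049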
, in a couple of months we will be replacing this server
with new X4600s.
Thanks for the help,
-Andy
On Jun 5, 2008, at 6:19 PM, Robert Thurlow wrote:
> Andy Lubel wrote:
>
>> I've got a real doozie.. We recently implemented a b89 as zfs/
>> nfs/ cifs server. The
On Jun 6, 2008, at 11:22 AM, Andy Lubel wrote:
> That was it!
>
> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
> hpux-is-old.com -> nearline.host NFS C SETATTR3 FH=F6B3
> nearline.host -> hpux-is-old.c
On Jun 9, 2008, at 12:28 PM, Andy Lubel wrote:
>
> On Jun 6, 2008, at 11:22 AM, Andy Lubel wrote:
>
>> That was it!
>>
>> hpux-is-old.com -> nearline.host NFS C GETATTR3 FH=F6B3
>> nearline.host -> hpux-is-old.com NFS R GETATTR3 OK
>> hpux-is-ol
On Jun 11, 2008, at 11:35 AM, Bob Friesenhahn wrote:
> On Wed, 11 Jun 2008, Al Hopper wrote:
>> disk drives. But - based on personal observation - there is a lot of
>> hype surrounding SSD reliability. Obviously the *promise* of this
>> technology is higher performance and *reliability* with lo