Thank you for your input, folks. The MTU 9000 idea worked like a charm. I have
the Intel X25 also, but the capacity was not what I am after for a 6-device
array. I have looked and looked at review after review, and that's why I
started with the Intel path, albeit that firmware upgrade in May wa
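For anyone replicating the jumbo-frame change above, a minimal sketch on OpenSolaris (the interface name nxge0 is an assumption; on some drivers the link must be unplumbed before the MTU property takes effect, and every switch and host on the path must allow jumbo frames):

```shell
# Raise the link MTU to 9000 (jumbo frames). nxge0 is an assumed
# interface name -- substitute your own from dladm show-link.
dladm set-linkprop -p mtu=9000 nxge0
ifconfig nxge0 mtu 9000

# Confirm the setting took on both the link and the interface:
dladm show-linkprop -p mtu nxge0
ifconfig nxge0 | grep mtu
```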
On Fri, Oct 9, 2009 at 9:25 PM, Derek Anderson wrote:
>
> GigE wasn't giving me the performance I had hoped for, so I sprang for some
> 10GbE cards. So what am I doing wrong?
>
> My setup is a Dell 2950 without a RAID controller, just a SAS6 card. The
> setup is as such:
> mirror rpool (bo
On Fri, Jul 17, 2009 at 2:42 PM, Brandon High wrote:
> The keynote was given on Wednesday. Any more willingness to discuss
> dedup on the list now?
The following video contains a de-duplication overview from Bill and Jeff:
https://slx.sun.com/1179275620
Hope this helps,
- Ryan
--
http://prefet
On Mon, 12 Oct 2009, Mark Shellenbaum wrote:
> Does it only fail under NFS or does it only fail when inheriting an ACL?
It only fails over NFS from a Linux client; locally it works fine, and from
a Solaris client it works fine. It also only seems to fail on directories;
files receive the correct
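One way to narrow down whether the inheritance itself or the NFS path is at fault is to set an inheritable ACE on the server and create directories from each side (Solaris syntax; the path and group name are assumptions):

```shell
# Add an ACE to the parent that should be inherited by new files (f)
# and directories (d):
chmod A+group:staff:read_data/execute:fd:allow /export/home/test

# Create a child locally and inspect its ACL; then repeat the mkdir
# from the Linux and Solaris NFS clients and compare the output:
mkdir /export/home/test/local
ls -dV /export/home/test/local
```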
Paul B. Henson wrote:
We're running Solaris 10 with ZFS to provide home and group directory file
space over NFSv4. We've run into an interoperability issue between the
Solaris NFS server and the Linux NFS client regarding the sgid bit on
directories and assigning appropriate group ownership on newly created
subdirector
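The sgid semantics in question can be reproduced locally, without NFS in the picture at all; a minimal sketch (the path is illustrative):

```shell
# Create a directory, set the sgid bit, then make a subdirectory in it.
# POSIX says the child inherits the parent's group; on Solaris and Linux
# a child directory also inherits the sgid bit itself.
mkdir -p /tmp/sgid-demo
chmod g+s /tmp/sgid-demo
mkdir -p /tmp/sgid-demo/child
ls -ld /tmp/sgid-demo/child   # group triad should show 's', e.g. drwxr-sr-x
```

If this works locally but not through a given client, the problem is in the client/server NFSv4 attribute handling rather than in ZFS.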
On Sat, October 10, 2009 12:02, Harry Putnam wrote:
>
> What do real live administrators who administer important data do about
> meta info like that?
Same thing I do about directories -- I name them meaningfully. So I've
got /home/ddb which is the home directory for user ddb and is mounted from
Hi Richard;
You are right, ZFS is not a shared file system, so it cannot be used for RAC
unless you have a 7000 series disk system.
In Exadata, ASM is used for storage management, where the F20 can perform as
a cache.
Best regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
Hi James;
The product will be launched in a very short time. You can learn pricing from
Sun. Please keep in mind that Logzilla and the F20 are designed with slightly
different tasks in mind. Logzilla is an extremely fast and reliable write
device, while the F20 can be used for many different loads (read or writ
Hi All;
I am not the right person to talk about the Solaris/ZFS roadmap, however you
can talk with your Sun account manager about the 7000 series roadmap if you
sign an NDA, which can give you more information.
Best regards
Mertol
Mertol Ozyoney
Storage Practice - Sales Manager
Sun Microsystems, TR
> "sj" == Shawn Joy writes:
sj> Can you explain in, simple terms, how ZFS now reacts
sj> to this?
I can't. :) I think Victor's long message made a lot of sense. The
failure modes with a SAN are not simple. At least there is the
difference of whether the target's write buffer was
Hi,
On 12.10.2009 at 13:29, Richard Elling wrote:
I've not implemented qmail, but it appears to be just an MTA.
These do store-and-forward, so it is unlikely that they need to
use sync calls. It will create a lot of files, but that is usually
done async.
Async I/O for mail servers is a big n
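The sync-vs-async distinction the thread is debating can be seen directly with dd (GNU dd shown for illustration; Solaris dd lacks oflag, so treat this as a sketch of the concept, and expect the absolute timings to depend entirely on the storage):

```shell
# Buffered (async) writes: data lands in the page cache and is flushed
# to disk later, so the writer returns quickly.
dd if=/dev/zero of=/tmp/async.dat bs=4k count=256 2>/dev/null

# Synchronous writes: oflag=dsync forces each block to stable storage
# before dd continues -- the path a mail queue takes if it syncs per
# message, and the reason fast log devices matter for such loads.
dd if=/dev/zero of=/tmp/sync.dat bs=4k count=256 oflag=dsync 2>/dev/null
```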
Hua,
The behavior below is described here:
http://docs.sun.com/app/docs/doc/819-5461/setup-1?a=view
The top-level /tank file system cannot be removed, so it is
less flexible than using descendent datasets.
If you want to create snapshot or clone and later promote
the /tank clone, then it is bes
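The layout the doc recommends can be sketched as follows (the pool and dataset names are assumptions): keep /tank itself empty and put data in descendent datasets, which keeps the full snapshot/clone/promote cycle available:

```shell
# Put data in a descendent dataset rather than in /tank itself:
zfs create tank/data

# Snapshot and clone the descendent; promote the clone later so it no
# longer depends on the origin snapshot and the origin can be destroyed:
zfs snapshot tank/data@snap1
zfs clone tank/data@snap1 tank/data-clone
zfs promote tank/data-clone
```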
Richard Elling wrote:
On Oct 12, 2009, at 2:12 AM, tak ar wrote:
I'm not aware of email services using sync regularly. In my experience
with large email services, the response time of the disks used for
database and indexes is the critical factor (for > 600 messages/sec
delivered, caches don't matter :-)
Performance of the disks for the
I have re-run zdb -l /dev/dsk/c9t4d0s0 as I should have the first time (thanks
Nicolas).
Attached output.
--
This message posted from opensolaris.org

# zdb -l /dev/dsk/c9t4d0s0
LABEL 0
version=14
nam
Hi Victor, I have tried to re-attach the detail from /var/adm/messages
--
This message posted from opensolaris.org

Oct 11 17:16:55 opensolaris unix: [ID 836849 kern.notice]
Oct 11 17:16:55 opensolaris ^Mpanic[cpu0]/thread=ff000b6f7c60:
Oct 11 17:16:55 opensolaris genunix: [ID 361072 kern.noti
> > Use the BBWC to maintain high IOPS when the X25-E's
> > write cache is disabled?
>
> It should certainly help. Note that in this case your relatively
> small battery-backed memory is accepting writes for both the X25-E
> and for the disk storage, so the BBWC memory becomes 1/2 as useful and
On 11.10.09 12:59, Darren Taylor wrote:
I have searched the forums and google wide, but cannot find a fix for the issue
I'm currently experiencing. Long story short - I'm now at a point where I
cannot even import my zpool (zpool import -f tank) without causing a kernel
panic
I'm running OpenS