So I have just finished building something similar to this...
I'm finally replacing my Pentium II 400MHz fileserver!
My setup is:
OpenSolaris 2008.11
http://www.newegg.com/Product/Product.aspx?Item=N82E16813138117
http://www.newegg.com/Product/Product.aspx?Item=N82E16820145184
http://www.newegg.c
Right now we are not using Oracle; we are using iorate, so we don't have separate logs. When the testing was with Oracle, the logs were separate. This test represents the 13 data LUNs that we had during those tests.
The reason it wasn't striped with vxvm is that the original comparison test was zfs. With the datafiles recreated after the recordsize change, the result was 3079 IOPS, so now we are at least in the ballpark.
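For reference, the recordsize change is just a dataset property, and it only applies to files written after the change, which is why the datafiles had to be recreated. A minimal sketch, assuming an 8K Oracle block size and a made-up dataset name:

    # match ZFS recordsize to the Oracle db_block_size (8K assumed here)
    zfs set recordsize=8k tank/oradata     # tank/oradata is a placeholder
    zfs get recordsize tank/oradata
    # existing datafiles keep their old block size; recreate or copy them
    # so they get rewritten with the new recordsize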
That should be "set zfs:zfs_nocacheflush=1" in the post above; that was my typo in the post.
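For anyone searching the archives later, that tunable goes in /etc/system and takes effect after a reboot; it tells ZFS to skip cache-flush requests, so it is only safe when the array has battery-backed/NVRAM cache. Sketch:

    # /etc/system -- skip ZFS cache flushes (only with NVRAM-protected storage)
    set zfs:zfs_nocacheflush=1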
So to give a little background on this, we have been benchmarking Oracle RAC on
Linux vs. Oracle on Solaris. In the Solaris test, we are using vxvm and vxfs.
We noticed that the same Oracle TPC benchmark at roughly the same transaction
rate was causing twice as many disk I/Os to the backend DMX
Do you have any info on this upgrade path?
I can't seem to find anything about this...
I would also like to throw in my $0.02: I would like to see the software offered to existing Sun X4540 (or upgraded X4500) customers.
Chris G.
I've been looking at this board myself for the same thing. The blog below is regarding the D945GCLF, but looking at the two, it looks like the processor is the only thing that is different (single core vs. dual core).
http://blogs.sun.com/PotstickerGuru/entry/solaris_running_on_intel_atom
iostat -En will show you device and serial number (at least on the Sun hardware
I've used).
My old way of finding drives is to run format and run a disk read test that
isn't destructive and look for the drive with the access light going crazy.
Most drives still have a very small LED on them t
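Roughly, the two approaches look like this (the disk name is just a placeholder):

    # map c#t#d# device names to vendor, model and serial number
    iostat -En

    # light up one specific drive: format's read test is non-destructive
    format                # pick the suspect disk, e.g. c5t3d0
    format> analyze
    analyze> read         # the access LED that stays busy is your disk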
If you are having trouble booting to the mirrored drive, the following is what
we had to do to correctly boot off the mirrored drive in a Thumper mirrored
with disksuite. The root drive is c5t0d0 and the mirror is c5t4d0. The BIOS
will try those 2 drives.
Just a note, if it ever switches to c5t
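For anyone setting this up fresh, the parts that usually matter are the GRUB boot blocks on the second half of the mirror and enough metadb replicas to survive a dead disk. A rough sketch (the slice numbers are examples, not from the original setup):

    # put GRUB stage1/stage2 on the mirror half so the BIOS can boot from it
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c5t4d0s0

    # keep state database replicas on both disks so SVM can come up with one missing
    metadb -a -c 3 c5t0d0s7 c5t4d0s7
    metadb        # verify replica count and locations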
I was using EMC's iorate for the comparison.
ftp://ftp.emc.com/pub/symm3000/iorate/
I had 4 processes running on the pool in parallel doing 4K sequential writes.
I've also been playing around with a few other benchmark tools (I just had results from other storage tests with this same iorate test).
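iorate drives that pattern from its own config, but if anyone wants to approximate the load without it, four parallel 4K sequential writers are easy to fake with dd (this is just an illustrative stand-in, not iorate syntax; the pool path and sizes are made up):

    # 4 parallel sequential writers, 4K I/Os, 1 GB per file
    for i in 1 2 3 4; do
        dd if=/dev/zero of=/testpool/iofile$i bs=4k count=262144 &
    done
    wait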
I currently have a traditional NFS cluster hardware setup in the lab (2 hosts with FC-attached JBOD storage) but no cluster software yet. I've been wanting
to try out the separate ZIL to see what it might do to boost performance. My
problem is that I don't have any cool SSD devices, much less o
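When you do get hold of something fast enough, adding the separate log device is a one-liner; a sketch with made-up pool and device names (note that on releases of that era a log device could not be removed again once added):

    # add a dedicated slog to an existing pool
    zpool add tank log c3t0d0
    # or mirror it for safety
    zpool add tank log mirror c3t0d0 c4t0d0
    zpool status tank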
We've used enclosures manufactured by Xyratex (http://www.xyratex.com/).
Several RAID vendors have used these disks in their systems. One reseller is
listed below (the one we used got bought out). I've been very happy with these
enclosures and a Qlogic HBA.
As we have retired some of the RAID
You probably want to add a -R / to your import command. That should keep
a node from automatically importing your zfs pool when it reboots after a crash.
See this thread...
http://www.opensolaris.org/jive/thread.jspa?threadID=13544&tstart=0
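A sketch of what that looks like (the pool name is made up); -R imports the pool under an alternate root and keeps it out of the cachefile, so the node won't re-import it on its own at the next boot:

    zpool import -R / tank
    zpool get altroot,cachefile tank    # cachefile should show 'none'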