Given that quite a few folks ask "which is the best SSD?", I thought some
folks might find the following interesting:
http://www.storagenewsletter.com/news/flash/dramexchange-intel-ssds
-marc
P.S.: Apologies if the slightly off-topic post offends anyone.
On Tue, Mar 16, 2010 at 2:46 PM, Svein Skogen wrote:
>
> > Not quite a one liner. After you create the target once (step 3), you do
> not have to do that again for the next volume. So three lines.
>
>
> So ... no way around messing with guid numbers?
>
>
I'll write you a Perl script :)
-marc
On Tue, Mar 16, 2010 at 3:16 PM, Svein Skogen wrote:
>
> > I'll write you a Perl script :)
> >
>
> I think there are ... several people that'd like a script that gave us
> back some of the ease of the old shareiscsi one-off, instead of having
> to spend time on copy-and-pasting GUIDs they have ..
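For reference, the three-line dance such a script would wrap looks like
this (rough, untested sketch; it assumes the COMSTAR services are already
enabled, the pool/volume names are made up, and the GUID is the first field
on the last line of sbdadm's output):

itadm create-target                 # one-time setup

zfs create -V 100G tank/vol1
guid=$(sbdadm create-lu /dev/zvol/rdsk/tank/vol1 | awk 'END { print $1 }')
stmfadm add-view "$guid"            # no GUID copy-and-paste needed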
On Thu, Mar 18, 2010 at 2:44 PM, Chris Murray wrote:
> Good evening,
> I understand that NTFS & VMDK do not relate to Solaris or ZFS, but I was
> wondering if anyone has any experience of checking the alignment of data
> blocks through that stack?
>
NetApp has a great little tool called mbrscan/mbralign.
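Short of those, a crude manual check is whether each partition starts on a
4KiB (8-sector) boundary. A sketch, assuming you can point Linux fdisk at
the raw disk (or the flat VMDK) and that it prints its usual output format:

fdisk -lu /dev/sda | awk '
  $1 ~ /^\/dev\// {
    start = ($2 == "*") ? $3 : $2      # a boot flag shifts the columns
    printf "%s start=%s %s\n", $1, start, (start % 8) ? "MISALIGNED" : "aligned"
  }'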
Richard,
My challenge to you is that at least three vendors that I know of built
their storage platforms on FreeBSD. One of them sells $4bn/year of
product - pretty sure that eclipses all (Open)Solaris-based storage ;)
-marc
On 3/26/10, Richard Elling wrote:
> On Mar 26, 2010, at 4:46 AM, Edward
You'd run out of LUN IDs on the VMware side pretty quickly (255 from
what I remember).
It's also not really VMware best practice for block.
-marc
On 4/21/10, Robert Milkowski wrote:
> On 21/04/2010 07:41, Schachar Levin wrote:
>> Hi,
>> We are currently using NetApp file clone option to clone m
The L2ARC will continue to function.
-marc
On 5/4/10, Michael Sullivan wrote:
> HI,
>
> I have a question I cannot seem to find an answer to.
>
> I know I can set up a stripe of L2ARC SSD's with say, 4 SSD's.
>
> I know if I set up ZIL on SSD and the SSD goes bad, then the ZIL will be
> relocated
Hi Michael,
What makes you think striping the SSDs would be faster than round-robin?
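There's no RAID-0-style stripe to set up in the first place: you hand the
pool several cache devices and ZFS spreads reads across them. A sketch
(device names made up):

zpool add tank cache c7t0d0 c7t1d0 c7t2d0 c7t3d0
zpool iostat -v tank        # shows per-cache-device activity
zpool remove tank c7t0d0    # a dead/unwanted cache device can be dropped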
-marc
On Thu, May 6, 2010 at 1:09 PM, Michael Sullivan wrote:
> Everyone,
>
> Thanks for the help. I really appreciate it.
>
> Well, I actually walked through the source code with an associate today and
> we
Nice write-up, Marc.
Aren't the SuperMicro cards their funny "UIO" form factor? Wouldn't want
someone buying a card that won't work in a standard chassis.
-marc
On Tue, May 18, 2010 at 2:26 AM, Marc Bevand wrote:
> The LSI SAS1064E slipped through the cracks when I built the list.
> This is a
I agree wholeheartedly; you're paying to make the problem "go away" in an
expedient manner. That said, I see how much we spend on NetApp storage at
work and it makes me shudder ;)
I think someone was wondering whether the large storage vendors have their
own microcode on drives. I can tell you that N
On Tue, Feb 2, 2010 at 1:38 PM, Brandon High wrote:
> On Sat, Jan 16, 2010 at 9:47 AM, Simon Breden wrote:
> > Which consumer-priced 1.5TB drives do people currently recommend?
>
> I happened to be looking at the Hitachi product information, and
> noticed that the Deskstar 7K2000 appears to be s
I'm running the 500GB models myself, but I wouldn't say they're overly
noisy, and I've been doing ZFS/iSCSI/IOMeter/Bonnie++ stress testing with
them.
They "whine" rather than "click", FYI.
-marc
On Tue, Feb 2, 2010 at 2:58 PM, Simon Breden wrote:
> IIRC the Black range are meant to be the 'p
On Tue, Feb 2, 2010 at 3:45 PM, Peter Jeremy <peter.jer...@alcatel-lucent.com> wrote:
>
> OTOH, if I'm paying 10x the street drive price upfront, plus roughly
> the street price annually in "support", I can save a fair amount of
> money by just buying a pile of spare drives - when one fails, just
On Tue, Feb 2, 2010 at 3:11 PM, Frank Cusack wrote:
>
> That said, I doubt 2TB drives represent good value for a home user.
> They WILL fail more frequently and as a home user you aren't likely
> to be keeping multiple spares on hand to avoid warranty replacement
> time.
I'm having a hard time
I believe magical unicorn controllers and drives are both bug-free and
100% spec compliant. The leprechauns sell them, if you're trying to
find them ;)
-marc
On 2/2/10, David Magda wrote:
> On Feb 2, 2010, at 15:21, Tim Cook wrote:
>
>> How exactly do you suggest the drive manufacturers make thei
On Tue, Feb 2, 2010 at 9:52 PM, Toby Thain wrote:
>
> On 2-Feb-10, at 1:54 PM, Orvar Korvar wrote:
>
>> 100% uptime for 20 years?
>>
>> So what makes OpenVMS so much more stable than Unix? What is the
>> difference?
>>
>
>
> The short answer is that uptimes like that are VMS *cluster* uptimes.
>
As I previously mentioned, I'm pretty happy with the 500GB Caviar
Blacks that I have :)
One word of caution: failure and rebuild times with 1TB+ drives can be
a concern. How many spindles were you planning?
-marc
On 2/3/10, Simon Breden wrote:
> Sounds good.
>
> I was taking a look at the 1TB C
I think you'll do just fine then. And I think the extra platter will
work to your advantage.
-marc
On 2/3/10, Simon Breden wrote:
> Probably 6 in a RAID-Z2 vdev.
>
> Cheers,
> Simon
> --
> This message posted from opensolaris.org
I would go with cores (threads) rather than clock speed here. My home system
is a 4-core AMD @ 1.8GHz and performs well.
I wouldn't use drives that big, and you should be aware of the overheads of
RAIDZ[x].
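To put rough numbers on the latter (hypothetical 6 x 2TB raidz2): usable
space is (6 - 2) x 2TB = 8TB, but small random reads get roughly the IOPS
of a single drive per vdev, since every read touches the full stripe.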
-marc
On Thu, Feb 4, 2010 at 6:19 PM, Brian wrote:
> I am Starting to put together a h
Very interesting stats -- thanks for taking the time and trouble to share
them!
One thing I found interesting is that the Gen 2 X25-M has higher write IOPS
than the X25-E according to Intel's documentation (6,600 IOPS for 4K writes
versus 3,300 IOPS for 4K writes on the "E"). I wonder if it'd perf
On Thu, Feb 4, 2010 at 7:54 PM, Brian wrote:
> It sounds like the consensus is more cores over clock speed. Surprising to
> me since the difference in clock speed was over 1GHz. So, I will go with a
> quad core.
>
Four cores @ 1.8GHz = 7.2GHz of threaded performance ([Open]Solaris is
relative
On Thu, Feb 4, 2010 at 10:18 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 4 Feb 2010, Marc Nicholas wrote:
>
>> Very interesting stats -- thanks for taking the time and trouble to share
>> them!
>>
>> One thing I found interesting is that
On Thu, Feb 4, 2010 at 10:35 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Thu, 4 Feb 2010, Marc Nicholas wrote:
>
>>
>> The write IOPS between the X25-M and the X25-E are different since with
>> the X25-M, much more of your data gets com
Definitely use Comstar as Tim says.
At home I'm using 4*WD Caviar Blacks on an AMD Phenom X4 @ 1.8GHz and
only 2GB of RAM. I'm running svn132. No HBA - onboard SB700 SATA
ports.
I can, with IOmeter, saturate GigE from my WinXP laptop via iSCSI.
Can you toss the RAID controller aside and use the motherboard SATA ports?
How does lowering the flush interval help? If he can't ingress data
fast enough, faster flushing is a Bad Thing(tm).
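(My assumption is that the "flush interval" being discussed is the txg sync
interval; for reference, that's this /etc/system knob, untested here:

* seconds between txg syncs; 30 was the long-standing default,
* later builds moved to 5
set zfs:zfs_txg_timeout = 5
)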
-marc
On 2/10/10, Kjetil Torgrim Homme wrote:
> Bob Friesenhahn writes:
>> On Wed, 10 Feb 2010, Frank Cusack wrote:
>>
>> The other three commonly mentioned issues are:
>>
>> -
This is a Windows box, not a DB that flushes every write.
The drives are capable of over 2,000 IOPS (albeit with high latency, as
it's NCQ that gets you there), which would mean, even with sync flushes,
8-9MB/sec (2,000 IOPS x 4KB writes = ~8MB/sec).
-marc
On 2/10/10, Brent Jones wrote:
> On Wed, Feb 10, 2010 at 3:12 PM, M
Anyone else got stats to share?
Note: the below is 4*Caviar Black 500GB drives, 1*Intel X25-M set up as both
ZIL and L2ARC, decent ASUS mobo, 2GB of fast RAM.
-marc
r...@opensolaris130:/tank/myfs# /usr/benchmarks/bonnie++/bonnie++ -u root -d
/tank/myfs -f -b
Using uid:0, gid:0.
Writing intelligently...
On Thu, Feb 18, 2010 at 10:49 AM, Matt wrote:
> Here's IOStat while doing writes :
>
> r/s    w/s   kr/s    kw/s  wait actv wsvc_t asvc_t  %w  %b device
>  1.0  256.9    3.0  2242.9  0.3  0.1    1.3    0.5   11  12 c0t0d0
>  0.0  253.9    0.0  2242.9  0.3  0.1    1.0    0.4   10  11 c0t1d0
>  1.0
Run Bonnie++. You can install it with the Sun package manager and it'll
appear under /usr/benchmarks/bonnie++
Look for the command line I posted a couple of days back for a decent set of
flags to truly rate performance (using sync writes).
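For anyone who doesn't want to scroll back, that was:

/usr/benchmarks/bonnie++/bonnie++ -u root -d /tank/myfs -f -b

-f skips the slow per-character tests, and -b turns off write buffering
(an fsync() after every write), which is what forces the sync-write path.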
-marc
On Thu, Feb 18, 2010 at 11:05 AM, Matt wrote:
> Al
Isn't the dedupe bug fixed in svn133?
-marc
On Tue, Feb 23, 2010 at 9:21 AM, Jeffry Molanus wrote:
> There is no clustering package for it, and the available source seems very
> old; also the de-dup bug is there IIRC. So if you don't need HA cluster and
> dedup..
>
> BR, Jeffry
>
> > -Original Mes
send and receive?!
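Something like this, assuming the sending pool/dataset versions aren't newer
than what the receiving side understands (host and pool names made up):

zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh newhost zfs receive -Fd newpool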
-marc
On Tue, Feb 23, 2010 at 9:25 PM, Thomas Burgess wrote:
> When i needed to do this, the only way i could get it to work was to do
> this:
>
> Take some disks, use a Opensolaris Live CD and label them EFI
> Create a ZPOOL in FreeBSD with these disks
> copy my data from fr
On Wed, Feb 24, 2010 at 2:02 PM, Troy Campbell wrote:
>
> http://www.oracle.com/technology/community/sun-oracle-community-continuity.html
>
> Half way down it says:
> Will Oracle support Java and OpenSolaris User Groups, as Sun has?
>
> Yes, Oracle will indeed enthusiastically support the Java Use
On Fri, Feb 26, 2010 at 2:43 PM, Brandon High wrote:
>
> The drives I'm considering are:
>
> OCZ Vertex 30GB
> Intel X25V 40GB
> Crucial CT64M225 64GB
>
Personally, I'd go with the Intel product...but save up a few more pennies
and get the X25-M. The extra boost on read and write performance is
On Fri, Feb 26, 2010 at 2:42 PM, Lutz Schumann
wrote:
>
> Now If a virtual machine writes to the zvol, blocks are allocated on disk.
> Reads are now partial from disk (for all blocks written) and from ZFS layer
> (all unwritten blocks).
>
> If the virtual machine (which may be vmware / xen / hyper
That's a great deck, Chris.
-marc
Sent from my iPhone
On 2010-11-27, at 10:34 AM, Christopher George wrote:
>> I haven't had a chance to test a Vertex 2 PRO against my 2 EX, and I'd
>> be interested if anyone else has.
>
> I recently presented at the OpenStorage Summit 2010 and compared
> ex
Rocky,
Does DataON manufacture these units, or are they LSI OEM?
-marc
Sent from my iPhone
416.414.6271
On 2011-01-25, at 2:53 PM, Rocky Shek wrote:
> Philip,
>
> You can consider DataON DNS-1600 4U 24Bay 6Gb/s SAS JBOD Storage.
> http://dataonstorage.com/dataon-products/dns-1600-4u-6g-sas-to-sa