[zfs-discuss] Re: SAS support on Solaris

2007-01-23 Thread David J. Orman
*snip snip*
> AFAIK only Adaptec and LSI Logic are making controllers today.  With so
> few manufacturers it's a scary investment.  (Of course, someone please
> correct me if you know of other players.)

There are a few others. Those are (of course) the major players (and with big
names like that making them, you can be pretty sure they are going to be around
for a while...)

That said, I know of ARIO Data ( http://www.ariodata.com/products/controllers/
) making some (or ramping up to make them). I'm sure there are some others.
SAS is certainly not as common as SATA/SCSI/etc. right now; until recently you
couldn't even buy drives. Now, the fastest drive I've seen is SAS-only (a 15k
2.5" Seagate). I'm pretty sure that when Seagate is making its fastest product
SAS, SAS has been accepted. :p

http://techreport.com/onearticle.x/11638
 
 


Re: [zfs-discuss] Did ZFS boot/root make it into Solaris Express Developer Edition 9/07

2007-10-25 Thread David J. Orman
Any idea when the installer integration for ZFS root/boot will happen?
 
 


Re: [zfs-discuss] How's zfs RAIDZ fault-tolerant ???

2006-05-26 Thread David J. Orman
> RAID-Z is single-fault tolerant.  If you take out two disks, then you
> no longer have the required redundancy to maintain your data.  Build 42
> should contain double-parity RAID-Z, which will allow you to sustain two
> simultaneous disk failures without data loss.

I'm not sure if this has been mentioned elsewhere (I didn't see it), but will
this double parity be backported into Solaris 10 in time for the U2 release?
This is a sorely needed piece of functionality for my deployment (and I'm sure
many others').

Thanks,
David


[zfs-discuss] question about ZFS performance for webserving/java

2006-06-01 Thread David J. Orman
Just as a hypothetical (not looking for exact science here, folks), how would
ZFS fare (in your educated opinion) in this situation:

1 - Machine with 8 10k RPM SATA drives. High-performance machine of sorts (i.e.
dual proc, etc. - let's weed out CPU/memory/bus bandwidth as much as possible
from the equation).

2 - Workload is webserving - well, application serving: Java app server 9,
various Java applications requiring database access (mostly small tables/data
elements, but millions and millions of rows).

3 - App server would be running in one zone, with a (NFS) mounted ZFS 
filesystem as storage.

4 - DB server (PgSQL) would be running in another zone, with a (NFS) mounted 
ZFS filesystem as storage.

5 - Multiple-disk redundancy is needed, so I'm assuming two raid-z pools of 3
drives each, mirrored, is the solution. If people have a better suggestion,
tell me! :P

6 - OS will be Sol10U2, OS/Root FS will be installed on mirrored drives, using 
UFS (my only choice..)

Now, please eliminate CPU/RAM from this equation; assume the server has 4 cores
of goodness powering it and 32 GB of RAM. No, running on a RAM disk isn't what
I'm asking for. :P

* NFS is optional - I'm just curious what the difference would be, as getting a
T1000 + building an external storage box is an option. I just can't justify
Sun's crazy storage pricing at the moment.
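
For the NFS variant, my rough mental model is just sharing the ZFS filesystem
from the storage box and mounting it on the T1000 - a sketch only, with made-up
pool/host names:

  # on the storage box: share the dataset over NFS
  zfs set sharenfs=on tank/db
  # on the T1000: mount it
  mount -F nfs storagebox:/tank/db /db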

How would ZFS perform (educated opinions - I realize I won't be getting exact
answers) in this situation? I can't be more specific because I don't have the
HW in front of me; I'm trying to get a feel for the "correct" solution before I
make huge purchases.

If anything else is needed, please feel free to ask!

Thanks,
David


Re: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-01 Thread David J. Orman
- Original Message -
From: Matthew Ahrens <[EMAIL PROTECTED]>
Date: Thursday, June 1, 2006 12:30 pm
Subject: Re: [zfs-discuss] question about ZFS performance for webserving/java

 
> Why would you use NFS?  These zones are on the same machine as the
> storage, right?  You can simply export filesystems in your pool to the
> various zones (see zfs(1m) and zonecfg(1m) manpages).  This will result
> in better performance.

This is why I noted it as optional and gave my reasoning (a T1000 with a
separate box handling storage, exporting via NFS to the T1000). I'm not
investing in the black hole that is FC, no way, and I don't know how to cram 8+
SATA ports into a T1000. I can't justify the price of the T2000 at this point.
But again, NFS was *optional*. Using a home-built box, I would be using
directly attached storage.
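
For what it's worth, with direct-attached storage my understanding of the
zonecfg route Matthew mentions is roughly the following - a sketch only, with
made-up pool and zone names:

  # delegate a dataset to the database zone
  zonecfg -z dbzone
  zonecfg:dbzone> add dataset
  zonecfg:dbzone:dataset> set name=tank/db
  zonecfg:dbzone:dataset> end
  zonecfg:dbzone> commit
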
> 
> 
> There is no need for multiple pools.  Perhaps you meant two raid-z
> groups (aka "vdevs") in a single pool?  Also, wouldn't you want to use
> all 8 disks, therefore use two 4-disk raid-z groups?  This way you would
> get 3 disks worth of usable space.
Yes, I meant what you specified - sorry for my lack of knowledge. :)

I need two of the disks for the root FS, because U2 won't allow me to put the
root FS on ZFS. Otherwise, I'd love to use all 8.
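
If I could use all 8, my understanding is that the single pool with two raid-z
groups would be created with something like this (disk names are placeholders):

  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                    raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0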

> Depending on how much space you need, you should consider using a single
> double-parity RAID-Z group with your 8 disks.  This would give you 6
> disks worth of usable space.  Given that you want to be able to tolerate
> two failures, that is probably your best solution.  Other solutions
> would include three 3-way mirrors (if you can fit another drive in your
> machine), giving you 3 disks worth of usable space.

That would be ideal; unfortunately, that won't be in U2 for various reasons. (I
won't argue this point, although I really think the "process" is hurting
Solaris in this regard - this should have been included; lots of people need at
least two-disk redundancy.)
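
Once double-parity does ship, my assumption is the whole-8-disk layout you
describe would look something like this (disk names are placeholders):

  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0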

Thanks,
David


Re: Re[2]: [zfs-discuss] question about ZFS performance for webserving/java

2006-06-01 Thread David J. Orman
- Original Message -
From: Robert Milkowski <[EMAIL PROTECTED]>
Date: Thursday, June 1, 2006 1:17 pm
Subject: Re[2]: [zfs-discuss] question about ZFS performance for webserving/java

> Hello David,
> 
> The system itself won't take too much space.
> You can create one large slice from the rest of the disk, and the same
> slices on the rest of the disks. Then you can create one large pool
> from 8 such slices. Remaining space on the rest of the disks could be
> used for swap, for example, or another smaller pool.
Ok, sorry - I'm not up to speed on Solaris/software RAID types. So you're
saying create a couple of slices on each disk. One set of slices I'd use to
make a RAID of some sort (maybe 10) and put UFS on that (for the initial
install - can this be done at installation time?), and then use the rest of the
slices on the disks for the ZFS/RAID for everything else?
> 
> Now I would consider creating raid-10, and not raid-z, something like:
> 
> zpool create local mirror s1 s2 mirror s3 s4 mirror s5 s6 mirror s7 s8
> 
> Then I would probably create local/zones/db and put the zone there, then
> create any additional needed filesystems in that one pool.
> 
> btw. in such a config the write cache would be off by default - I'm not
> sure whether that will be a problem or not.

Ok. I'll keep that in mind. I'm just making sure this is feasible. The 
technical details I can work out later.
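
Just to restate my understanding of the suggestion (slice and pool names are
made up, untested):

  # mirrored pool built from the large slice on each of the 8 disks
  zpool create local mirror c1t0d0s7 c1t1d0s7 mirror c1t2d0s7 c1t3d0s7 \
                    mirror c1t4d0s7 c1t5d0s7 mirror c1t6d0s7 c1t7d0s7
  # filesystems for the zones inside that one pool
  zfs create local/zones
  zfs create local/zones/db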

David


Re: [zfs-discuss] Apple Time Machine

2006-08-07 Thread David J. Orman
Reading that site, it sounds EXACTLY like snapshots. It doesn't sound like it
requires a second disk; it just gives you the option of backing up to one.
Sounds like it snapshots once a day (configurable) and then "sends" the
snapshot to another drive/server if you request it to do so. Looks like they
just made snapshots accessible to desktop users. Pretty impressive how they did
the GUI work too.
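
In ZFS terms, the rough equivalent would be something like this (dataset and
host names are made up):

  # take the daily snapshot, then push it to the backup box
  zfs snapshot tank/home@today
  zfs send tank/home@today | ssh backupbox zfs receive backup/home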

- Original Message -
From: Eric Schrock <[EMAIL PROTECTED]>
Date: Monday, August 7, 2006 8:55 am
Subject: Re: [zfs-discuss] Apple Time Machine
To: Tao Chen <[EMAIL PROTECTED]>
Cc: ZFS Discussions 

> There are some more details here:
> 
> http://www.apple.com/macosx/leopard/timemachine.html
> 
> In particular, the backups are done to a separate drive.  This means
> that they can't be using traditional COW techniques (not that such a
> thing is possible with HSFS), so it's unclear what kind of performance
> impact this would have on your machine.  I'm sure we'll hear some
> details as people actually get their hands on the software.
> 
> - Eric
> 
> --
> Eric Schrock, Solaris Kernel Development
> http://blogs.sun.com/eschrock


Re: [zfs-discuss] Apple Time Machine

2006-08-07 Thread David J. Orman
Yeah, we need more information.

However, the Time Machine browser might very well just be a fancy browser for a
.zfs type setup, much like Solaris has - just with GUI splash all over it. I'm
curious about the underlying implementation. I wonder if they did all this
sticking with HFS+ or if they actually migrated to ZFS. The next few weeks
should be interesting as people get hold of the dev copies.
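
On the Solaris side, the rough equivalent of that browser is just poking around
the .zfs directory (paths made up):

  # list available snapshots and pull a file back out of one
  ls /export/home/.zfs/snapshot/
  cp /export/home/.zfs/snapshot/monday/report.txt ~/report.txt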

David

- Original Message -
From: Joseph Mocker <[EMAIL PROTECTED]>
Date: Monday, August 7, 2006 9:04 am
Subject: Re: [zfs-discuss] Apple Time Machine
To: "David J. Orman" <[EMAIL PROTECTED]>
Cc: Eric Schrock <[EMAIL PROTECTED]>, ZFS Discussions 


> Well, it's hard to tell from the description whether the "Time Machine
> browser" is the only way you can get at previous versions of files
> before you restore them. If so, this is somewhat different than
> snapshots.
> 
> --joe
> 
> David J. Orman wrote:
> > Reading that site, it sounds EXACTLY like snapshots. It doesn't sound
> > like it requires a second disk; it just gives you the option of backing
> > up to one. Sounds like it snapshots once a day (configurable) and then
> > "sends" the snapshot to another drive/server if you request it to do so.
> > Looks like they just made snapshots accessible to desktop users. Pretty
> > impressive how they did the GUI work too.
> >
> > - Original Message -
> > From: Eric Schrock <[EMAIL PROTECTED]>
> > Date: Monday, August 7, 2006 8:55 am
> > Subject: Re: [zfs-discuss] Apple Time Machine
> > To: Tao Chen <[EMAIL PROTECTED]>
> > Cc: ZFS Discussions 
> >
> >   
> >> There are some more details here:
> >>
> >> http://www.apple.com/macosx/leopard/timemachine.html
> >>
> >> In particular, the backups are done to a separate drive.  This means
> >> that they can't be using traditional COW techniques (not that such a
> >> thing is possible with HSFS), so it's unclear what kind of performance
> >> impact this would have on your machine.  I'm sure we'll hear some
> >> details as people actually get their hands on the software.
> >>
> >> - Eric
> >>
> >> --
> >> Eric Schrock, Solaris Kernel Development
> >> http://blogs.sun.com/eschrock


Re: [zfs-discuss] Apple Time Machine

2006-08-07 Thread David J. Orman
> David Magda wrote:
> > Well, they've ported Dtrace:
> > 
> > "..now built into Mac OS X Leopard. Xray. Because it’s 2006."
> 
> Uh right and they're actually shipping it in 2007. Apple marketing. 
> Anyone want to start printing t-shirts:
> 
> "DTrace & Time Machine in OpenSolaris. Because we had it in 2005."
> 

Is there a GUI frontend grandma can use for ZFS snapshot management/file
rollbacks? Is there an IDE which integrates DTrace support? I realize this
isn't what OSOL is targeting (servers, from all the talk I see, are the main
priority for OSOL devs), but it's what will get the most visibility. Joe Blow
isn't going to have a clue what OpenSolaris is, probably not even Solaris.
They'll know a Mac when they see one, though. :)
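
The underlying operations are simple enough - rolling a filesystem back to a
snapshot is just something like this (dataset and snapshot names made up):

  zfs rollback tank/home@yesterday

What's missing is the grandma-friendly wrapper around it.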

Perhaps the OSOL project could *learn* from "Apple Marketing", because to be
quite frank, they are doing VERY well. :) Emulating their wise choices would be
a good thing for OSOL (no need to falsify dates, of course... although
arguably, since the dev previews going out have DTrace/Time Machine in them,
2006 is correct. OSOL is very much a "dev preview" type of OS - not as unstable
as the 10.5 preview, undoubtedly, but still very much alike.)


> (actually did they give OpenSolaris a name check at all when they 
> mentioned DTrace ?)

Nope, not that I can see. Apple's pretty notorious for that kind of
"oversight". I used to work for them; I know firsthand how rarely hat-tipping
occurs.

> > http://www.apple.com/macosx/leopard/xcode.html
> 
> Hrm. I note with interest the bit about "project snapshots":
> 
> -
> Record the state of your project anytime, and restore it instantly.
> Experiment with new features without spending time or brain cells
> committing them to a source control system. Like saving a game in
> Civilization 4, Xcode 3.0 lets you go back in time without
> repercussions.
> -
> 
> That's sounding more and more like ZFS to me, and as others have said,
> the devil's in the details (I'm paraphrasing), but I'm only speculating
> here - don't have anything concrete.
> 

I'm just curious whether they added ZFS-like functionality to HFS+ or actually
USED ZFS. I'd be MUCH happier if they USED ZFS, because I've had a LOT of
volume corruption with HFS+.

Gumakalakafufu,
David


Re: [zfs-discuss] Apple Time Machine

2006-08-07 Thread David J. Orman
 
> Apple just released the Darwin Kernel code "xnu-792-10.96"
> the equivalent of 10.4.7 for intel machines.
> 
> -- Robert.

Really? How odd. That seems counter-intuitive given this news:

http://opendarwin.org/en/news/shutdown.html


Re: [zfs-discuss] ZFS/Thumper experiences

2006-08-07 Thread David J. Orman
Thanks, interesting read. It'll be nice to see the actual results if Sun ever 
publishes them. 

Cheers,
David

- Original Message -
From: Adrian Cockcroft <[EMAIL PROTECTED]>
Date: Monday, August 7, 2006 3:23 pm
Subject: [zfs-discuss] ZFS/Thumper experiences
To: zfs-discuss@opensolaris.org

> Dave Fisk and I spent some time evaluating Thumper and ZFS as part 
> of the beta program. We collected tons of data and results that we 
> fed back to Sun. I just blogged a short summary at 
> http://perfcap.blogspot.com and we are waiting for the final 
> performance fixes, and some spare time to do a retest...
> 
> Cheers Adrian
> 
> 


[zfs-discuss] External drive enclosures + Sun Server for mass storage

2007-01-19 Thread David J. Orman
Hi,

I'm looking at Sun's 1U x64 server line, and at most they support two drives. 
This is fine for the root OS install, but obviously not sufficient for many 
users.

Specifically, I am looking at the X2200 M2: http://www.sun.com/servers/x64/x2200/

It only has "Riser card assembly with two internal 64-bit, 8-lane, low-profile, 
half length PCI-Express slots" for expansion.

What I'm looking for is a SAS/SATA card that would allow me to add an external 
SATA enclosure (or some such device) to add storage. The supported list on the 
HCL is pretty slim, and I see no PCI-E stuff. A card that supports SAS would be 
*ideal*, but I can settle for normal SATA too.

So, anybody have any good suggestions for these two things:

#1 - SAS/SATA PCI-E card that would work with the Sun X2200M2.
#2 - Rack-mountable external enclosure for SAS/SATA drives, supporting hot swap 
of drives.

Basically, I'm trying to get around using Sun's extremely expensive storage 
solutions while waiting on them to release something reasonable now that ZFS 
exists.

Cheers,
David
 
 


[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage

2007-01-20 Thread David J. Orman
> Hi David,
> 
> I don't know if your company qualifies as a startup
> under Sun's regs
> but you can get an X4500/Thumper for $24,000 under
> this program:
> http://www.sun.com/emrkt/startupessentials/
> 
> Best Regards,
> Jason

I'm already a part of the Startup Essentials program. Perhaps I should have
been clearer - my apologies - I am not looking for 48 drives' worth of storage.
That is beyond our means to purchase at this point, regardless of the $/GB. I
do agree it is quite a good deal.

I was talking about the huge gap in Sun's storage lineup for the middle ground.
While $24,000 is a wonderful deal, it's absolute overkill for what I'm thinking
about doing. I was looking for something more in the 6-8 drive range.

David
 
 


[zfs-discuss] Re: External drive enclosures + Sun Server for mass storage

2007-01-20 Thread David J. Orman
> On Fri, 19 Jan 2007, Frank Cusack wrote:
> 
> > But x4100/x4200 only accept expensive 2.5" SAS drives, which have
> > small capacities.  [...]
> 
> ... and only 2 or 4 drives each.  Hence my blog entry a while back,
> wishing for a Sun-badged 1U SAS JBOD with room for 8 drives.  I'm
> amazed that Sun hasn't got a product to fill this obvious (to me at
> least) hole in their storage catalogue.
> 
> -- 
> Rich Teer, SCSA, SCNA, SCSECA, OpenSolaris CAB member
> 
> President,
> Rite Online Inc.
> 
> Voice: +1 (250) 979-1638
> URL: http://www.rite-group.com/rich

This is exactly what I am looking for. I apparently was not clear in my
original post: I am looking for a 6-8 drive external solution to tie into Sun
servers. The existing Sun solutions in this range are very expensive; for
instance, the 3511 is ~$37,000 for 12 x 500 GB drives.

I can buy good-quality Seagate drives for $200 each. That comes to a grand
total of $2,400. Somehow I doubt the enclosure/drive controllers are worth the
remaining ~$34,000. It's an insane markup.

That's why I was asking about an external JBOD solution. The Sun servers I've
looked at are all priced excellently, and I'd love to use them - but the
storage solutions are a bit crazy. Not to mention, I don't want to get tied
into FC, seeing as 10GbE is around the corner. I'd rather use some kind of
external interface that's reasonably priced.

On that note, I've recently read that the 1U Sun servers might not have
hot-swappable disk drives... is this really true? That makes this whole plan
silly; I could just go buy a Supermicro machine, save money all around, and
have the 6-8 drives in the same box as the server.

Thanks,
David
 
 


[zfs-discuss] Re: Re: External drive enclosures + Sun Server for

2007-01-22 Thread David J. Orman
> Hi Frank,
> 
> I'm sure Richard will check it out. He's a very good
> guy and not
> trying to jerk you around. I'm sure the hostility
> isn't warranted. :-)
> 
> Best Regards,
> Jason

I'm very confused now. Do the X2200 M2s support "hot plug" of drives or not? I
can't believe it's that confusing/difficult - they do or they don't. I don't
care whether I can just yank a drive out of a running system with no problems,
but I *do* need to be able to swap a failed disk in a mirror without downtime.
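
To be clear, the ZFS side of a swap seems straightforward - roughly, with
made-up pool/device names:

  # after physically replacing the failed disk in the mirror
  zpool replace tank c1t1d0

The question is purely whether the chassis/controller let me pull the dead
drive while the box is running.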

Does Sun not have an official word on this? I'm rapidly losing faith over the
lack of a definitive answer to this question.

Along these same lines, what is the roadmap for ZFS on boot disks? I haven't
heard anything about it in quite some time, and Google doesn't yield any
current information either.
 
 


[zfs-discuss] Re: External drive enclosures + Sun Server for mass

2007-01-22 Thread David J. Orman
> Not to be picky, but the X2100 and X2200 series are
> NOT
> designed/targeted for disk serving (they don't even
> have redundant power
> supplies).  They're compute-boxes.  The X4100/X4200
> are what you are
> looking for to get a flexible box more oriented
> towards disk i/o and
> expansion.

I don't see those as being any better suited to external disks, other than:

#1 - They have the capacity for redundant PSUs, which is irrelevant to my needs.
#2 - They only have PCI Express slots, and I can't find any good external SATA
interface cards on PCI Express.

I can't wrap my head around the idea that I should buy a lot more than I need
when it still doesn't serve my purposes. The 4 disks in an X4100 still aren't
enough, and the machine is a fair amount more costly. I just need mirrored boot
drives and an external disk array.

> That said (if you're set on an X2200 M2), you are
> probably better off
> getting a PCI-E SCSI controller, and then attaching
> it to an external
> SCSI->SATA JBOD.  There are plenty of external JBODs
> out there which use
> Ultra320/Ultra160 as a host interface and SATA as a
> drive interface.
> Sun will sell you a supported SCSI controller with
> the X2200 M2 (the
> "Sun StorageTek PCI-E Dual Channel Ultra320 SCSI
> HBA").
>
> SCSI is far better for a host attachment mechanism
> than eSATA if you
> plan on doing more than a couple of drives, which it
> sounds like you
> are. While the SCSI HBA is going to cost quite a bit
> more than an eSATA
> HBA, the external JBODs run about the same, and the
> total difference is
> going to be $300 or so across the whole setup (which
> will cost you $5000
> or more fully populated). So the cost to use SCSI vs
> eSATA as the host-
> attach is a rounding error.

I understand your comments in some ways; in others I do not. It sounds like
we're moving backwards in time. Exactly why is SCSI "better" than SAS/SATA for
external devices? In my experience (with other OSes/hardware platforms) the
opposite is true. A nice SAS/SATA controller with external ports (especially
those that allow multiple SAS/SATA drives via one cable, whichever tech you
use) works wonderfully for me, and I get a nice thin/clean cable which makes
cable management much more "enjoyable" in higher-density situations.
 
I also don't agree with the logic of "just spend a mere $300 extra to use older
technology!"

$300 may not be much to a large business, but things like this nickel-and-dime
small business owners. There are a lot of things I'd rather spend $300 on than
an expensive SCSI HBA which offers no advantages over a SAS counterpart - in
fact, it offers disadvantages instead.

Your input is of course highly valued, and it's quite possible I'm missing an
important piece of the puzzle somewhere here, but I am not convinced this is
the ideal solution - it seems more like a "stick with the old stuff, it's
easier" solution, which I am very much against.

Thanks,
David
 
 


[zfs-discuss] Re: Re: Re: External drive enclosures + Sun Server for

2007-01-22 Thread David J. Orman
> On January 22, 2007 11:19:40 AM -0800 "David J. Orman"
> <[EMAIL PROTECTED]> wrote:
> > I'm very confused now. Do the X2200 M2s support "hot plug" of drives or
> > not? I can't believe it's that confusing/difficult - they do or they
> > don't.
> 
> Running Solaris, they do not.

Wow. What was/is Sun thinking here? Glad I happened to ask the question - this
makes the X2* series a total waste to purchase.
 
> > I don't care whether I can just yank a drive out of a running system
> > with no problems, but I *do* need to be able to swap a failed disk in
> > a mirror without downtime.
> 
> Then the x2100/x2200 is not for you in a standard
> configuration.  You might
> be able to find a PCI-E sata card and use that
> instead of the onboard SATA.
> I'm hoping to find such a card.
> 

I'm not going to pay for hardware that can't handle very basic things such as
mirrored boot drives on the vendor-provided OS. That's insane.

Guess it's time to investigate Supermicro and Tyan solutions, Startup
Essentials program or not - that hardware just makes no sense.

Who do I gripe to about this (we're starting to stray from discussion pertinent
to this list...)? Would I gripe to my sales rep?

Thanks for the clarity,
David
 
 


[zfs-discuss] Re: Re: External drive enclosures + Sun Server for mass

2007-01-22 Thread David J. Orman
> Hi David,
> 
> Depending on the I/O you're doing the X4100/X4200 are
> much better
> suited because of the dual HyperTransport buses. As a
> storage box with
> GigE outputs you've got a lot more I/O capacity with
> two HT buses than
> one. That plus the X4100 is just a more solid box.

That much makes sense, thanks for clearing that up.

> The X2100 M2 while
> a vast improvement over the X2100 in terms of
> reliability and
> features, is still an OEM'd whitebox. We use the
> X2100 M2s for
> application servers, but for anything that needs
> solid reliability or
> I/O we go Galaxy.

Ahh. That explains a lot. Thank you once again!

Sounds like the X2* is the red-headed stepchild of Sun's product line. They
should slap disclaimers on the product information pages so we know better than
to buy into something that doesn't fully function.

Still unclear on the SAS/SATA solutions, but hopefully that will get sorted out
further along in the thread.

Cheers,
David
 
 


[zfs-discuss] Re: Re: Re: Re: External drive enclosures + Sun

2007-01-22 Thread David J. Orman
> I know it seems ridiculous to HAVE to buy a 3rd party
> card, but come
> on it is only $50 or so.  Assuming you don't need
> both pci slots for
> other uses.

I do. Two would have gone to external access for a JBOD (if that ever gets 
sorted out, haha) - most external adapters seem to support 4 disks.

> I personally wouldn't want to deal with "PC" hardware
> suppliers directly.

Neither would I, hence looking to Sun. :)

> Putting together and maintaining those kinds of
> systems is a PITA. 

Well, the Supermicro and Tyan systems generally are not.

> The
> $50 is worth it.  Assuming it will work. 

Herein lies the problem, more following...

> Especially
> under the startup
> program you're going to have as good or better prices
> from Sun,

With the program, the prices are still more than I would pay for
Supermicro/Tyan, but they are acceptably higher since the integration/support
would be much better, of course. Except that this does not seem to be the case
with the X2* series.

> and
> good support.

Here is the big problem. I'd be buying a piece of Sun hardware specifically for
this reason, already paying more (even with the Startup Essentials program) -
but do you think Sun is going to support that third-party SAS/SATA controller I
bought? If something doesn't work, or later breaks (for example, the driver
disappears or breaks in a later version of Solaris), what will I do then?
Nothing. :) Might as well buy whitebox if I'm going to build the system out in
a whitebox way. ;)

I'd much prefer Sun products, however - I just expect them to support Sun's
flagship OS, and to be supported fully. I'm going to look into the X4* series,
assuming they don't have such problems with supported boot disk mirroring/hot
plugging/etc.

Thanks,
David
 
 


[zfs-discuss] Re: Re: Re: Re: Re: External drive enclosures + Sun

2007-01-22 Thread David J. Orman
> You can't actually use those adapters in the
> x2100/x2200 or even the
> x4100/x4200.  The slots are "MD2" low profile slots
> and the 4 port adapters
> require a full height slot.  Even the x4600 only has
> MD2 slots.  So you can
> only use 2 port adapters.  I think there are esata
> cards that use the
> infiniband (SAS style) connector, which will fit in
> an MD2 slot and still
> access 4 drives, but I'm not aware of any that
> Solaris supports.

Fair enough. :)

> Unfortunately, Solaris does not support SATA port
> multipliers (yet) so
> I think you're pretty limited in how many esata
> drives you can connect.

Gotcha.

> External SAS is pretty much a non-starter on Solaris
> (today) so I think
> you're left with iscsi or FC if you need more than
> just a few drives and
> you want to use Sun servers instead of building your
> own.

iSCSI is interesting to me - are there any JBOD iSCSI external arrays that
would allow me to use SAS/SATA drives? I'd actually prefer this to eSATA, as
network cable is even more easily dealt with. Toss in one of the dual/quad
gigabit cards and run iSCSI to a JBOD filled with SATA/SAS drives == winning
solution for me. 4 Gbit over the network, avoiding all of the expense of FC, is
nothing to sneeze at.

Would this still be workable with ZFS? Ideally, I'd like 8-10 drives, running
RAID-Z2. Know of any products out there I should be looking at, in terms of the
hardware enclosure/iSCSI interface for the drives?
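
My rough understanding of the Solaris side, once such a box exported its disks
as iSCSI targets, is something like this - addresses and device names are made
up, untested:

  # point the initiator at the array and enable SendTargets discovery
  iscsiadm add discovery-address 192.168.1.50:3260
  iscsiadm modify discovery --sendtargets enable
  devfsadm -i iscsi
  # then build the pool on the discovered LUNs
  zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0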

David
 
 