On Sat, Apr 10, 2010 at 12:56:04PM -0500, Tim Cook wrote:
> At that price, for the 5-in-3 at least, I'd go with supermicro. For $20
> more, you get what appears to be a far more solid enclosure.
My intent with that link was only to show an example, not make a
recommendation. I'm glad others have
On Fri, Apr 9, 2010 at 9:31 PM, Eric D. Mudama wrote:
> On Sat, Apr 10 at 7:22, Daniel Carosone wrote:
>
>> On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote:
>>
>>> If I could find a reasonable backup method that avoided external
>>> enclosures altogether, I would take that route.
On Fri, Apr 9, 2010 at 6:14 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Eric Andersen
>>
>> I backup my pool to 2 external 2TB drives that are simply striped using
>> zfs send/receive followed by a scrub.
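For anyone wanting to replicate that setup, the send/receive-plus-scrub cycle looks roughly like the sketch below; the pool names (tank, backup) and the snapshot name are placeholders, not the poster's actual layout:

```shell
# Snapshot the source pool recursively, then replicate the whole
# tree to the backup pool (the two striped external drives).
zfs snapshot -r tank@backup-2010-04-09
zfs send -R tank@backup-2010-04-09 | zfs receive -Fdu backup

# Verify the copy by scrubbing the backup pool afterwards.
zpool scrub backup
zpool status backup    # check for errors once the scrub completes
```

An incremental send (`zfs send -RI` from the previous snapshot) would avoid retransmitting the whole pool on every run.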
On Fri, Apr 09, 2010 at 10:21:08AM -0700, Eric Andersen wrote:
> If I could find a reasonable backup method that avoided external
> enclosures altogether, I would take that route.
I'm tending to like bare drives.
If you have the chassis space, there are 5-in-3 bays that don't need
extra driv
> I am doing something very similar. I backup to external USB drives, which I
> leave connected to the server, obviously, for days at a time ... zfs send
> followed by scrub. You might want to consider eSATA instead of USB. Just a
> suggestion. You should be able to go about 4x-6x faster than 27MB/s.
You may be absolutely right. CPU clock frequency certainly has hit a wall at
around 4GHz. However, this hasn't stopped CPUs from getting progressively
faster. I know this is mixing apples and oranges, but my point is that no
matter what limits or barriers computing technology hits, someone co
No idea about the build quality, but is this the sort of thing you're looking
for?
Not cheap, integrated RAID (sigh), but one cable only
http://www.pc-pitstop.com/das/fit-500.asp
Cheap, simple, 4 eSATA connections on one box
http://www.pc-pitstop.com/sata_enclosures/scsat4eb.asp
Still cheap, us
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Eric Andersen
>
> I backup my pool to 2 external 2TB drives that are simply striped using
> zfs send/receive followed by a scrub. As of right now, I only have
> 1.58TB of actual data. ZFS sen
Eric Andersen wrote:
I find Erik Trimble's statements regarding a 1 TB limit on drives to be very
bold. I don't have the knowledge or the inclination to argue the
point, but I am betting that we will continue to see advances in storage
technology on par with what we have seen in t
On Apr 8, 2010, at 9:06 PM, Daniel Carosone wrote:
> On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote:
>> On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
>>>
>>> As for error rates, this is something zfs should not be afraid
>>> of. Indeed, many of us would be happy to get drives
I thought I might chime in with my thoughts and experiences. For starters, I
am very new to both OpenSolaris and ZFS, so take anything I say with a grain of
salt. I have a home media server / backup server very similar to what the OP
is looking for. I am currently using 4 x 1TB and 4 x 2TB drives
On Thu, Apr 08, 2010 at 08:36:43PM -0700, Richard Elling wrote:
> On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
> >
> > As for error rates, this is something zfs should not be afraid
> > of. Indeed, many of us would be happy to get drives with less internal
> > ECC overhead and complexity for
On Apr 8, 2010, at 6:19 PM, Daniel Carosone wrote:
>
> As for error rates, this is something zfs should not be afraid
> of. Indeed, many of us would be happy to get drives with less internal
> ECC overhead and complexity for greater capacity, and tolerate the
> resultant higher error rates, specif
On Thu, 8 Apr 2010, Jason S wrote:
One thing I have noticed that seems a little different from my
previous hardware raid controller (Areca) is the data is not
constantly being written to the spindles. For example I am copying
some large files to the array right now (approx 4 gigs a file) and
Well I would like to thank everyone for their comments and ideas.
I finally have this machine up and running with Nexenta Community edition and
am really liking the GUI for administering it. It suits my needs perfectly and
is running very well. I ended up going with 2 X 7 RaidZ2 vdevs in one pool
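For reference, a two-vdev layout like the one described (2 X 7 RaidZ2) is created by listing both raidz2 groups in a single zpool create; the device names below are placeholders:

```shell
# Placeholder device names; check `format` output for the real
# controller/target IDs on your system.
zpool create tank \
    raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
    raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# Both raidz2 vdevs should appear under the same pool:
zpool status tank
```

ZFS stripes writes across the two raidz2 vdevs automatically; there is no separate "RAID 0" step.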
On Thu, Apr 08, 2010 at 03:48:54PM -0700, Erik Trimble wrote:
> Well
To be clear, I don't disagree with you; in fact for a specific part of
the market (at least) and a large part of your commentary, I agree. I
just think you're overstating the case for the rest.
> The problem is (and this i
On 04/9/10 10:48 AM, Erik Trimble wrote:
Well
The problem is (and this isn't just a ZFS issue) that resilver and scrub
times /are/ very bad for >1TB disks. This goes directly to the problem
of redundancy - if you don't really care about resilver/scrub issues,
then you really shouldn't bother
On Fri, 2010-04-09 at 08:07 +1000, Daniel Carosone wrote:
> On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote:
> > Daniel Carosone wrote:
> >> Go with the 2x7 raidz2. When you start to really run out of space,
> >> replace the drives with bigger ones.
> >
> > While that's great in theor
On Thu, Apr 08, 2010 at 12:14:55AM -0700, Erik Trimble wrote:
> Daniel Carosone wrote:
>> Go with the 2x7 raidz2. When you start to really run out of space,
>> replace the drives with bigger ones.
>
> While that's great in theory, there's getting to be a consensus that 1TB
> 7200RPM 3.5" Sata drives are really going to be the last usable capacity.
> "dm" == David Magda writes:
> "bf" == Bob Friesenhahn writes:
dm> OP may also want to look into the multi-platform pkgsrc for
dm> third-party open source software:
+1. jucr.opensolaris.org seems to be based on RPM which is totally
fail. RPM is the oldest, crappiest, most fru
On Apr 8, 2010, at 8:52 AM, Bob Friesenhahn wrote:
> On Thu, 8 Apr 2010, Erik Trimble wrote:
>> While that's great in theory, there's getting to be a consensus that 1TB
>> 7200RPM 3.5" Sata drives are really going to be the last usable capacity.
I doubt that 1TB (or even 1.5TB) 3.5" disks are be
On Thu, 8 Apr 2010, Erik Trimble wrote:
While that's great in theory, there's getting to be a consensus that 1TB
7200RPM 3.5" Sata drives are really going to be the last usable capacity.
Agreed. The 2.5" form factor is rapidly emerging. I see that
enterprise 6-Gb/s SAS drives are available w
Daniel Carosone wrote:
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones. You will run out of space
eventually regardless; this way you can replace 7 at a time, not 14 at
a time. With luck, each replacement will last you long enough that
th
Go with the 2x7 raidz2. When you start to really run out of space,
replace the drives with bigger ones. You will run out of space
eventually regardless; this way you can replace 7 at a time, not 14 at
a time. With luck, each replacement will last you long enough that
the next replacement will c
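A sketch of that replace-seven-at-a-time upgrade, assuming a reasonably recent build with the autoexpand pool property (device names are placeholders):

```shell
# Let the pool grow once every disk in a vdev has been upgraded
# (property available on later OpenSolaris builds).
zpool set autoexpand=on tank

# Swap drives one at a time; let each resilver finish before
# starting the next (placeholder old/new device names).
zpool replace tank c1t0d0 c3t0d0
zpool status tank    # wait for the resilver to complete
# ...repeat for the remaining drives in that raidz2 vdev...
```

Only after the last disk in the vdev has been replaced does the extra capacity become usable.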
On Wed, Apr 7, 2010 at 4:58 PM, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of David Magda
> >
> > If you're going to go with (Open)Solaris, the OP may also want to look
> > into the multi-platform pkgsrc for th
On Wed, Apr 7, 2010 at 5:59 PM, Richard Elling wrote:
> On Apr 7, 2010, at 3:24 PM, Tim Cook wrote:
> > On Wednesday, April 7, 2010, Jason S wrote:
> >> Since I already have Open Solaris installed on the box, I probably won't
> jump over to FreeBSD. However someone has suggested to me to look into
On Apr 7, 2010, at 19:58, Edward Ned Harvey wrote:
From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of David Magda
If you're going to go with (Open)Solaris, the OP may also want to
look
into the multi-platform pkgsrc for third-party open sourc
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of David Magda
>
> If you're going to go with (Open)Solaris, the OP may also want to look
> into the multi-platform pkgsrc for third-party open source software:
>
> http://www.pkgsrc.org/
>
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Chris Dunbar
>
> like to clarify something. If read performance is paramount, am I
> correct in thinking RAIDZ is not the best way to go? Would not the ZFS
> equivalent of RAID 10 (striped mirro
On Wed, Apr 7, 2010 at 4:27 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Wed, 7 Apr 2010, David Magda wrote:
>
>>
>>> It is more straightforward to update a FreeBSD install from source code
>>> because that is the way it is normally delivered. Sometimes this is useful
>>> in or
On Wed, 7 Apr 2010, David Magda wrote:
It is more straightforward to update a FreeBSD install from source code
because that is the way it is normally delivered. Sometimes this is useful
in order to incorporate a fix as soon as possible without needing to wait
for someone to produce binaries.
On Apr 7, 2010, at 3:24 PM, Tim Cook wrote:
> On Wednesday, April 7, 2010, Jason S wrote:
>> Since I already have Open Solaris installed on the box, I probably won't jump
>> over to FreeBSD. However someone has suggested to me to look into
>> www.nexenta.org and I must say it is quite interesting
On Apr 7, 2010, at 16:47, Bob Friesenhahn wrote:
Solaris 10's Live Upgrade (and the OpenSolaris equivalent) is quite
valuable in that it allows you to upgrade the OS without more than a
few minutes of down-time and with a quick fall-back if things don't
work as expected.
It is more straig
On Wednesday, April 7, 2010, Jason S wrote:
> Since I already have Open Solaris installed on the box, I probably won't jump
> over to FreeBSD. However someone has suggested to me to look into
> www.nexenta.org and I must say it is quite interesting. Someone correct me if
> I am wrong but it look
Since I already have Open Solaris installed on the box, I probably won't jump
over to FreeBSD. However someone has suggested to me to look into
www.nexenta.org and I must say it is quite interesting. Someone correct me if I
am wrong but it looks like it is Open Solaris based and has basically
ev
On Wed, 7 Apr 2010, Jason S wrote:
systems that support ZFS. Does anyone have any advice as to whether I
should be considering FreeBSD instead of Open Solaris? Both
operating systems are somewhat foreign to me as I come from the
FreeBSD zfs does clearly work, although it is an older version of
On Wed, Apr 7, 2010 at 1:22 PM, Jason S wrote:
> now you have brought up another question :) I had always assumed that I
> would just use Open Solaris for this file server build, as I had not
> actually done any research in regards to other operating systems that support
> ZFS. Does anyone have a
Freddie,
now you have brought up another question :) I had always assumed that I would
just use Open Solaris for this file server build, as I had not actually done
any research in regards to other operating systems that support ZFS. Does anyone
have any advice as to whether I should be consideri
On Wed, 2010-04-07 at 12:41 -0700, Jason S wrote:
> Ahh,
>
> Thank you for the reply Bob, that is the info I was after. It looks like I
> will be going with the 2 X 7 RaidZ2 option.
>
> And just to clarify, as far as expanding this pool in the future my only
> option is to add another 7 spindle
On Wed, Apr 7 at 12:41, Jason S wrote:
And just to clarify, as far as expanding this pool in the future my
only option is to add another 7 spindle RaidZ2 array, correct?
That is correct, unless you want to use the -f option to force-allow
an asymmetric expansion of your pool.
--eric
--
Eric D.
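A sketch of that expansion, with placeholder device names:

```shell
# Add a second 7-disk raidz2 vdev; ZFS then stripes new writes
# across both vdevs.
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

# A vdev that doesn't match the pool's existing redundancy level
# or width is refused unless force-allowed, e.g.:
zpool add -f tank raidz1 c3t0d0 c3t1d0 c3t2d0
```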
I am booting from a single 74gig WD raptor attached to the motherboards onboard
SATA port.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Ahh,
Thank you for the reply Bob, that is the info I was after. It looks like I will
be going with the 2 X 7 RaidZ2 option.
And just to clarify, as far as expanding this pool in the future my only option
is to add another 7 spindle RaidZ2 array, correct?
Thanks for all the help guys!
On Wed, Apr 7, 2010 at 12:29 PM, Frank Middleton wrote:
> On 04/7/10 03:09 PM, Jason S wrote:
>
>
>> I was actually already planning to get another 4 gigs of ram for the
>> box right away anyway, but thank you for mentioning it! As there
>> appears to be a couple ways to "skin the cat" here I thi
On Wed, Apr 7, 2010 at 12:09 PM, Jason S wrote:
> I was actually already planning to get another 4 gigs of ram for the box
> right away anyway, but thank you for mentioning it! As there appears to be a
> couple ways to "skin the cat" here I think I am going to try both a 14
> spindle RaidZ2 and 2
On Wed, 7 Apr 2010, Chris Dunbar wrote:
More for my own edification than to help Jason (sorry Jason!) I
would like to clarify something. If read performance is paramount,
am I correct in thinking RAIDZ is not the best way to go? Would not
the ZFS equivalent of RAID 10 (striped mirror sets) off
On 04/7/10 03:09 PM, Jason S wrote:
I was actually already planning to get another 4 gigs of ram for the
box right away anyway, but thank you for mentioning it! As there
appears to be a couple ways to "skin the cat" here I think I am going
to try both a 14 spindle RaidZ2 and 2 X 7 RaidZ2 confi
On Wed, 7 Apr 2010, Jason S wrote:
I was actually already planning to get another 4 gigs of ram for the
box right away anyway, but thank you for mentioning it! As there
appears to be a couple ways to "skin the cat" here I think I am
going to try both a 14 spindle RaidZ2 and 2 X 7 RaidZ2 config
Hello,
More for my own edification than to help Jason (sorry Jason!) I would like to
clarify something. If read performance is paramount, am I correct in thinking
RAIDZ is not the best way to go? Would not the ZFS equivalent of RAID 10
(striped mirror sets) offer better read performance? In thi
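For comparison, the "ZFS equivalent of RAID 10" being asked about is simply a pool built from mirror vdevs; device names below are placeholders:

```shell
# Three two-way mirrors; ZFS stripes across the mirror vdevs and
# can service reads from either side of each mirror, which tends
# to favor random-read workloads over raidz.
zpool create tank \
    mirror c1t0d0 c1t1d0 \
    mirror c1t2d0 c1t3d0 \
    mirror c1t4d0 c1t5d0
```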
Thank you for the replies guys!
I was actually already planning to get another 4 gigs of ram for the box right
away anyway, but thank you for mentioning it! As there appears to be a couple
ways to "skin the cat" here I think I am going to try both a 14 spindle RaidZ2
and 2 X 7 RaidZ2 configura
On Wed, 7 Apr 2010, Erik Trimble wrote:
One thing Richard or Bob might be able to answer better is the tradeoff
between getting a cheap/small SSD for L2ARC and buying more RAM. That
is, I don't have a good feel for whether (for your normal usage case),
it would be better to get 8GB of more RAM,
On Wed, 2010-04-07 at 10:40 -0700, Jason S wrote:
> I have been searching this forum and just about every ZFS document I can find
> trying to find the answer to my questions. But I believe the answer I am
> looking for is not going to be documented and is probably best learned from
> experience.
On Wed, 7 Apr 2010, Jason S wrote:
To keep the pool size at 12TB I would have to give up my extra
parity drive going to this 2 array setup and it is concerning as I
have no room for hot spares in this system. So in my mind I am left
with only one other choice and this is going to a 2 X RaidZ2 pool
I have been searching this forum and just about every ZFS document I can find
trying to find the answer to my questions. But I believe the answer I am
looking for is not going to be documented and is probably best learned from
experience.
This is my first time playing around with Open Solaris