At the risk of being repetitive:
[i]Why not two separate machines, one for XP, one for zfs/raid?[/i] At today's
network speeds, hooking a cable between those two would give you all the speed
of access to the files in the raid that you want. A suitable ZFS machine could
sit in another room if you want.
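For what it's worth, a gigabit link moves roughly 100 MB/s, which is about
what a single spinning disk delivers anyway. Sharing the pool to the XP box
is only a few commands; a minimal sketch, with "tank/media" standing in for
whatever you actually name things:
  zfs create tank/media
  zfs set sharesmb=on tank/media   # expose the filesystem over CIFS for the XP box
  svcadm enable -r smb/server      # make sure the kernel CIFS service is running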
> Erik Trimble sez:
> Honestly, I've said it before, and I'll say it (yet) again: unless you
> have very stringent power requirements (or some other unusual
> requirement, like very, very low noise), used (or even new-in-box,
> previous generation excess inventory) OEM stuff is far superior to building
> your own from new consumer parts.
> On 2010-Sep-24 00:58:47 +0800, "R.G. Keen"
> wrote:
> > But for me, the likelihood of
> >making a setup or operating mistake in a virtual machine
> >setup server far outweighs the hardware cost to put
> >another physical machine on the ground.
>
I should clarify. I was addressing just the issue of
virtualizing, not what the complete set of things to
do to prevent data loss is.
> 2010/9/19 R.G. Keen
> > and last-generation hardware is very, very cheap.
> Yes, of course, it is. But, actually, is that a true
> statement?
I have another question to add to the two you already asked and answered.
Why not two separate machines, one for XP, one for zfs/raid? At today's
network speeds, hooking a cable between those two would give you all the speed
of access to the files in the raid that you want. A suitable ZFS machine could
sit in another room if you want.
> Hi Craig,
> Don't use the p* devices for your storage pools. They
> represent the larger fdisk partition.
>
> Use the d* devices instead, like this example below:
Good advice, something I wondered about too.
However, aside from my having guessed right once (I think...) I have no clue
why this is.
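For anyone else wondering, the naming difference shows up right on the zpool
command line. A minimal sketch (the c/t/d numbers are placeholders for
whatever format(1M) shows on your box):
  zpool create tank raidz c0t1d0 c0t2d0 c0t3d0   # d* devices: ZFS gets the whole disk
  # not: ... raidz c0t1d0p0 c0t2d0p0 ...         # p* devices are the fdisk partitions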
I'm answering my own question, having just decided to try it. Yes, anything you
want to persist beyond reboot with EON that's not in the zfs pools has to have
an image update done before shutdown.
I had this Doh! moment after I did the trial. Of course all the system config
has to be on the system image, so none of it survives a reboot unless the
image gets updated first.
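In practice the routine looks something like this (the update script name and
image path here follow the EON docs; verify them against your own build):
  useradd -m -d /export/home/craig craig      # any config change: users, passwords, shares...
  passwd craig
  /usr/bin/updimg.sh /mnt/eon0/boot/x86.eon   # write the running config back into the boot image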
I've run into something odd. I find that my EON setup loses all user ids and
passwords when rebooted. It does import the zpool.
Do I need to update the image every time I add users?
Offhand, I'd say EON
http://sites.google.com/site/eonstorage/
This is probably the best answer right now. It will be even better when they
get a web administration GUI running. Some variant of freenas on freebsd is
also possible.
Opensolaris is missing a good opportunity to expand its user base.
Hmmm.. Tried to post this before, but it doesn't appear. I'll try again.
I've been discussing the concept of a reference design for Opensolaris systems
with a few people. This comes very close to a system you can "just buy".
I spent about six months burning up google and pestering people here about it.
I finally achieved critical mass on enough parts to put my zfs server together.
It basically ran the first time, any non-function being due to my own
misunderstandings. I wanted to issue a thank you to those of you who suffered
through my questions and pointed me in the right direction. Many pieces of
the puzzle came from this list.
> I think ZFS has no specific mechanisms in respect to
> RAM integrity. It will just count on a healthy and
> robust foundation for any component in the machine.
I'd really like to understand what OS does with respect to ECC. Anyone who
does understand the internal operation and can comment would be most welcome.
I did some reading on DDRn ram and controller chips and how they do ECC.
Sorry, but I was moderately incorrect. Here's closer to what happens.
DDRn memory has no ECC logic on the DIMMs. What it has is an additional eight
bits of memory for each 64-bit read/write operation. That is, an ECC DIMM
stores 72 bits for every 64-bit word, and the memory controller does the
actual checking and correcting.
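The check-bit arithmetic is the standard SECDED calculation: a Hamming code
that corrects single-bit errors in m data bits needs r check bits satisfying
2^r >= m + r + 1. For m = 64 that gives r = 7, and one more overall parity
bit buys detection of double-bit errors: 7 + 1 = 8 extra bits, hence the 72
bits stored per 64-bit word.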
Yay! Something where I can contribute! I am a hardware
guy trying to live in a software world, but I think I know
how this one works.
> The reason is that the vendor (ACER) of the mainboard
> says it is not supported, and I can not get into the
> bios any more, but osol boots fine and sees 8GB.
>
This is meant with the sincerest urge to help.
I have a similar situation, and pondered much the same issues. However, I'm
extremely short of time as it is. I decided that my needs would be best served
by leaving the data on those backup DVDs and CDs in case I needed it. The "in case
I need i
Good observation. It seems that I'm only keeping ahead of the folks in this
forum by running as hard as I can.
I just bought the sheet aluminum for making my drive cages. I'm going for the
drives-in-a-cage setup, but I'm also floating each drive on vinyl (and hence
dissipative, not resonant) vibration mounts.
Re: [zfs-discuss] Best 1.5TB drives for consumer RAID?
On January 24, 2010 Frank wrote:
>Sorry I missed this part of your post before responding just a minute ago.
On January 24, 2010 Frank Cusack wrote:
>That's the point I was arguing against.
Yes, that's correct.
>You did not respond to my argument, and you don't have to now,
Thanks for the permission. I'll need that someday.
>but as long as you keep stating this without correcting me I will keep
>responding.
Let me start this off with a personal philosophy statement. In technical
matters, there is almost never a “best”. There is only the best compromise
given the objective you’re trying to achieve.
If you change the objectives even slightly, you may get wildly different “best
compromise” answers.
Interesting question.
The answer I came to, perhaps through lack of information and experience, is
that there isn't a best 1.5tb drive. I decided that 1.5tb is too big, and that
it's better to use more and smaller devices so I could get to raidz3.
The reasoning came after reading the case for triple-parity RAID.
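To put rough, purely illustrative numbers on it: eight 750GB drives in raidz3
yield (8 - 3) x 750GB = ~3.75TB usable and survive any three simultaneous
drive failures, while four 1.5TB drives in raidz1 yield ~4.5TB but are lost on
the second failure. Trading some capacity for that extra redundancy was the
point.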
And I agree as well. WD was about to get upwards of $500-$700 of my money, and
is now getting zero; this issue alone is moving me to look harder for other
drives.
I'm sure a WD rep would tell us about how there are extra unseen goodies in the
RE line. Maybe.
Well, there had to be some reason that they had enough of them come back to run
a "recertifying" program. 8-)
I rather expected something of that sort; thanks for doing the homework for me!
I appreciate the help.
I probably won't ever trust these drives; they were just convenient for the
test.
One reason I was so interested in this issue was the doubled price of "raid
enabled" disks.
However, I realized that I am doing the initial proving, not production - even
if personal - of the system I'm building. So for that purpose, an array of
smaller and cheaper disks might be good.
> Richard Elling wrote:
> Perhaps I am not being clear. If a disk is really dead, then
> there are several different failure modes that can be responsible.
> For example, if a disk does not respond to selection, then it
> is diagnosed as failed very quickly. But that is not the TLER
> case. The T
> On Dec 31, 2009, at 6:14 PM, Richard Elling wrote:
> Some nits:
> disks aren't marked as semi-bad, but if ZFS has trouble with a
> block, it will try to not use the block again. So there are two levels
> of recovery at work: whole device and block.
Ah. I hadn't found that yet.
> The "one more an
> On Thu, 31 Dec 2009, Bob Friesenhahn wrote:
> I like the nice and short answer from this "Bob
> Friesen" fellow the
> best. :-)
It was succinct, wasn't it? 8-)
Sorry - I pulled the attribution from the ID, not the
signature which was waiting below. DOH!
When you say:
> It does not really matter
I'm in full overthink/overresearch mode on this issue, preparatory to ordering
disks for my OS/zfs NAS build. So bear with me. I've been reading manuals and
code, but it's hard for me to come up to speed on a new OS quickly.
The question(s) underlying this thread seem to be:
(1) Does zfs raidz/
> FMA (not ZFS, directly) looks for a number of
> failures over a period of time.
> By default that is 10 failures in 10 minutes. If you
> have an error that trips
> on TLER, the best it can see is 2-3 failures in 10
> minutes. The symptom
> you will see is that when these long timeouts happen,
>
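The FMA side is visible from userland, by the way; these standard Solaris
commands show what the fault manager has seen and decided (output details
vary by build):
  fmdump -e      # log of incoming error telemetry (ereports)
  fmadm faulty   # resources FMA has actually diagnosed as faulty
  fmstat         # per-module statistics for the diagnosis engines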
I didn't see "remove a simple device" anywhere in there.
Is it:
too hard to even contemplate doing,
or
too silly a thing to do to even consider letting that happen
or
too stupid a question to even consider
or
too easy and straightforward to do the procedure I see recommended (export the
whole pool)?
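If it comes to that, moving the data off and rebuilding is the usual answer.
A minimal sketch, assuming a second pool "newtank" with enough room for a
full copy:
  zfs snapshot -r tank@migrate                         # freeze everything, recursively
  zfs send -R tank@migrate | zfs receive -dF newtank   # replicate the whole pool
  zpool destroy tank                                   # only after verifying the copy!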
This is an interesting discussion. It appears that there is indeed some work to
be done with manipulating spin up/down on subsections of an array, etc.
However, in terms of cost/performance for small systems, it may be simpler to
solve this with less programming and more hardware.
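The cost arithmetic is rough but telling (illustrative numbers only): a
5400RPM drive idles around 4-5W, so eight of them burn roughly 35-40W around
the clock. At $0.10/kWh that's 40W x 8760h = ~350kWh, or about $35 a year;
spin-down smarts have to be nearly free before they pay for themselves.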
Jim sez:
> Like many others, I've come close to making a home
> NAS server based on
> ZFS and OpenSolaris. While this is not an enterprise
> solution with high IOPS
> expectation, but rather a low-power system for
> storing everything I have,
> I plan on cramming in some 6-10 5400RPM "Green"
> drives.
Most ECC setups are as you describe. The memory hardware detects and corrects
all 1-bit errors, and detects all two-bit errors, on its own. What ... should
... happen is that the OS gets an interrupt when this happens so it has the
opportunity to note the error in logs and report it to higher levels of
software.
> > On 11/23/09 10:10 AM, David Dyer-Bennet wrote:
> Lots of storage servers, outside the big corporate
>environment, can't
> afford full-blown redundancy. For many of us, we're
> just taking the first
> steps into using any kind of redundancy at all in
> disks for our file
> servers. Full enter
Your point is well taken, Frank, and I agree - there has to be some serious
design work for reliability. My background includes both hardware design for
reliability and field service engineering support, so the issues are not at all
foreign to me. Nor are the limits of something like a volunteer effort.
Thank you Al! That's exactly the kind of information
I needed. I very much appreciate the help.
> It would be helpful to give us a broad description of
> what type of
> data you're planning on storing. Small files, large
> files, required
> capacity etc. and we can probably make some
> specific recommendations.
> Someone can correct me if I'm wrong... but I believe
> that opensolaris can do the ECC scrubbing in software
> even if the motherboard BIOS doesn't support
> it.
That's interesting - I didn't run into that in the background search.
I suspect that some motherboards just accept ECC memory but never check the
extra bits.
Thanks for replying! I did look into that. The AMD design was my second choice.
It was:
AMD Athlon II X2 240e (to get low power; the dual core and lack of L3 help
there)
ASUS motherboard (see considerations below)
Cheap VGA? LAN card? This is the mire that ultimately bogged down this one.
With apologies for clogging up the forum with beginner questions -
I'm trying to figure out how to build a home zfs server. Common question. In
the last two months of reading the net and here, I've found many answers, none
of which would convince me to part with the $800-$1K to do it.