These days, I've switched to 2.5" SATA laptop drives for large-storage
requirements.
They cost more per GB than 3.5" drives, but they're still not
horrible ($100 for a 500GB/7200rpm Seagate Momentus). It's also
easier to cram large numbers of them into smaller spaces, so it's easier
On Sun, Jan 24 at 11:44, Dedhi Sujatmiko wrote:
I am curious to know what the normal operating temperature of a
consumer SATA drive is, and what maximum limit I need to watch out
for?
These are my disks' SMART output under FreeNAS 0.7RC2, where my
ambient temperature is 28 C without air conditioning.
On Jan 23, 2010, at 5:06 AM, Simon Breden wrote:
> Thanks a lot.
>
> I'd looked at SO many different RAID boxes and never had a good feeling about
> them from the point of data safety, that when I read the 'A Conversation with
> Jeff Bonwick and Bill Moore – The future of file systems' article
I can't get sharesmb=name= to work. It worked in b130; I'm not sure if
it's broken in 131 or if my machine is being a pain.
Anyway, when I try to do this:
zfs set sharesmb=name=wonslung tank/nas/Wonslung
I get this:
cannot set property for 'tank/nas/Wonslung': 'sharesmb' cannot be set to
I'd like to thank Tim and Cindy at Sun for providing me with a new zfs binary
file that fixed my issue. I was able to get my zpool back! Hurray!
Thank You.
I am curious to know what the normal operating temperature of a
consumer SATA drive is, and what maximum limit I need to watch out
for?
These are my disks SMART output under FreeNAS 0.7RC2, where my ambient
temperature is 28 C without air conditioning.
ad4 476941MB ST35
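For anyone wanting to pull the same numbers themselves, the temperature shows
up as a standard SMART attribute; a minimal sketch, assuming smartmontools is
installed and using the FreeBSD device name from the listing above:
  # Temperature_Celsius (attribute 194) on most consumer SATA drives
  smartctl -A /dev/ad4 | grep -i temperature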
> Just make 'zfs' an alias to your version of it. A
> one-time edit
> of .profile can update that alias.
Sure; write a shell function, and add an alias to it.
And use a quoted command name (or full path) within the function
to get to the real command. Been there, done that.
But to do a good job
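A minimal sketch of that pattern, with the function name and install path as
examples only:
  # in ~/.profile
  myzfs() {
      # any custom pre/post-processing goes here; the full path
      # reaches the real command even if 'zfs' itself is shadowed
      /usr/sbin/zfs "$@"
  }
  alias zfs=myzfs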
Mirko wrote:
Well, I've purchased 5 Barracuda LP 1.5TB drives.
They run very quiet and cool, 5 in a cage, and the vibration is nearly zero.
Reliability? Well, every HDD is unreliable, and every major brand has
problems at the moment, so go for the best bang for the buck.
In my country Seagate has the best
On Fri, Jan 22, 2010 at 04:12:48PM -0500, Miles Nordin wrote:
> w> http://www.csc.liv.ac.uk/~greg/projects/erc/
>
> dead link?
Works for me - this is someone who's written patches for smartctl to
set this feature; these are standardised/documented commands, no
reverse engineering of DOS tool
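For reference, what the patched tool adds is the standard SCT Error Recovery
Control command; later stock smartctl releases expose the same thing roughly
like this (device name is just an example):
  # report the current read/write ERC timers
  smartctl -l scterc /dev/sda
  # set both timers to 7.0 seconds (values are tenths of a second)
  smartctl -l scterc,70,70 /dev/sda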
On Sat, Jan 23, 2010 at 7:57 PM, Frank Cusack wrote:
> On January 23, 2010 6:09:49 PM -0600 Tim Cook wrote:
>
>> When you've got a home system and X amount of dollars
>> to spend, $/GB means absolutely nothing when you need a certain number of
>> drives to have the redundancy you require.
>>
>
>
On January 23, 2010 4:33:59 PM -0800 "Richard L. Hamilton" wrote:
It might be nice if "zfs list" would check an environment variable for
a default list of properties to show (same as the comma-separated list
used with the -o option). If not set, it would use the current default
list; if set, it
On January 23, 2010 6:53:26 PM -0600 Bob Friesenhahn wrote:
On Sat, 23 Jan 2010, Frank Cusack wrote:
Notice that the referenced path is subordinate to the exported zfs
filesystem.
Well, assuming there is a single home zfs filesystem and not a
filesystem-per-user. For filesystem-per-user your
On January 23, 2010 6:09:49 PM -0600 Tim Cook wrote:
When you've got a home system and X amount of dollars
to spend, $/GB means absolutely nothing when you need a certain number of
drives to have the redundancy you require.
Don't you generally need a certain amount of GB? I know I plan my
sto
AIUI, this works as designed.
I think the best practice will be to add the L2ARC to syspool (nee rpool).
However, for current NexentaStor releases, you cannot add cache devices
to syspool.
Earlier I mentioned that this made me nervous. I no longer hold any
reservation against it. It should wor
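For pools that do accept it, attaching an L2ARC device is a one-liner; a
sketch with a hypothetical device name (and, as noted above, current
NexentaStor releases refuse this for syspool):
  zpool add syspool cache c1t2d0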
As I said in another post, it's coming time to build a new storage
platform at home. I'm revisiting all the hardware options and
permutations again, for current kit.
Build 125 added something I was very eager for earlier, SATA
port-multiplier support. Since then, I've seen very little, if any,
> I just took a look at customer feedback on this
> drive here. 36% rate with one star, which I would
> consider alarming. Take a look here, ordered from
> lowest rating to highest rating. Note the recency of
> the comments and the descriptions:
>
Different people vote in different ways for the same thing.
On Sat, 23 Jan 2010, Frank Cusack wrote:
Notice that the referenced path is subordinate to the exported zfs
filesystem.
Well, assuming there is a single home zfs filesystem and not a
filesystem-per-user. For filesystem-per-user your example simply
mounts the correct shared filesystem. Even fo
On Sat, Jan 23, 2010 at 06:39:25PM -0500, Frank Cusack wrote:
> On January 23, 2010 5:17:16 PM -0600 Tim Cook wrote:
>> Smaller devices get you to raid-z3 because they cost less money.
>> Therefore, you can afford to buy more of them.
>
> I sure hope you aren't ever buying for my company! :) :)
>
It might be nice if "zfs list" would check an environment variable for
a default list of properties to show (same as the comma-separated list
used with the -o option). If not set, it would use the current default list;
if set, it would use the value of that environment variable as the list.
I fin
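Until something like that exists, a shell wrapper gets most of the way there;
a rough sketch, with ZFS_LIST_PROPS as a made-up variable name:
  zlist() {
      if [ -n "$ZFS_LIST_PROPS" ]; then
          /usr/sbin/zfs list -o "$ZFS_LIST_PROPS" "$@"
      else
          /usr/sbin/zfs list "$@"
      fi
  }
  # e.g. ZFS_LIST_PROPS=name,used,avail,compressratio zlist -r tank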
On Jan 23, 2010, at 3:47 PM, Frank Cusack wrote:
> On January 23, 2010 1:20:13 PM -0800 Richard Elling
>> My theory is that drives cost $100.
>
> Obviously you're not talking about Sun drives. :)
Don't confuse cost with price :-)
-- richard
On Sat, Jan 23, 2010 at 5:39 PM, Frank Cusack wrote:
> On January 23, 2010 5:17:16 PM -0600 Tim Cook wrote:
>
>> Smaller devices get you to raid-z3 because they cost less money.
>> Therefore, you can afford to buy more of them.
>>
>
> I sure hope you aren't ever buying for my company! :) :)
>
>
Hey Dan,
Thanks for the reply.
Yes, I'd forgotten that it's often the heads that degrade -- something like
lubricant buildup, IIRC.
As well as SMART data, which I must admit to never looking at, presumably scrub
errors are also a good indication of looming trouble due to head problems etc?
As
On January 23, 2010 2:17:12 PM -0600 Bob Friesenhahn wrote:
On Sat, 23 Jan 2010, Frank Cusack wrote:
I thought with NFS4 *on solaris* that clients would follow the zfs
filesystem hierarchy and mount sub-filesystems. That doesn't seem
to be happening and I can't find any documentation on it (e
On January 23, 2010 1:20:13 PM -0800 Richard Elling
My theory is that drives cost $100.
Obviously you're not talking about Sun drives. :)
-frank
On January 23, 2010 5:17:16 PM -0600 Tim Cook wrote:
Smaller devices get you to raid-z3 because they cost less money.
Therefore, you can afford to buy more of them.
I sure hope you aren't ever buying for my company! :) :)
Smaller devices cost more $/GB; i.e., they are more expensive.
-frank
On Sat, Jan 23, 2010 at 11:41 AM, Frank Cusack wrote:
> On January 23, 2010 8:04:50 AM -0800 "R.G. Keen" wrote:
>
>> The answer I came to, perhaps through lack of information and experience,
>> is that there isn't a best 1.5tb drive. I decided that 1.5tb is too big,
>> and that it's better to us
On Sat, Jan 23, 2010 at 09:04:31AM -0800, Simon Breden wrote:
> For resilvering to be required, I presume this will occur mostly in
> the event of a mechanical failure. Soft failures like bad sectors
> will presumably not require resilvering of the whole drive to occur,
> as these types of error ar
On Sat, Jan 23, 2010 at 12:30:01PM -0800, Simon Breden wrote:
> And regarding mirror vdevs etc, I can see the usefulness of being
> able to build a mirror vdev of multiple drives for cases where you
> have really critical data -- e.g. a single 4-drive mirror vdev. I
> suppose regular backups can he
My main server doubles as both a development system and web server for
my work and a media server for home. When I built it in the early days
of ZFS, drive prices were about four times current (500GB was the
bleeding edge) and affordable SSDs were a way off, so I opted for a stripe
of 4 2-way
How does a previously highly rated drive that cost >$100 suddenly become
substandard when it costs <$100?
I can think of possible reasons, but they might not be printable here ;-)
Cheers,
Simon
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
On Jan 23, 2010, at 8:04 AM, R.G. Keen wrote:
> Interesting question.
>
> The answer I came to, perhaps through lack of information and experience, is
> that there isn't a best 1.5tb drive. I decided that 1.5tb is too big, and
> that it's better to use more and smaller devices so I could get to
On Jan 23, 2010, at 12:12 PM, Bob Friesenhahn wrote:
> On Sat, 23 Jan 2010, A. Krijgsman wrote:
>
>> Just to jump in.
>>
>> Did you guys ever consider shortstroking a larger SATA disk?
>> I'm not familiar with this, but read a lot about it;
>>
>> Since the drive cache gets larger on the bigger
Ha ha -- regarding the drive comments, it looks like my humour detector was
working just fine ;-)
And regarding mirror vdevs etc, I can see the usefulness of being able to build
a mirror vdev of multiple drives for cases where you have really critical data
-- e.g. a single 4-drive mirror vdev.
On Sat, 23 Jan 2010, Frank Cusack wrote:
I thought with NFS4 *on solaris* that clients would follow the zfs
filesystem hierarchy and mount sub-filesystems. That doesn't seem
to be happening and I can't find any documentation on it (either way).
Did I only dream up this feature or does it actua
On Sat, 23 Jan 2010, A. Krijgsman wrote:
Just to jump in.
Did you guys ever consider shortstroking a larger SATA disk?
I'm not familiar with this, but I've read a lot about it, since the
drive cache gets larger on the bigger drives.
Bringing a disk back to roughly 25% of its capacity would give a better
cache ratio and less seek time.
On Sat, 23 Jan 2010, Simon Breden wrote:
Why do you consider that model a good drive?
This is a good model of drive to test zfs's redundancy/resiliency
support. It is surely recommended for anyone who does not have the
resources to simulate drive failure.
Why do you like to use mirrors i
Just to jump in.
Did you guys ever consider shortstroking a larger SATA disk?
I'm not familiar with this, but I've read a lot about it, since the
drive cache gets larger on the bigger drives.
Bringing a disk back to roughly 25% of its capacity would give a better
cache ratio and less seek time.
So 2
Frank Cusack wrote:
I thought with NFS4 *on solaris* that clients would follow the zfs
filesystem hierarchy and mount sub-filesystems. That doesn't seem
to be happening and I can't find any documentation on it (either way).
Did I only dream up this feature or does it actually exist? I am
using
zpool status [-xv] [pool] ...
Displays the detailed health status for the given pools.
...
     -x      Only display status for pools that are exhibiting
             errors or are otherwise unavailable.
# zpool status -x
all pools are healthy
# zpool status -x rpool
p
On Sat, Jan 23, 2010 at 8:44 AM, Fletcher Cocquyt wrote:
> Fletcher Cocquyt stanford.edu> writes:
>
>
>> I found this script for replicating zfs data:
>>
> http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
>>
>> - I am testing it out in the lab with b129.
>> It
Hi Bob,
Why do you consider that model a good drive?
Why do you like to use mirrors instead of something like RAID-Z2 / RAID-Z3?
And how many drives do you (recommend to) use within each mirror vdev?
Cheers,
Simon
http://breden.org.uk/2008/03/02/a-home-fileserver-using-zfs/
Mike Gerdts wrote:
On Sat, Jan 23, 2010 at 11:55 AM, John Hoogerdijk wrote:
Mike Gerdts wrote:
On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk wrote:
Is there a way to zero out unused blocks in a pool? I'm looking for ways to
shrink the size of an opensolaris virtualbox VM
On Sat, Jan 23, 2010 at 11:55 AM, John Hoogerdijk wrote:
> Mike Gerdts wrote:
>>
>> On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk wrote:
>>
>>>
>>> Is there a way to zero out unused blocks in a pool? I'm looking for ways to
>>> shrink the size of an opensolaris virtualbox VM and
>>> us
On Sat, 23 Jan 2010, Simon Breden wrote:
I just took a look at customer feedback on this drive here. 36% rate with one
star, which I would consider alarming. Take a look here, ordered from lowest rating to
highest rating. Note the recency of the comments and the descriptions:
http://www.neweg
Reading through your post brought back many memories of how I used to manage my
data.
I also found SuperDuper and Carbon Copy Cloner great for making a duplicate of
my Mac's boot drive, which also contained my data.
After juggling around with cloning boot/data drives and using non-redundant
Ti
Mike Gerdts wrote:
On Fri, Jan 22, 2010 at 1:00 PM, John Hoogerdijk wrote:
Is there a way to zero out unused blocks in a pool? I'm looking for ways to
shrink the size of an opensolaris virtualbox VM and
using the compact subcommand will remove zero'd sectors.
I've long suspected that
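The usual workaround is to zero the free space from inside the guest and then
compact the image from the host; a sketch, with the file and image names made
up (note that with ZFS compression enabled the zero writes may be stored as
holes and never reach the virtual disk, which defeats the point):
  # inside the guest: fill free space with zeros, then remove the file
  dd if=/dev/zero of=/var/tmp/zerofile bs=1048576
  rm /var/tmp/zerofile
  sync
  # on the host, after shutting the guest down:
  VBoxManage modifyhd opensolaris.vdi --compact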
Hi,
I found some time and was able to test again:
- verify with the unique uid of the device
- verify with autoreplace = off
Indeed, autoreplace was set to yes for the pools, so I disabled it.
VOL     PROPERTY     VALUE  SOURCE
nxvol2  autoreplace  off    default
Era
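For reference, the plain zpool equivalent of checking and toggling that
setting (pool name taken from the output above):
  zpool get autoreplace nxvol2
  zpool set autoreplace=off nxvol2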
I thought with NFS4 *on solaris* that clients would follow the zfs
filesystem hierarchy and mount sub-filesystems. That doesn't seem
to be happening and I can't find any documentation on it (either way).
Did I only dream up this feature or does it actually exist? I am
using s10_u8.
thanks
-fra
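For anyone wanting to reproduce the question, the setup looks roughly like
this (pool, dataset, and host names are made up); with mirror mounts working,
the client would be expected to see tank/home/alice under /home without an
explicit mount:
  # server
  zfs create -o mountpoint=/export/home tank/home
  zfs set sharenfs=on tank/home          # children inherit the share
  zfs create tank/home/alice
  # client
  mount -F nfs -o vers=4 server:/export/home /home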
On January 23, 2010 8:04:50 AM -0800 "R.G. Keen" wrote:
The answer I came to, perhaps through lack of information and experience,
is that there isn't a best 1.5tb drive. I decided that 1.5tb is too big,
and that it's better to use more and smaller devices so I could get to
raidz3.
Please expla
On Fri, Jan 22, 2010 at 2:42 PM, Cindy Swearingen wrote:
> Hi John,
>
> You might check with the virtualguru, Rudolf Kutina.
Unfortunately, Rudolf's last day at Sun was Jan 15th:
http://blogs.sun.com/VirtualGuru/entry/kiss_of_dead_my_last
You can still catch up with him at his new blog:
http://
On Jan 23, 2010, at 12:04, Simon Breden wrote:
And I think your 750GB choice should be a good one. I'm currently
using 750GB drives (WD7500AAKS) and they have worked flawlessly over
the last 2 years. But knowing that drives don't last forever, it's
time I looked for some new ones, assuming
In general I agree completely with what you are saying. Making reliable large
capacity drives does appear to have become very difficult for the drive
manufacturers, judging by the many sad comments from drive buyers listed on
popular, highly-trafficked sales outlets' websites, like newegg.
And
Fletcher Cocquyt stanford.edu> writes:
> I found this script for replicating zfs data:
>
http://www.infrageeks.com/groups/infrageeks/wiki/8fb35/zfs_autoreplicate_script.html
>
> - I am testing it out in the lab with b129.
> It errored out on the first run with some syntax error about the send co
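The core of what such scripts automate is just an incremental send/receive
pair; a minimal sketch with hypothetical dataset, snapshot, and host names:
  zfs snapshot tank/data@2010-01-23
  zfs send -i tank/data@2010-01-22 tank/data@2010-01-23 | \
      ssh backuphost /usr/sbin/zfs receive -F backup/data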
On Sat, 23 Jan 2010, R.G. Keen wrote:
The reasoning came after reading the case for triple-parity raid.
The curves show time to failure versus time to resilver a single
lost drive. Time to failure will remain constant-ish, while time to
resilver will increase as the number of bits inside a
I just took a look at customer feedback on this drive here. 36% rate with
one star, which I would consider alarming. Take a look here, ordered from
lowest rating to highest rating. Note the recency of the comments and the
descriptions:
http://www.newegg.com/Product/ProductReview.aspx?Item=22-14
Interesting question.
The answer I came to, perhaps through lack of information and experience, is
that there isn't a best 1.5tb drive. I decided that 1.5tb is too big, and that
it's better to use more and smaller devices so I could get to raidz3.
The reasoning came after reading the case for
Thanks a lot.
I'd looked at SO many different RAID boxes and never had a good feeling about
them from the point of data safety, that when I read the 'A Conversation with
Jeff Bonwick and Bill Moore – The future of file systems' article
(http://queue.acm.org/detail.cfm?id=1317400), I was convinc
The way I understand it, this requires bp rewrite, and it's being worked on.
On Sat, Jan 23, 2010 at 5:40 AM, Colin Raven wrote:
> Can anyone comment on the likelihood of zfs defrag becoming a reality some
> day? If so, any approximation as to when? I realize this isn't exactly a
> trivial endeavo
Can anyone comment on the likelihood of zfs defrag becoming a reality some
day? If so, any approximation as to when? I realize this isn't exactly a
trivial endeavor, but it sure would be nice to see.
I'd agree with export/import *IF* the drive should be good; however, I have a
drive that was pulled from the pool a long time ago (to flash the drive) -
the data on it is useless. Exporting/importing would cause either a) errors, or
b) a scrub to need to be run, which should overwrite it complet