On Feb 6, 2011, at 6:45 PM, Matthew Angelo wrote:
> I require a new high capacity 8 disk zpool. The disks I will be
> purchasing (Samsung or Hitachi) have an Error Rate (non-recoverable,
> bits read) of 1 in 10^14 and will be 2TB. I'm staying clear of WD
because they have the new 4096-byte (Advanced Format) sectors
Yes, I did mean 6+2. Thank you for fixing the typo.
I'm actually leaning more towards running a simple 7+1 RAIDZ1.
Running this with 1TB disks is not a problem, but I just wanted to
investigate at what TB size the "scales would tip". I understand
RAIDZ2 protects against failures during the rebuild process.
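A back-of-the-envelope way to see where the scales tip, assuming the quoted 1-in-10^14 rate and independent bit errors (a rough model, not a guarantee):

```shell
# Rough probability of hitting at least one unrecoverable read error
# (URE) during a raidz1 rebuild, assuming independent bit errors.
# A 7+1 raidz1 of 2 TB disks must read all 7 surviving disks in full.
awk 'BEGIN {
  rate = 1e-14              # quoted non-recoverable bit error rate
  bits = 7 * 2e12 * 8       # bits read during rebuild: 7 disks x 2 TB
  p    = 1 - exp(-bits * rate)   # (1-rate)^bits ~ exp(-bits*rate)
  printf "P(>=1 URE during rebuild) = %.2f\n", p
}'
```

The same formula with 1 TB disks gives about 0.43, so by this naive model the odds were never great; real drives usually do much better than the spec-sheet rate, so treat these numbers as a pessimistic bound rather than a prediction.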
On Feb 5, 2011, at 2:44 PM, Roy Sigurd Karlsbakk wrote:
> Hi
>
> I keep getting these messages on this one box. There are issues with at least
> one of the drives in it, but since there are some 80 drives in it, that's not
> really an issue. I just want to know, if anyone knows, what this kerne
On Feb 5, 2011, at 8:10 AM, Yi Zhang wrote:
> Hi all,
>
> I'm trying to achieve the same effect as UFS directio on ZFS and here
> is what I did:
Solaris UFS directio has three functions:
1. improved async code path
2. multiple concurrent writers
3. no buffering
Of the th
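For comparison, ZFS has no directio option; the nearest per-dataset approximations (partial at best, and `tank/db` is a hypothetical dataset name) are:

```shell
# Approximating UFS directio behaviour on ZFS (there is no exact
# equivalent). Dataset name is hypothetical.
zfs set primarycache=metadata tank/db   # don't cache file data in the ARC
zfs set logbias=throughput tank/db      # bias sync writes for throughput
zfs get primarycache,logbias tank/db    # verify the settings
```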
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Matthew Angelo
>
> My question is, how do I determine which of the following zpool and
> vdev configurations I should run to maximize space whilst mitigating
> rebuild failure risk?
>
> 1. 2x R
On 02/ 7/11 03:45 PM, Matthew Angelo wrote:
I require a new high capacity 8 disk zpool. The disks I will be
purchasing (Samsung or Hitachi) have an Error Rate (non-recoverable,
bits read) of 1 in 10^14 and will be 2TB. I'm staying clear of WD
because they have the new 4096-byte (Advanced Format) sectors which don't
Chris,
I might be able to help you recover the pool but will need access to your
system. If you think this is possible just ping me off list and let me know.
Thanks,
George
On Sun, Feb 6, 2011 at 4:56 PM, Chris Forgeron wrote:
> Hello all,
>
> Long time reader, first time poster.
>
>
>
> I’m
I require a new high capacity 8 disk zpool. The disks I will be
purchasing (Samsung or Hitachi) have an Error Rate (non-recoverable,
bits read) of 1 in 10^14 and will be 2TB. I'm staying clear of WD
because they have the new 4096-byte (Advanced Format) sectors which don't play nice with ZFS
at the moment.
My question
On Sat, Feb 5, 2011 at 5:44 PM, Roy Sigurd Karlsbakk wrote:
> Hi
>
> I keep getting these messages on this one box. There are issues with at least
> one of the drives in it, but since there are some 80 drives in it, that's not
> really an issue. I just want to know, if anyone knows, what this ke
Roy, I read your question on the OpenIndiana mailing lists: how can you rebalance your
huge raid without implementing block pointer rewrite? You have an old vdev
full of data, and now you have added a new vdev, and you want the data to be
spread evenly across all vdevs.
I answer here because it is
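For the archives, the commonly suggested workaround (since block pointer rewrite was never implemented) is to rewrite the data yourself; pool and dataset names here are placeholders:

```shell
# Rebalancing by rewriting: send the dataset to a new name, then swap.
# ZFS allocates new writes across all vdevs, so the copy lands balanced.
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data.new
# After verifying the copy:
#   zfs destroy -r tank/data && zfs rename tank/data.new tank/data
```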
Heh. My bad. Didn't read the command. Yes, that should be safe.
--
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Additionally, the way I do it is to draw a diagram of the drives in the system,
labelled with the drive serial numbers. Then when a drive fails, I can find out
from smartctl which drive it is and remove/replace without trial and error.
On 5 Feb 2011, at 21:54, zfs-discuss-requ...@opensolaris.org
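A minimal sketch of the serial-number lookup, assuming Solaris-style device paths (adjust the glob for your controller layout):

```shell
# Map device paths to drive serial numbers so a failed drive can be
# matched against the diagram. The device glob is illustrative.
for dev in /dev/rdsk/c*t*d0s0; do
  serial=$(smartctl -i "$dev" | awk -F': *' '/Serial Number/ {print $2}')
  printf '%s\t%s\n' "$dev" "$serial"
done
```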
On 2/6/2011 3:51 AM, Orvar Korvar wrote:
Ok, so can we say that the conclusion for a home user is:
1) Using an SSD without TRIM is acceptable. The only drawback is that without
TRIM, the SSD will write much more, which affects its lifetime: once the
SSD has written enough, it will fail.
I
Following up to myself, I think I've got things sorted, mostly.
1. The thing I was most sure of, I was wrong about. Some years back, I
must have split the mirrors so that they used different brand disks. I
probably did this, maybe even accidentally, when I had to restore from
backups at one
> 2) And later, when Solaris gets TRIM support, should I reformat or is
> there no need to reformat? I mean, maybe I must format and reinstall
> to get TRIM all over the disk. Or will TRIM immediately start to do
> its magic?
TRIM works at the device level, so a reformat won't be necessary.
Best regards,
If autoexpand = on, then yes.
zpool get autoexpand <pool>
zpool set autoexpand=on <pool>
The expansion is vdev specific, so if you replaced the mirror first, you'd
get that much (the extra 2TB) without touching the raidz.
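A sketch of the whole replace-and-grow cycle, with hypothetical pool and device names:

```shell
# Grow a raidz vdev by replacing its members one at a time.
zpool set autoexpand=on tank
zpool replace tank c0t1d0 c0t5d0   # old 1 TB disk, new 2 TB disk
zpool status tank                  # wait for the resilver to complete
# ...repeat for each remaining 1 TB member disk...
# If autoexpand was off during the swaps, expand manually afterwards:
#   zpool online -e tank c0t5d0
```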
Cheers,
On 7 February 2011 01:41, Achim Wolpers wrote:
> Hi!
>
> I have a zpool biul
Hi!
I have a zpool built from two vdevs (one mirror and one raidz). The
raidz is built from 4x1TB HDs. When I successively replace each 1TB
drive with a 2TB drive, will the capacity of the raidz double after the
last block device is replaced?
Achim
On 2011-02-06 05:58, Orvar Korvar wrote:
Will this not ruin the zpool? If you overwrite one of the discs in the zpool,
won't the zpool break, so you need to repair it?
Without quoting I can't tell what you think you're responding to, but
from my memory of this thread, I THINK you're forgetting
> Will this not ruin the zpool? If you overwrite one of the discs in the
> zpool, won't the zpool break, so you need to repair it?
As suggested, dd if=/dev/rdsk/c8t3d0s0 of=/dev/null bs=4k count=10 only reads
from the disk; it will do its best to overwrite /dev/null, which the system is likely to allow :P
Best regards,
Will this not ruin the zpool? If you overwrite one of the discs in the zpool,
won't the zpool break, so you need to repair it?
Ok, so can we say that the conclusion for a home user is:
1) Using an SSD without TRIM is acceptable. The only drawback is that without
TRIM, the SSD will write much more, which affects its lifetime: once the
SSD has written enough, it will fail.
I don't have high demands for my OS disk, so b
Yes, you create three groups as you described and insert them into your zpool
(the ZFS raid). So you have only one ZFS raid, consisting of three groups. You
don't have three different ZFS raids (unless you configure that).
You can also later swap one disk for a larger one and resilver the group. Then y
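As a sketch, creating one pool from three raidz1 groups in a single command (pool and device names are hypothetical):

```shell
# One zpool striped across three raidz1 vdevs.
zpool create tank \
  raidz1 c0t0d0 c0t1d0 c0t2d0 \
  raidz1 c0t3d0 c0t4d0 c0t5d0 \
  raidz1 c1t0d0 c1t1d0 c1t2d0
zpool status tank   # shows a single pool containing three raidz1 groups
```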
On 6 Feb 2011, at 03:14, David Dyer-Bennet wrote:
> I'm thinking either Solaris' appalling mess of device files is somehow scrod,
> or else ZFS is confused in its reporting (perhaps because of cache file
> contents?). Is there anything I can do about either of these? Does devfsadm
> really c
On Sat, Feb 5, 2011 at 3:34 PM, Roy Sigurd Karlsbakk wrote:
>> so as not to exceed the channel bandwidth. When they need to get higher disk
>> capacity, they add more platters.
>
> Might this mean those drives are more robust in terms of reliability, since
> leakage between sectors is less likely