Hi guys, after reading the mailing list yesterday I noticed someone was after
upgrading to ZFS v21 (deduplication). I'm after the same: I installed
osol-dev-127 earlier, which comes with v19, and then followed the instructions on
http://pkg.opensolaris.org/dev/en/index.shtml to bring my system up to date
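For anyone following along, a minimal sketch of the version checks and the
upgrade itself, assuming a pool named tank (substitute your own pool name):

    # show the pool and filesystem versions in use vs. what the software supports
    zpool upgrade
    zfs upgrade

    # once the updated packages are installed, move a single pool to v21...
    zpool upgrade -V 21 tank

    # ...or simply take every pool to the newest version the software supports
    zpool upgrade -a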
Hi, I'm using ZFS version 6 on Mac OS X 10.5 using the old MacOSForge
pkg. When I'm writing files to the fs they appear as 1KB files,
and if I do zpool status or scrub or anything the command just
hangs. However, I can still read the zpool OK; it's just writes that are having
problems and any
Hi, I currently have 4x 1TB drives in a raidz configuration. I want to
add another 2x 1TB drives; however, if I simply zpool add, I will only
gain an extra 1TB of space, as it will create a second raidz set inside
the existing tank/pool. Is there a way to add my new drives into the
existing raidz?
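For reference, a sketch of the zpool add forms being discussed, with
hypothetical device names c1t4d0 and c1t5d0 standing in for the two new drives:

    # adding the new drives as a second raidz1 vdev: a 2-disk raidz1 yields
    # roughly one drive's worth of usable space, hence the 1TB gain mentioned above
    zpool add tank raidz1 c1t4d0 c1t5d0

    # adding them as a mirror instead gives the same usable space but behaves
    # like a proper mirror for the new pair
    zpool add tank mirror c1t4d0 c1t5d0

Either way the new drives become a separate top-level vdev; the existing
4-disk raidz itself is not widened.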
Hi Guys, I currently have an 18-drive system built from 13x 2.0TB Samsungs and
5x WD 1TB's... I'm about to swap out all of my 1TB drives with 2TB ones to grow
the pool a bit. My question is: the replacement 2TB drives are from various
manufacturers (Seagate/Hitachi/Samsung) and I know from previous
at 9:35 AM, Edward Ned Harvey wrote:
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> > boun...@opensolaris.org] On Behalf Of Michael Armstrong
> >
> > Is there a way to quickly ascertain if my seagate/hitachi drives are as
> > large as
> >
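One way to check, sketched with hypothetical device names (and assuming the
pool is called tank and is recent enough to have the autoexpand property):

    # iostat -En reports the exact capacity of every attached drive, so the
    # old and new disks can be compared before any replacement starts
    iostat -En

    # with autoexpand on, the pool grows on its own once every drive in the
    # affected vdev has been swapped for a larger one
    zpool set autoexpand=on tank
    zpool replace tank c1t5d0 c2t0d0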
Hi Guys,
I have a "portable pool" i.e. one that I carry around in an enclosure. However,
any SSD I add for L2ARC will not be carried around... meaning the cache drive
will become unavailable from time to time.
My question is: will random removal of the cache drive put the pool into a
"degraded" state?
whether or not it
risked integrity.
Sent from my iPhone
On 13 Oct 2012, at 23:02, Ian Collins wrote:
> On 10/14/12 10:02, Michael Armstrong wrote:
>> Hi Guys,
>>
>> I have a "portable pool" i.e. one that I carry around in an enclosure.
>> However, any SSD
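A short sketch of what the cache-device lifecycle looks like, assuming a pool
named tank and a hypothetical SSD at c3t0d0:

    # an L2ARC device only ever holds clean copies of data already on the pool,
    # so it can be attached and detached without affecting pool integrity
    zpool add tank cache c3t0d0

    # the cache device appears in its own "cache" section of the status output
    zpool status tank

    # removing it (or carrying the enclosure away without it) just shrinks the
    # read cache back to RAM-only ARC
    zpool remove tank c3t0d0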
Hi guys, sorry in advance if this is somewhat a lowly question. I've recently
built a ZFS test box based on NexentaStor with 4x Samsung 2TB drives connected
via SATA-II in a raidz1 configuration, with dedup enabled, compression off and
pool version 23. From running bonnie++ I get the following results
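For context, a hedged sketch of how such a bonnie++ run might be invoked; the
mount point and file size are placeholders (the size should be at least twice
the machine's RAM so the ARC can't cache the whole working set):

    # run as root against a directory on the pool under test
    bonnie++ -d /tank/bench -s 16g -u root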
I've since turned off dedup, added another 3 drives, and results have improved
to around 148388K/sec on average. Would turning on compression make things more
CPU-bound and improve performance further?
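For reference, the properties in question are set per dataset and only affect
newly written data; a minimal sketch, assuming the top-level dataset is tank:

    # dedup off, compression on (the default algorithm, lzjb on this vintage,
    # is cheap on CPU; heavier gzip levels can be requested explicitly)
    zfs set dedup=off tank
    zfs set compression=on tank
    zfs get dedup,compression tank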
On 18 Jan 2011, at 15:07, Richard Elling wrote:
> On Jan 15, 2011, at 4:21 PM,
Thanks everyone, I think over time I'm gonna update the system to include an SSD
for sure. Memory may come later though. Thanks for everyone's responses.
Erik Trimble wrote:
>On Tue, 2011-01-18 at 15:11 +0000, Michael Armstrong wrote:
>> I've since turned off dedup, ad
che table of what's in the L2ARC. Using 2GB of RAM
>with an SSD-based L2ARC (even without Dedup) likely won't help you too
>much vs not having the SSD.
>
>If you're going to turn on Dedup, you need at least 8GB of RAM to go
>with the SSD.
>
>-Erik
>
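A rough way to size that RAM requirement before enabling dedup, assuming a pool
named tank: zdb can simulate the dedup table for data already on the pool.

    # prints a simulated DDT histogram and the expected dedup ratio; the total
    # number of unique blocks times a few hundred bytes per table entry gives a
    # ballpark for the RAM (or L2ARC) the dedup table would consume
    zdb -S tank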
Additionally, the way I do it is to draw a diagram of the drives in the system,
labelled with the drive serial numbers. Then when a drive fails, I can find out
from smartctl which drive it is and remove/replace without trial and error.
On 5 Feb 2011, at 21:54, zfs-discuss-requ...@opensolaris.org wrote:
I obtained smartmontools (which includes smartctl) from the standard apt
repository (I'm using Nexenta, however). In addition, it's necessary to use the
device type of sat,12 with smartctl to get it to read attributes correctly in
OS, AFAIK. Also, regarding dev IDs on the system, from what I've seen
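A minimal example of that, with a hypothetical Solaris-style device path:

    # -d sat,12 selects the SAT pass-through with 12-byte commands; -i prints
    # the drive identity (model and serial number) and -a the full attribute set
    smartctl -d sat,12 -i /dev/rdsk/c1t0d0
    smartctl -d sat,12 -a /dev/rdsk/c1t0d0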