On Sun, Mar 18, 2012 at 9:03 AM, Jim Klimov wrote:
> Hello, while browsing around today I stumbled across
> "Seagate Pipeline HD" HDDs lineup (i.e. ST2000VM002).
> Did any ZFS users have any experience with them?
> http://www.seagate.com/www/en-us/products/consumer_electronics/pipeline/
> http://w
On Thu, Oct 20, 2011 at 7:55 AM, Edward Ned Harvey
wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Jim Klimov
>>
>> new CKSUM errors
>> are being found. There are zero READ or WRITE error counts,
>> though.
>>
>> Should we be worrie
On Thu, Sep 15, 2011 at 5:13 AM, Carsten Aulbert
wrote:
> Has anyone any idea what's going on here?
Carsten,
It will be more visible at the VFS layer with fsstat. The following
one-liner will pull out all ZFS filesystems and pass the list as
arguments to fsstat so you can see activity broken dow
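A sketch of that kind of one-liner (the grep filter is an assumption, to skip
datasets whose mountpoint is "legacy" or "none"):

  fsstat $(zfs list -H -o mountpoint | grep '^/') 5

That prints VFS-level operation counts for each mounted ZFS filesystem every
5 seconds, which makes it easy to see which dataset is generating the activity.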
On Wed, Jul 13, 2011 at 6:32 AM, Orvar Korvar
wrote:
> If you go the LSI2008 route, avoid raid functionality as it messes up ZFS.
> Flash the BIOS to JBOD mode.
You don't even have to do that with the LSI SAS2 cards. They no
longer ship alternate IT-mode firmware for these like they did for the
On Tue, Jul 12, 2011 at 1:35 PM, Brandon High wrote:
> Most "enterprise" SSDs use something like 30% for spare area. So a
> drive with 128MiB (base 2) of flash will have 100MB (base 10) of
> available storage. A consumer level drive will have ~ 6% spare, or
> 128MiB of flash and 128MB of available
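Working through those numbers: 128 GiB of flash is 128 x 2^30 ≈ 137.4 x 10^9
bytes. A drive exposing 100 GB (100 x 10^9 bytes) therefore holds back about
(137.4 - 100) / 137.4 ≈ 27% of the flash as spare area, while one exposing
128 GB holds back only (137.4 - 128) / 137.4 ≈ 7%, which is where the roughly
30% vs. 6% figures come from.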
On Tue, Jul 12, 2011 at 1:06 AM, Brandon High wrote:
> On Mon, Jul 11, 2011 at 7:03 AM, Eric Sproul wrote:
>> Interesting-- what is the suspected impact of not having TRIM support?
>
> There shouldn't be much, since zfs isn't changing data in place. Any
> drive with r
On Sat, Jul 9, 2011 at 2:19 PM, Roy Sigurd Karlsbakk wrote:
> Most drives should work well for a pure SSD pool. I have a postgresql
> database on a linux box on a mirrored set of C300s. AFAIK ZFS doesn't yet
> support TRIM, so that can be an issue. Apart from that, it should work well.
Interest
On Wed, Jun 15, 2011 at 4:33 PM, Nomen Nescio wrote:
> Has there been any change to the server hardware with respect to number of
> drives since ZFS has come out? Many of the servers around still have an even
> number of drives (2, 4) etc. and it seems far from optimal from a ZFS
> standpoint.
Wi
On Tue, Jun 14, 2011 at 10:09 PM, Ding Honghui wrote:
> I expect to have 14*931/1024=12.7TB zpool space, but actually, it only has
> 12.6TB zpool space:
> # zpool list
> NAME        SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
> datapool   12.6T  9.96T  2.66T  78%  ONLINE  -
> #
>
> And I expect th
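For reference, the arithmetic in the question checks out: 14 x 931 GiB =
13,034 GiB, and 13,034 / 1,024 ≈ 12.73 TiB, so zpool list is reporting roughly
1% less than the simple sum of the per-disk capacities.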
On Sat, Jun 4, 2011 at 3:51 PM, Harry Putnam wrote:
> Apparently my OS is new enough (b 147 )... since the command
> is known. Very nice... but where is the documentation?
>
> `man zfs' has no hits on a grep for diff (except different..)
>
> Ahh never mind... I found:
> http://www.c0t0d0s0.org/
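For anyone else hunting for it, the basic usage is
'zfs diff older-snapshot [newer-snapshot | filesystem]'. A minimal example,
with made-up dataset and snapshot names:

  zfs snapshot tank/home@after
  zfs diff tank/home@before tank/home@after

The output marks each changed path with M (modified), R (renamed), + (added)
or - (removed).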
On Fri, Jun 3, 2011 at 11:22 AM, Paul Kraus wrote:
> So is there a way to read these real I/Ops numbers?
>
> iostat is reporting 600-800 I/Ops peak (1 second sample) for these
> 7200 RPM SATA drives. If the drives are doing aggregation, then how do we
> tell what is really going on?
I've always as
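For reference, the per-device numbers being discussed come from something like:

  iostat -xn 1

where r/s and w/s count the read and write commands the OS actually issued to
each device during the interval; any merging the drive does internally in its
own cache happens below that and isn't directly visible from the host side.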
On Wed, Jun 1, 2011 at 3:47 PM, Matt Harrison
wrote:
> Thanks Eric, however seeing as I can't have two pools named 'tank', I'll
> have to name the new one something else. I believe I will be able to rename
> it afterwards, but I just wanted to check first. I'd hate to have to spend
> hours changin
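Renaming a pool afterwards is just an export followed by an import under the
new name. With hypothetical pool names, once the old 'tank' is gone:

  zpool export newtank
  zpool import newtank tank

The data is untouched; only the pool's name changes.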
On Wed, Jun 1, 2011 at 2:54 PM, Matt Harrison
wrote:
> Hi list,
>
> I've got a pool that's got a single raidz1 vdev. I've just put some more disks in
> and I want to replace that raidz1 with a three-way mirror. I was thinking
> I'd just make a new pool and copy everything across, but then of course I'v
Hi,
One of my colleagues was confused by the output of 'zpool status' on a pool
where a hot spare is being resilvered in after a drive failure:
$ zpool status data
  pool: data
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function,
On 12/ 4/09 02:06 AM, Erik Trimble wrote:
> Hey folks.
>
> I've looked around quite a bit, and I can't find something like this:
>
> I have a bunch of older systems which use Ultra320 SCA hot-swap
> connectors for their internal drives. (e.g. v20z and similar)
>
> I'd love to be able to use mode
Erin wrote:
> The issue that we have is that the first two vdevs were almost full, so we
> will quickly be in the state where all writes will be on the 3rd vdev. It
> would
> also be useful to have better read performance, but I figured that solving the
> write performance optimization would also
Erin wrote:
> How do we spread the data that is stored on the first two raidz2 devices
> across all three so that when we continue to write data to the storage pool,
> we will get the added performance of writing to all three devices instead of
> just the empty new one?
All new writes will be spre
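Existing blocks stay where they were originally written, so one common way to
get old data restriped across all three vdevs is simply to rewrite it, e.g. by
sending it into a new dataset in the same pool and swapping names (dataset
names made up):

  zfs snapshot pool/data@move
  zfs send pool/data@move | zfs recv pool/data_new
  zfs rename pool/data pool/data_old
  zfs rename pool/data_new pool/data

The received copy is written against the current free-space layout, so it
spreads across all vdevs; the old dataset can be destroyed once everything
checks out.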
Matthias Appel wrote:
> I am using 2x Gbit Ethernet and 4 Gig of RAM,
> 4 Gig of RAM for the iRAM should be more than sufficient (0.5 times RAM and
> 10s worth of IO)
>
> I am aware that this RAM is non-ECC so I plan to mirror the ZIL device.
>
> Any considerations for this setup? Will it work a
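For what it's worth, attaching the log as a mirrored pair is a single command
(pool and device names hypothetical):

  zpool add tank log mirror c2t0d0 c2t1d0

With a mirrored log, losing one of the two RAM devices doesn't take the intent
log with it.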
olution-- the peripherals on those boards are a bit better
supported than the AMD stuff, but even the AMD boards work well.
Eric
--
Eric Sproul
Lead Site Reliability Engineer
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
http://omniti.com
Scott Meilicke wrote:
> So what happens during the txg commit?
>
> For example, if the ZIL is a separate device, SSD for this example, does it
> not work like:
>
> 1. A sync operation commits the data to the SSD
> 2. A txg commit happens, and the data from the SSD are written to the
> spinning
Adam Leventhal wrote:
> Hi James,
>
> After investigating this problem a bit I'd suggest avoiding deploying
> RAID-Z
> until this issue is resolved. I anticipate having it fixed in build 124.
Adam,
Is it known approximately when this bug was introduced? I have a system running
snv_111 with a lar
casper@sun.com wrote:
> Most of the "Intellispeed" drives are just 5400rpm; I suppose that this
> drive can deliver 150MB/s on sequential access.
I have the earlier generation of the 2TB WD RE4 drive in one of my systems.
With Bonwick's diskqual script I saw an average of 119 MB/s across 14 d
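Bonwick's diskqual script is essentially a timed, large-block sequential read
from each raw device; a rough stand-in for a single disk (device name
hypothetical, and read-only, so it's safe to run on a disk holding data) is:

  ptime dd if=/dev/rdsk/c3t0d0s0 of=/dev/null bs=1048576 count=1024

which reads 1 GiB and lets you divide by the elapsed time to get MB/s.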
the output. The first one creates the 'oradata' pool with two
mirrors of two drives each. Data will be dynamically balanced across both
mirrors, effectively the same as RAID1+0. The second one creates a simple
mirror of two disks (RAID1).
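For illustration, with made-up device names, the two commands being described
would have the shape:

  zpool create oradata mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
  zpool create oradata mirror c0t0d0 c0t1d0

The first gives two top-level mirror vdevs that ZFS stripes across (the
RAID1+0 layout); the second is a single two-way mirror (RAID1).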
Regards,
Eric
--
Eric Sproul
Lead Site Reliabili
the pools are
not boot pools. ZFS will automatically label the disks with EFI labels when you
give a whole disk (no 's#') as an argument.
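Concretely, with a hypothetical device name, the two cases look like:

  zpool create datapool c1t2d0      # whole disk: ZFS writes an EFI label itself
  zpool create datapool c1t2d0s0    # a slice: ZFS uses the existing label as-is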
Hope this helps,
Eric
--
Eric Sproul
Lead Site Reliability Engineer
OmniTI Computer Consulting, Inc.
Web Applications & Internet Architectures
ht
istrators
> are largely interested in system administration issues.
+1 to those items. I'd also like to hear about how people are maintaining
offsite DR copies of critical data with ZFS. Just send/recv, or something a
little more "live"?
Eric
--
Eric Sproul
Lead Site Rel