Sorry everyone, this one was indeed a case of root stupidity. I had
forgotten to upgrade to OI 148, which apparently fixed the write balancer.
Duh. (I couldn't find the full changelog on Google, though.)
On Jun 30, 2011 3:12 PM, "Tuomas Leikola" wrote:
> Thanks for the input. This
Rsync with some ignore-errors option, maybe? In any case you've lost some
data, so make sure to keep a record of the output of zpool status -v.
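Something along these lines, maybe (untested sketch; pool and path names are
made up):

  # record which files ZFS already knows are damaged
  zpool status -v brokenpool > /root/damaged-files.txt

  # rsync keeps going past files it cannot read and reports them at the end
  # (exit code 23 means a partial transfer)
  rsync -aH /brokenpool/data/ /newpool/data/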
On Jul 1, 2011 12:26 AM, "Tom Demo" wrote:
> Hi there.
>
> I am trying to get my filesystems off a pool that suffered irreparable
damage due to 2 disks partially failing in
Thanks for the input. This was not a case of a degraded vdev, but only a
missing log device (which I cannot get rid of...). I'll try offlining some
vdevs and see what happens - although this should be automatic at all times,
IMO.
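For reference, if the pool is at a version that supports log device removal
(v19 or later), dropping the dead log device should be something like this;
the pool name and GUID below are made up, and the GUID would come from zpool
status:

  zpool status tank        # note the guid shown for the missing log vdev
  zpool remove tank 1234567890123456
  zpool status tank        # the log device should now be gone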
On Jun 30, 2011 1:25 PM, "Markus Kovero" wrote:
>
>
>> To me it seems th
Hi!
I've been monitoring my arrays lately, and to me it seems like the ZFS
allocator might be misfiring a bit. This is all on OI 147, and if there
is a problem and a fix, I'd like to see it in the next image-update =D
Here's some 60s iostat cleaned up a bit:
tank        3.76T    742G
> I have a resilver running and am
> seeing about 700-800 writes/sec. on the hot spare as it resilvers.
IIRC resilver works in block birth order (write order), which is
commonly more-or-less sequential unless the fs is fragmented. So it
might or might not be sequential. I think you cannot get that kind of per
It's been quiet, seems.
On Fri, Apr 15, 2011 at 5:09 PM, Jerry Kemp wrote:
> I have not seen any email from this list in a couple of days.
Hi,
I'm crossposting this to zfs as I'm not sure which bit is to blame here.
I've been having this issue that I cannot really fix myself:
I have an OI 148 server, which hosts a lot of disks on SATA
controllers. Now it's full and needs some data moving work to be done,
so I've acquired another bo
I'd pick Samsung and use the savings for additional redundancy. YMMV.
On Feb 25, 2011 8:46 AM, "Markus Kovero" wrote:
>> So, does anyone know which drives to choose for the next setup? Hitachis
>> look good so far, perhaps also Seagates, but right now, I'm dubious about
>> the Blacks.
>
> Hi! I'd go f
On Wed, Dec 15, 2010 at 3:29 PM, Gareth de Vaux wrote:
> On Mon 2010-12-13 (16:41), Marion Hakanson wrote:
>> After you "clear" the errors, do another "scrub" before trying anything
>> else. Once you get a complete scrub with no new errors (and no checksum
>> errors), you should be confident that
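Concretely, that sequence would be something like (pool name assumed):

  zpool clear tank
  zpool scrub tank
  zpool status -v tank     # repeat the scrub until it completes with no new errors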
On Tue, Oct 26, 2010 at 5:21 PM, Matthieu Fecteau
wrote:
> My question: in the event that there's no more common snapshot between Site
> A and Site B, how can we replicate again? (Example: Site B has a power
> failure and then Site A cleans up its snapshots before Site B is brought back,
> so
On Mon, Oct 25, 2010 at 4:57 PM, Cuyler Dingwell wrote:
> It's not just this directory in the example - it's any directory or file.
> The system was running fine up until it hit 96%. Also, a full scrub of the
> file system was done (took nearly two days).
> --
I'm just stabbing in the dark here
On Thu, Oct 21, 2010 at 12:06 AM, Peter Jeremy
wrote:
> On 2010-Oct-21 01:28:46 +0800, David Dyer-Bennet wrote:
>>On Wed, October 20, 2010 04:24, Tuomas Leikola wrote:
>>
>>> I wished for a more aggressive write balancer but that may be too much
>>> to ask for.
On Wed, Oct 20, 2010 at 5:00 PM, Richard Elling
wrote:
>>> Now, is there a way, manually or automatically, to somehow balance the data
>>> across these LVOLs? My first guess is that doing this _automatically_ will
>>> require block pointer rewrite, but then, is there way to hack this thing by
>
On Wed, Oct 20, 2010 at 4:05 PM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> 4. Guess what happens if you have 2 or 3 failed disks in your raidz3,
>> and
>> they're trying to resilver at
On Wed, Oct 20, 2010 at 3:50 AM, Bob Friesenhahn
wrote:
> On Tue, 19 Oct 2010, Cindy Swearingen wrote:
>>>
>>> unless you use copies=2 or 3, in which case your data is still safe
>>> for those datasets that have this option set.
>>
>> This advice is a little too optimistic. Increasing the copies p
On Tue, Oct 19, 2010 at 7:13 PM, Roy Sigurd Karlsbakk
wrote:
> I have this server with some 50TB disk space. It originally had 30TB on WD
> Greens, was filled quite full, and another storage chassis was added. Now,
> space problem gone, fine, but what about speed? Three of the VDEVs are quite
On Mon, Oct 18, 2010 at 4:55 PM, Edward Ned Harvey wrote:
> Thank you, but, the original question was whether a scrub would identify
> just corrupt blocks, or if it would be able to map corrupt blocks to a list
> of corrupt files.
>
Just in case this wasn't already clear.
After scrub sees read o
On Mon, Oct 18, 2010 at 8:18 PM, Simon Breden wrote:
> So are we all agreed then, that a vdev failure will cause pool loss ?
> --
unless you use copies=2 or 3, in which case your data is still safe
for those datasets that have this option set.
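For example (dataset name made up); note that copies only affects data
written after the property is set:

  zfs set copies=2 tank/important
  zfs get copies tank/important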
--
- Tuomas
On Tue, Oct 12, 2010 at 9:39 AM, Stephan Budach wrote:
> You are implying that the issues resulted from the H/W raid(s) and I don't
> think that this is appropriate.
>
Not exactly. The fact that the RAID is managed in hardware, and not by ZFS,
is the reason why ZFS cannot fix these errors when it enco
Hello everybody.
I am experiencing terribly slow writes on my home server. This is from
zpool iostat:
               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Devices seem to be still dropping
from the SATA bus randomly. Maybe I'll cobble together a report and post it to
storage-discuss.
On Wed, Sep 29, 2010 at 8:13 PM, Tuomas Leikola wrote:
> The endless resilver problem still persists on OI b147. Restarts when it
> should complete.
>
> I see
On Thu, Sep 30, 2010 at 1:16 AM, Scott Meilicke <
scott.meili...@craneaerospace.com> wrote:
> Resilver speed has been beaten to death I know, but is there a way to avoid
> this? For example, is more enterprisey hardware less susceptible to
> resilvers? This box is used for development VMs, but ther
On Thu, Sep 30, 2010 at 9:08 AM, Jason J. W. Williams <
jasonjwwilli...@gmail.com> wrote:
>
> Should I be worried about these checksum errors?
>
>
Maybe. Your disks, cabling, or disk controller is probably having some issue
which caused them. Or maybe sunspots are to blame.
Run a scrub often and m
Thanks for taking an interest. Answers below.
On Wed, Sep 29, 2010 at 9:01 PM, George Wilson
wrote:
> On Mon, Sep 27, 2010 at 1:13 PM, Tuomas Leikola
> <tuomas.leik...@gmail.com> wrote:
>>
>
>> (continuous resilver loop) has been going on for a week now
, Tuomas Leikola wrote:
> Hi!
>
> My home server had some disk outages due to flaky cabling and whatnot, and
> started resilvering to a spare disk. During this another disk or two
> dropped, and were reinserted into the array. So no devices were actually
> lost, they just were i
Hi!
My home server had some disk outages due to flaky cabling and whatnot, and
started resilvering to a spare disk. During this another disk or two
dropped, and were reinserted into the array. So no devices were actually
lost, they just were intermittently away for a while each.
The situation is
Hi.
I have a simple question. Is it safe to place a log device on another ZFS
disk?
I'm planning on placing the log on my mirrored root partition. Using the
latest OpenSolaris.
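In other words, something like the following, assuming a spare slice on each
of the two root disks (device names made up):

  zpool add tank log mirror c0t0d0s3 c0t1d0s3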
On Mon, Sep 8, 2008 at 8:35 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
>ps> iSCSI with respect to write barriers?
>
> +1.
>
> Does anyone even know of a good way to actually test it? So far it
> seems the only way to know if your OS is breaking write barriers is to
> trade gossip and guess.
>
Let A,B,C,D be the 250GB disks and X,Y the 500GB ones.
My choice here would be raidz over (A+B),(C+D),X,Y
meaning something like
zpool create tank raidz (stripe A B) (stripe C D) X Y
(how do you actually write that up as zpool commands?)
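One possibility, since zpool can't nest vdevs: build the A+B and C+D
concatenations outside ZFS, for example with SVM, and hand those to zpool.
Rough sketch only - device names are made up and SVM state databases are
assumed to already exist:

  # concatenate the 250GB disks pairwise
  metainit d10 2 1 c1t0d0s0 1 c1t1d0s0
  metainit d11 2 1 c1t2d0s0 1 c1t3d0s0

  # raidz over the two ~500GB concats plus the real 500GB disks
  zpool create tank raidz /dev/md/dsk/d10 /dev/md/dsk/d11 c2t0d0 c2t1d0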
On 9/20/07, Roch - PAE <[EMAIL PROTECTED]> wrote:
>
> Next application modifies D0 -> D0' and also writes other
> data D3, D4. Now you have
>
> Disk0 Disk1 Disk2 Disk3
>
> D0 D1 D2 P0,1,2
> D0' D3 D4 P0',3,4
>
> But if D1 and D2 stays immut
On 9/10/07, Pawel Jakub Dawidek <[EMAIL PROTECTED]> wrote:
> The problem with RAID5 is that different blocks share the same parity,
> which is not the case for RAIDZ. When you write a block in RAIDZ, you
> write the data and the parity, and then you switch the pointer in
> uberblock. For RAID5, you
On 8/19/07, James <[EMAIL PROTECTED]> wrote:
> Raidz can only be as big as your smallest disk. For example if I had a 320gig
> with a 250gig and 200gig I could only have 400gig of storage.
>
Correct - variable-sized disks are not (yet?) supported.
However, you can circumvent this by slicing up t
On 8/11/07, Russ Petruzzelli <[EMAIL PROTECTED]> wrote:
>
> Is it possible/recommended to create a zpool and zfs setup such that the OS
> itself (in root /) is in its own zpool?
Yes. You're looking for "zfs root", and it's easiest if your installer
does that for you. At least the latest Nexenta unsta
On 8/10/07, Darren Dunham <[EMAIL PROTECTED]> wrote:
> For instance, it might be nice to create a "mirror" with a 100G disk and
> two 50G disks. Right now someone has to create slices on the big disk
> manually and feed them to zpool. Letting ZFS handle everything itself
> might be a win for some
On 8/10/07, Moore, Joe <[EMAIL PROTECTED]> wrote:
> Wishlist: It would be nice to put the whole redundancy definitions into
> the zfs filesystem layer (rather than the pool layer): Imagine being
> able to "set copies=5+2" for a filesystem... (requires a 7-VDEV pool,
> and stripes via RAIDz2, othe
On 8/10/07, Darren J Moffat <[EMAIL PROTECTED]> wrote:
> Tuomas Leikola wrote:
> >> We call that a "mirror" :-)
> >>
> >
> > Mirror and raidz suffer from the classic blockdevice abstraction
> > problem in that they need disks of equal size.
>
> Not that I'm aware of. Mirror and raid-z will simply use the smallest
> size of your available disks.
>
Exactly. The rest is not used.
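For example (device names made up), mirroring a 100GB disk with a 50GB disk
simply gives a 50GB mirror vdev; the extra space on the bigger disk sits idle:

  zpool create tank mirror c0t0d0 c0t1d0   # the smaller disk sets the vdev size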
On 8/9/07, Richard Elling <[EMAIL PROTECTED]> wrote:
> > What I'm looking for is a disk full error if ditto cannot be written
> > to different disks. This would guarantee that a mirror is written on a
> > separate disk - and the entire filesystem can be salvaged from a full
> > disk failure.
>
> We
On 8/9/07, Mario Goebbels <[EMAIL PROTECTED]> wrote:
> If you're that bent on having maximum redundancy, I think you should
> consider implementing real redundancy. I'm also biting the bullet and
> going mirrors (cheaper than RAID-Z for home, less disks needed to start
> with).
Currently I am, and
>
> Actually, ZFS is already supposed to try to write the ditto copies of a
> block on different vdevs if multiple are available.
>
*TRY* being the keyword here.
What I'm looking for is a disk full error if ditto cannot be written
to different disks. This would guarantee that a mirror is written
Hi!
I'm having a hard time finding out whether it's possible to force ditto
blocks onto different devices.
This mode has many benefits, not the least being that it practically
creates a fully dynamic mode of mirroring (replacing raid1 and raid10
variants), especially when combined with the upcoming vdev r