You can see the original ARC case here:
http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt
On 8 Dec 2011, at 16:41, Ian Collins wrote:
> On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
>> On 12/07/11 20:48, Mertol Ozyoney wrote:
>>> Unfortunately the answer is no. Neither L1 nor L2
On 27 Sep 2011, at 18:29, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Tony MacDoodle
>>
>>
>> Now:
>> mirror-0 ONLINE 0 0 0
>> c1t2d0 ONLINE 0 0 0
>> c1t
Minor quibble: compressratio uses a lowercase "x" in its description text,
whereas the new prop uses an uppercase "X".
On 6 Jun 2011, at 21:10, Eric Schrock wrote:
> Webrev has been updated:
>
> http://dev1.illumos.org/~eschrock/cr/zfs-refratio/
>
> - Eric
>
> --
> Eric Schrock
> Delphix
>
> 2
Yeah, this is a known problem. The DTL on the toplevel shows an outage, and is
preventing the removal of the spare even though removing the spare won't make
the outage worse.
Unfortunately, for OpenSolaris anyway, there is no workaround.
You could try doing a full scrub, replacing any disks th
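If you do go down that road, the sequence would look roughly like this (pool and
device names below are just placeholders for your own):

# zpool scrub tank
# zpool status tank      # wait for the scrub to finish with no errors
# zpool detach tank c2t0d0   # then try dropping the spare again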
On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk wrote:
>> Hot spares are dedicated spares in the ZFS world. Until you replace
>> the actual bad drives, you will be running in a degraded state. The
>> idea is that spares are only used in an emergency. You are degraded
>> until your spares are
You should only see a "HOLE" in your config if you removed a slog after having
added more stripes. Nothing to do with bad sectors.
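As a rough sketch of how that happens (made-up pool and device names):

# zpool create tank mirror c1t0d0 c1t1d0
# zpool add tank log c2t0d0
# zpool add tank mirror c3t0d0 c3t1d0
# zpool remove tank c2t0d0

After the remove, the slog's slot in the vdev array is kept as a placeholder
rather than renumbering the later top-level vdevs, and that placeholder is what
shows up as the "hole" entry in the config (zdb should show it).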
On 14 Oct 2010, at 06:27, Matt Keenan wrote:
> Hi,
>
> Can someone shed some light on what this ZPOOL_CONFIG is exactly.
> At a guess is it a bad sector of the dis
You need to let the resilver complete before you can detach the spare. This is
a known problem, CR 6909724.
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
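In other words, something like this (pool and device names are just examples):

# zpool status tank      # wait until the resilver has completed
# zpool detach tank c1t5d0   # detaching the spare should now succeed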
On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:
> Hi!
>
> I had trouble with my raidz in the way, that some o
On 16 Aug 2010, at 22:30, Robert Hartzell wrote:
>
> cd /mnt ; ls
> bertha export var
> ls bertha
> boot etc
>
> where is the rest of the file systems and data?
By default, root filesystems are not mounted. Try doing a "zfs mount
bertha/ROOT/snv_134".
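For example (assuming the pool is imported with an altroot of /mnt, which is
what it looks like from your listing):

# zfs list -r -o name,mountpoint,mounted bertha/ROOT
# zfs mount bertha/ROOT/snv_134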
You can also use the "zpool split" command and save yourself having to do the
zfs send|zfs recv step - all the data will be preserved.
"zpool split rpool preserve" does essentially everything up to and including
the "zpool export preserve" commands you listed in your original email. Just
don'
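Roughly, as an untested sketch using the pool names from your mail:

# zpool split rpool preserve
# zpool import -R /a preserve

The split leaves the new pool exported; importing it later with -R gives it an
alternate root so its mountpoints don't collide with rpool's.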
I'm guessing that the virtualbox VM is ignoring write cache flushes. See this
for more info:
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661
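If I remember right, VirtualBox has an extradata knob to stop it ignoring
flushes; something along these lines (the VM name and the controller, piix3ide
here, depend on how the virtual disk is attached, so double-check against the
manual or that thread):

$ VBoxManage setextradata "OpenSolarisVM" "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0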
On 12 Jun, 2010, at 5.30, zfsnoob4 wrote:
> Thanks, that works. But it only works when I do a proper export first.
>
> If I export the pool then I can
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at
least, but the zpool import command only looks in /dev. One thing you can try
is doing this:
# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a
And then see if 'zpool import -d /tmpdev' finds the pool.
On 2
On 28 May, 2010, at 17.21, Vadim Comanescu wrote:
> In a stripe zpool configuration (no redundancy) is a certain disk regarded as
> an individual vdev or do all the disks in the stripe represent a single vdev
> ? In a raidz configuration I'm aware that every single group of raidz disks is
> reg
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote:
> The instances are "ephemeral"; once terminated they cease to exist, as do all
> their settings. Rebooting an image keeps any EBS volumes attached, but this
> isn't the case I'm dealing with - it's when the instance terminates
> unexpectedly. For
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote:
> I'm not actually issuing any when starting up the new instance. None are
> needed; the instance is booted from an image which has the zpool
> configuration stored within, so simply starts and sees that the devices
> aren't available, which beco
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote:
>
> I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
> already defined. Starting an instance from this image, without attaching the
> EBS volume, shows the pool structure exists and that the pool state is
> "UNAVAIL" (as