This problem is known and fixed in later builds:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6923585
AFAIK it is going to be included in b134a as well.
On Mar 27, 2010, at 22:26, Russ Price wrote:
I have two 500 GB drives on my system that are attached to
Awesome - thank you to all who responded with both the autoexpand and
import/export suggestions! I will try it out!
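For anyone finding this thread in the archives, the two suggestions amount to roughly the following (the pool name "tank" is just a placeholder; autoexpand needs a build recent enough to have the property, and the bug mentioned above may prevent expansion on affected builds):

   # enable automatic expansion once every disk in a vdev has grown
   zpool set autoexpand=on tank

   # alternatively, force ZFS to re-read the device sizes
   zpool export tank
   zpool import tank

   # check whether the extra capacity showed up
   zpool list tank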
What I don't understand, then, is why I can do this fairly frequently without
any delays on my 2009.06 and S10 systems. I have a three-disk mirror at home,
one disk in an eSATA dock. Sometimes I don't turn on the dock, and the system
boots just as quickly. Likewise, I've done this with two-disk
On 03/28/10 04:18 PM, Tim Cook wrote:
Sounds exactly like the behavior people have seen previously while a
system is trying to recover a pool with a faulted drive. I'll have to
check and see if I can dig up one of those old threads. I vaguely
recall someone here had a single drive fail on a
On Sat, Mar 27, 2010 at 10:03 PM, William Bauer wrote:
> Depends on a lot of things. I'd let it sit for at least half an hour to
> see if you get any messages. 30 seconds, if it's waiting for the driver
> stack timeouts, is way too short.
> -
>
> I'm not the OP, but
Depends on a lot of things. I'd let it sit for at least half an hour to see if
you get any messages. 30 seconds, if it's waiting for the driver stack
timeouts, is way too short.
-
I'm not the OP, but I've let my VB guest sit for an hour now, and nothing new
has happened.
Good idea (importing from a LiveCD). I just did this, and it imported without
any unusual complaint, except for the usual "DEGRADED" state because a member
is missing.
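For the archives, a rough sketch of the LiveCD import (assuming the usual root pool name "rpool"; the -f is only needed if the pool still looks in use by another host):

   # from the LiveCD, list importable pools
   zpool import

   # import under an alternate root so it doesn't collide with the live environment
   zpool import -f -R /mnt rpool

   # expect DEGRADED while one half of the mirror is missing
   zpool status rpool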
Also, for whatever this is worth, I noticed that v134 now shows the mirror (or
the first mirror) as "mirror-0" instead of just "mirror".
On Mar 27, 2010, at 3:14 PM, Nick wrote:
> I thought I had read somewhere that zpools in ZFS will automatically resize
> (expand) when larger disks are detected to a point where it is feasible to
> expand. To this end, I have a four-drive zpool using RAIDZ(1). I upgraded
> all four of the drives from 500GB to 1TB, but haven't seen any expansion.
On Sat, Mar 27, 2010 at 7:57 PM, William Bauer wrote:
> Posted this reply in the help forum, copying it here:
>
> I frequently use mirrors to replace disks, or even as a backup with an
> esata dock. So I set up v134 with a mirror in VB, ran installgrub, then
> detached each drive in turn. I completely duplicated and can confirm your problem.
Posted this reply in the help forum, copying it here:
I frequently use mirrors to replace disks, or even as a backup with an esata
dock. So I set up v134 with a mirror in VB, ran installgrub, then detached each
drive in turn. I completely duplicated and can confirm your problem, and since
I'm q
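For reference, the mirror-as-backup workflow being described is roughly the following (pool and device names are placeholders):

   # attach the backup disk to the root pool and let it resilver
   zpool attach rpool c0t1d0s0 c0t0d0s0
   zpool status rpool          # wait until resilvering completes

   # make the new half bootable
   installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0

   # detach it again to take it off-site
   zpool detach rpool c0t0d0s0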
On Sat, Mar 27, 2010 at 18:50, Bob Friesenhahn wrote:
> On Sat, 27 Mar 2010, Harry Putnam wrote:
>
>>
>> So it's not a serious matter? Or maybe more of a potentially serious
>> matter?
>>
>
> It is difficult to say if this is a serious matter or not. It should not
> have happened. The severity
On Sat, 27 Mar 2010, Harry Putnam wrote:
So it's not a serious matter? Or maybe more of a potentially serious
matter?
It is difficult to say if this is a serious matter or not. It should
not have happened. The severity depends on the cause of the problem
(which may be difficult to figure out).
On Oct 2, 2009, at 11:54 AM, Robert Milkowski wrote:
> Stuart Anderson wrote:
>>
>> On Oct 2, 2009, at 5:05 AM, Robert Milkowski wrote:
>>
>>> Stuart Anderson wrote:
I am wondering if the following idea makes any sense as a way to get ZFS
to cache compressed data in DRAM?
I thought I had read somewhere that zpools in ZFS will automatically resize
(expand) when larger disks are detected to a point where it is feasible to
expand. To this end, I have a four-drive zpool using RAIDZ(1). I upgraded all
four of the drives from 500GB to 1TB, but haven't seen any expansion.
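A quick way to check, assuming a build recent enough to have these options and a placeholder pool name of "tank":

   # is automatic expansion enabled?
   zpool get autoexpand tank

   # with autoexpand off, each replaced disk can be expanded by hand
   zpool online -e tank c0t0d0

   # SIZE should grow once every disk in the raidz vdev has been expanded
   zpool list tank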
On 03/28/10 10:02 AM, Harry Putnam wrote:
Bob Friesenhahn writes:
On Sat, 27 Mar 2010, Harry Putnam wrote:
What to do with a status report like the one included below?
What does it mean to have an unrecoverable error but no data errors?
I think that this summary means tha
On Sat, Mar 27, 2010 at 6:02 PM, Harry Putnam wrote:
> Bob Friesenhahn writes:
>
> > On Sat, 27 Mar 2010, Harry Putnam wrote:
> >
> >> What to do with a status report like the one included below?
> >>
> >> What does it mean to have an unrecoverable error but no data errors?
> >
> > I think that
Bob Friesenhahn writes:
> On Sat, 27 Mar 2010, Harry Putnam wrote:
>
>> What to do with a status report like the one included below?
>>
>> What does it mean to have an unrecoverable error but no data errors?
>
> I think that this summary means that the zfs scrub did not encounter
> any reported r
On Sat, Mar 27, 2010 at 2:45 PM, Russ Price wrote:
> > What build? How long have you waited for the boot? It
> > almost sounds to me like it's waiting for the
> > drive and hasn't timed out before you give up and
> > power it off.
>
> I waited about three minutes. This is a b134 installation.
>
> What build? How long have you waited for the boot? It
> almost sounds to me like it's waiting for the
> drive and hasn't timed out before you give up and
> power it off.
I waited about three minutes. This is a b134 installation.
On one of my tests, I tried shoving the removed mirror into the
On Sat, Mar 27, 2010 at 2:26 PM, Russ Price wrote:
> I have two 500 GB drives on my system that are attached to built-in SATA
> ports on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down
> the system, remove either drive, and then try to boot the system, it will
> fail to boot. I
I have two 500 GB drives on my system that are attached to built-in SATA ports
on my Asus M4A785-M motherboard, running in AHCI mode. If I shut down the
system, remove either drive, and then try to boot the system, it will fail to
boot. If I disable the splash screen, I find that it will display
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Reboot
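For the archives, the sequence being described is roughly (the second pool name is arbitrary):

   # detach c0t0d0s0 from the mirror and turn it into its own pool
   zpool split rpool rpool2 c0t0d0s0

   # the new pool is left exported by default; import it to use it
   zpool import rpool2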
On Mar 27, 2010, at 2:41 AM, Daniel Carosone wrote:
> On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:
>
>> You can't share a device (either as ZIL or L2ARC) between multiple pools.
>
> Discussion here some weeks ago suggested that an L2ARC device
> was used for all ARC evic
Hi,
I have a setup with thousands of filesystems, each containing several
snapshots. For a good percentage of these filesystems I want to create
a snapshot once every hour, for others once every 2 hours, and so forth.
I built some tools to do this, no problem so far.
While examining disk load on
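A minimal sketch of the kind of tool meant here, assuming a hypothetical per-filesystem user property (com.example:snapinterval) marks how often each filesystem should be snapshotted:

   #!/bin/sh
   # snapshot every filesystem whose interval property matches the one
   # this run was invoked for (e.g. "1h" from an hourly cron job)
   interval="$1"
   now=`date +%Y%m%d-%H%M`
   zfs list -H -o name,com.example:snapinterval -t filesystem | \
   while read fs want; do
       [ "$want" = "$interval" ] && zfs snapshot "$fs@auto-$now"
   done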
On Sat, 27 Mar 2010, Harry Putnam wrote:
What to do with a status report like the one included below?
What does it mean to have an unrecoverable error but no data errors?
I think that this summary means that the zfs scrub did not encounter
any reported read/write errors from the disks, but o
On Fri, 26 Mar 2010, Erik Trimble wrote:
It will attempt to balance the data across the two vdevs (the mirror and
raidz) until it runs out of space on one (in your case, the mirror pair).
ZFS does not currently understand differences in underlying hardware
performance or vdev layout, so it can'
On Sat, 27 Mar 2010, Daniel Carosone wrote:
On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:
You can't share a device (either as ZIL or L2ARC) between multiple pools.
Discussion here some weeks ago suggested that an L2ARC device
was used for all ARC evictions, regardless
Eric,
Thanks for your input; this has been a great learning experience for me on the
workings of ZFS. I will use your suggestion and create the metadevice and run
raidz across 5 "devices" for approximately the same total storage.
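A rough sketch of what that might look like with SVM (all device names are made up, and the exact layout depends on Eric's earlier suggestion, which isn't quoted here):

   # SVM needs a state database first (the slice is a placeholder)
   metadb -a -f c2t0d0s7

   # concatenate the two smaller disks into a single metadevice
   metainit d10 2 1 c2t0d0s0 1 c2t1d0s0

   # build the raidz across four whole disks plus the metadevice
   zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 /dev/md/dsk/d10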
What to do with a status report like the one included below?
What does it mean to have an unrecoverable error but no data errors?
Is it just a matter of `clearing' this device? But what would have
prompted such a report then?
Also note the numeral 7 in the CKSUM column for device c3d1s0. What
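For what it's worth, a sketch of how this is usually investigated (the pool name is a placeholder; c3d1s0 is the device from the status output):

   # list any files affected by the errors
   zpool status -v tank

   # if the errors look one-off, clear the counters and verify with a scrub
   zpool clear tank c3d1s0
   zpool scrub tank
   zpool status tank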
On Mar 26, 2010, at 9:26 PM, Richard Elling wrote:
> On Mar 25, 2010, at 7:25 PM, antst wrote:
>
>> I have two storages, both on snv133. Both filled with 1TB drives.
>> 1) stripe over two raidz vdevs, 7 disks in each. In total available size is
>> (7-1)*2=12TB
>> 2) zfs pool over HW raid, also
I'm not entirely convinced there is no problem here. I had a WD EADS
1.5TB die; the warranty replacement drive was an EARS. So, first foray into
4k sectors.
I had 8x EADS in a raidz set, and had replaced the broken one with a 1.5TB
Seagate 7200rpm - which was obviously faster.
Just replacing back,
On 27.03.2010 11:01, Daniel Carosone wrote:
> On Sat, Mar 27, 2010 at 08:47:26PM +1100, Daniel Carosone wrote:
>> On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote:
>>> not sure if 32bit BSD supports 48bit LBA
>>
>> Solaris is the only otherwise-modern OS with this daft limitation.
>
>
On Sat, Mar 27, 2010 at 08:47:26PM +1100, Daniel Carosone wrote:
> On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote:
> > not sure if 32bit BSD supports 48bit LBA
>
> Solaris is the only otherwise-modern OS with this daft limitation.
OK, it's not due to LBA48, but the 1TB limitation i
On Fri, Mar 26, 2010 at 05:57:31PM -0700, Darren Mackay wrote:
> not sure if 32bit BSD supports 48bit LBA
Solaris is the only otherwise-modern OS with this daft limitation.
--
Dan.
On Sat, Mar 27, 2010 at 01:03:39AM -0700, Erik Trimble wrote:
> You can't share a device (either as ZIL or L2ARC) between multiple pools.
Discussion here some weeks ago suggested that an L2ARC device
was used for all ARC evictions, regardless of the pool.
I'd very much like an authorita
> It would be nice if the 32bit osol kernel support
> 48bit LBA
It has already been supported for many years (otherwise
disks with a capacity >= 128GB could not be
used with Solaris) ...
> (similar to linux, not sure if 32bit BSD
> supports 48bit LBA ), then the drive would probably
> work - perhaps late
On 03/26/10 12:16 AM, Bruno Sousa wrote:
Well... I'm pretty much certain that I faced something similar at my job.
We had a server with 2 raidz2 groups, each with 3 drives, and one drive
failed and was replaced by a hot spare. However, the balance of data
between the 2 raidz2 groups started to be
Muhammed Syyid wrote:
Which is why I was looking to set up
1x8 raidz2 as pool1
and
1x8 raidz2 as pool2
instead of as two vdevs under 1 pool. That way I can have 'some' flexibility
where I could take down pool1 or pool2 without affecting the other.
The issue I had was how do I set up an L2ARC f
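A sketch of that layout, assuming an L2ARC device really can't be shared between pools (all device names are placeholders); each pool simply gets its own cache device:

   zpool create pool1 raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
       c1t4d0 c1t5d0 c1t6d0 c1t7d0 cache c3t0d0
   zpool create pool2 raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
       c2t4d0 c2t5d0 c2t6d0 c2t7d0 cache c3t1d0

   # (a cache device can also be added to an existing pool later with
   #  "zpool add <pool> cache <device>")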
On 03/27/10 08:14 PM, Svein Skogen wrote:
On 26.03.2010 23:55, Ian Collins wrote:
On 03/27/10 09:39 AM, Richard Elling wrote:
On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
Hi,
The jumbo-frames in my case give me a boost of around 2 mb/s, so it's
not that much.
On 26.03.2010 23:55, Ian Collins wrote:
> On 03/27/10 09:39 AM, Richard Elling wrote:
>> On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
>>
>>> Hi,
>>>
>>> The jumbo-frames in my case give me a boost of around 2 mb/s, so it's
>>> not that much.
>>>