Ian Collins (i...@ianshome.com) wrote:
> On 07/18/10 11:19 AM, marco wrote:
>> *snip*
>>
>>
> Yes, that is correct. zfs list reports usable space, which is two of the
> three drives' worth (parity isn't confined to one device).
>
>> *snip*
>>
>>
> Are you sure? That result looks odd. It is w
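To make that concrete, a 3 x 2 TB raidz shows roughly this (pool name and the
exact figures are just illustrative):

  # zpool list tank
  NAME   SIZE  ...
  tank  5.44T  ...

  # zfs list tank
  NAME  USED  AVAIL  ...
  tank   96K  3.56T  ...

zpool list counts the raw capacity of all three devices, while zfs list
subtracts the parity overhead, so it reports about two drives' worth.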
On 07/18/10 11:19 AM, marco wrote:
I'm seeing weird differences between two raidz pools, one created on a recent
freebsd 9.0-CURRENT amd64 box containing the zfs v15 bits, the other on an old
osol build.
The raidz pool on the fbsd box is created from three 2 TB SATA drives.
The raidz pool on the osol box
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Cindy Swearingen
>
> Hi Ned,
>
> One of the benefits of using a mirrored ZFS configuration is just
> replacing each disk with a larger disk, in place, online, and so on...
Yes, the autoexpan
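For the archives, the replacement dance is roughly this (device names are made
up, and the resilver has to finish after each swap):

  # zpool set autoexpand=on rpool
  # zpool replace rpool c0t1d0 c0t5d0    # swap in the larger disk
  # zpool status rpool                   # wait for the resilver to complete
  # zpool list rpool                     # extra space appears once every disk in the vdev is replaced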
I'm seeing weird differences between two raidz pools, one created on a recent
freebsd 9.0-CURRENT amd64 box containing the zfs v15 bits, the other on an old
osol build.
The raidz pool on the fbsd box is created from three 2 TB SATA drives.
The raidz pool on the osol box was created in the past from 3 smalle
On Sat, Jul 17, 2010 at 3:07 PM, Amit Kulkarni wrote:
> I don't know if the devices are renumbered. How do you know if the devices
> have changed?
>
> Here is the output of format; the middle one is the boot drive and selections
> 0 & 2 are the ZFS mirror disks
>
> AVAILABLE DISK SELECTIONS:
> 0. c8t
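One way to check whether the names really moved (pool and device names below
are just placeholders):

  # zpool status -v mypool          # the device paths the pool expects
  # format </dev/null               # the disks as the OS enumerates them now
  # zdb -l /dev/rdsk/c8t0d0s0       # read the ZFS labels on a disk to see which pool/guid it belongs to

ZFS tracks its disks by the on-disk labels rather than by controller number,
so a renumbered disk will usually still import; comparing these outputs shows
whether a c8t... name now points at a different physical drive.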
> I did a zpool status and it gave me a ZFS-8000-3C error,
> saying my pool is unavailable. Since I am able to boot and
> access a browser, I tried a zpool import without arguments,
> then tried to export my pool, and did more fiddling. Now I can't
> get zpool status to show my pool.
>
> > vdev_path = /de
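For what it's worth, the first things I would try from the working boot
(the pool name is a guess):

  # zpool import              # with no arguments, lists pools that are visible but not imported
  # zpool import -f mypool    # force the import if the pool shows up in that list
  # fmdump -v                 # the fault events behind the ZFS-8000-3C diagnosis
  # fmdump -eV | tail -100    # the raw error reports if more detail is needed

If I remember right, ZFS-8000-3C is the "device could not be opened and there
are insufficient replicas" case, so fmdump usually points at which disk went
missing.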
Hi Ned,
One of the benefits of using a mirrored ZFS configuration is just replacing
each disk with a larger disk, in place, online, and so on...
It's probably easiest to use zfs send -R (recursive) to send a recursive snapshot
of your root pool.
Check out the steps here:
http://www.solarisinter
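A rough sketch of that (the snapshot name and backup location are made up):

  # zfs snapshot -r rpool@backup
  # zfs send -R rpool@backup | gzip > /net/backupserver/rpool.backup.gz

The -R stream carries the descendant file systems, their snapshots and their
properties, which is what you want for a root pool.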
On Sat, Jul 17, 2010 at 10:55 AM, Amit Kulkarni wrote:
> I did a zpool status and it gave me a ZFS-8000-3C error, saying my pool is
> unavailable. Since I am able to boot and access a browser, I tried a zpool import
> without arguments, then tried to export my pool, and did more fiddling. Now I can't
> get
On 17-7-2010 15:49, Bob Friesenhahn wrote:
> On Sat, 17 Jul 2010, Bruno Sousa wrote:
>> Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16
>> Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1
>> Jul 15 12:30:48 storage01 DESC: The command was terminated with a
>> non-recovered
On Sat, Jul 17, 2010 at 10:49 AM, Bob Friesenhahn wrote:
> On Sat, 17 Jul 2010, Bruno Sousa wrote:
>>
>> Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16
>> Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1
>> Jul 15 12:30:48 storage01 DESC: The command was terminated with a
I believe I know enough to figure this out on my own, but there's usually
some little "gotcha" that you don't think of until you hit it. I'm just
betting that Cindy already has a procedure written for just this purpose.
;-)
In general, if you've been good about backing up your rpool via "zfs s
On Sat, 17 Jul 2010, Bruno Sousa wrote:
Jul 15 12:30:48 storage01 SOURCE: eft, REV: 1.16
Jul 15 12:30:48 storage01 EVENT-ID: 859b9d9c-1214-4302-8089-b9447619a2a1
Jul 15 12:30:48 storage01 DESC: The command was terminated with a
non-recovered error condition that may have been caused by a flaw in
Hello,
I have a dual boot with Windows 7 64-bit Enterprise edition and OpenSolaris
build 134. This is on a Sun Ultra 40 M1 workstation. Three hard drives: two in a
ZFS mirror, one shared with Windows.
For the last two days I was working in Windows. I didn't touch the hard drives in
any way except I once open
Hi all,
Today I noticed that one of the ZFS-based servers within my company is
complaining about disk errors, but I would like to know whether this is a real
physical error or something like a transport error.
The server in question runs snv_134 attached to 2 J4400 JBODs, and the
head-node ha
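A few commands that usually help tell a media problem from a transport problem
(the disk name below is just an example):

  # iostat -En c7t12d0     # per-device Soft/Hard/Transport error counters
  # fmdump -eV | less      # the raw FMA ereports behind the fault message
  # zpool status -v        # whether ZFS itself logged read/write/cksum errors on the vdev

Counters piling up under Transport Errors tend to point at cabling, the
expander or the HBA, while Hard Errors with a non-zero Media Error count point
at the disk itself.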