Dear all
We ran into a nasty problem the other day. One of our mirrored zpools
hosts several ZFS filesystems. After a reboot (all FS mounted at that
time and in use) the machine panicked (console output further down). After
detaching one of the mirrors the pool fortunately imported automatically
in a
Thomas Burgess wrote:
Yeah, this is what I was thinking too...
Is there any way to retain snapshot data this way? I've read about the
ZFS replay/mirror features, but my impression was that this was more
for a development mirror for testing ra
Thomas Nau wrote:
Dear all
We ran into a nasty problem the other day. One of our mirrored zpools
hosts several ZFS filesystems. After a reboot (all FS mounted at that
time and in use) the machine panicked (console output further down). After
detaching one of the mirrors the pool fortunately importe
Thanks for the link Arne.
On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpools
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time and in use) the machine panicked (console
On Jun 8, 2010, at 12:46 PM, Miles Nordin wrote:
>> "re" == Richard Elling writes:
>
>re> Please don't confuse Ethernet with IP.
>
> okay, but I'm not. seriously, if you'll look into it.
[fine whine elided]
I think we can agree that the perfect network has yet to be invented :-)
Meanw
I found a thread that mentions an undocumented parameter -V
(http://opensolaris.org/jive/thread.jspa?messageID=444810) and that did the
trick!
The pool is now online and seems to be working well.
Thanks everyone who helped!
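For anyone finding this via the archives: the invocation described in the linked
thread is roughly the one below. The pool name is only a placeholder, and -V is
undocumented, so treat it as a last resort rather than a routine option.

  # zpool import -V tank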
Arne,
On 06/13/2010 03:57 PM, Arne Jansen wrote:
> Thomas Nau wrote:
>> Dear all
>>
>> We ran into a nasty problem the other day. One of our mirrored zpools
>> hosts several ZFS filesystems. After a reboot (all FS mounted at that
>> time and in use) the machine panicked (console output further down).
On Jun 13, 2010, at 8:09 AM, Jan Hellevik wrote:
> I found a thread that mentions an undocumented parameter -V
> (http://opensolaris.org/jive/thread.jspa?messageID=444810) and that did the
> trick!
>
> The pool is now online and seems to be working well.
-V is a crutch, not a cure.
-- richard
Well, for me it was a cure. Nothing else I tried got the pool back. As far as I
can tell, the way to get it back should be to use symlinks to the fdisk
partitions on my SSD, but that did not work for me. Using -V got the pool back.
What is wrong with that?
If you have a better suggestion as to
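The symlink approach mentioned above usually means pointing the importer at a
directory of device links instead of letting it scan the default /dev/dsk. A rough
sketch, with purely illustrative device and pool names:

  # mkdir /var/tmp/import
  # ln -s /dev/dsk/c7t0d0p1 /var/tmp/import/
  # zpool import -d /var/tmp/import mypool

The -d directory option is documented; whether it actually finds the labels on
fdisk partitions depends on how the pool was created, which may be why it failed
here.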
On 6/13/2010 11:14 AM, Jan Hellevik wrote:
Well, for me it was a cure. Nothing else I tried got the pool back. As far as I
can tell, the way to get it back should be to use symlinks to the fdisk
partitions on my SSD, but that did not work for me. Using -V got the pool back.
What is wrong with
On Jun 13, 2010, at 12:38 PM, Erik Trimble wrote:
> On 6/13/2010 11:14 AM, Jan Hellevik wrote:
>> Well, for me it was a cure. Nothing else I tried got the pool back. As far
>> as I can tell, the way to get it back should be to use symlinks to the fdisk
>> partitions on my SSD, but that did not
Thank you. The -D option works.
And yes, now I feel a lot more confident about playing around with the FS. I'm
planning on moving an existing RAID-1 NTFS setup to ZFS, but since I'm on a
budget I only have three drives in total to work with. I want to make sure I
know what I'm doing before I mess
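Assuming the -D in question is zpool import's flag, it is the documented way to
find and bring back a pool removed with "zpool destroy". The pool name below is a
placeholder:

  # zpool import -D           (lists destroyed pools that are still importable)
  # zpool import -D tank      (imports one of them)

This only works as long as the underlying devices have not been reused or
overwritten.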
Hi Guys
I am having trouble installing OpenSolaris 2009.06 on my Biostar Tpower I45
motherboard, which is approved on the BigAdmin HCL here:
http://www.sun.com/bigadmin/hcl/data/systems/details/26409.html -- why is it
not working?
My setup:
3x 1TB SATA hard drives
1x 500GB hard drive (I have only left
Hello, I tried enabling dedup on a filesystem, and moved files into it to take
advantage of it. I had about 700GB of files and left it for some hours. When I
returned, only 70GB were moved.
I checked zpool iostat, and it showed about 8MB/s R/W performance (the old and
new zfs filesystems are in
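When a deduped pool crawls like this, the usual suspect is a dedup table that no
longer fits in RAM. A quick way to check, with the pool name as a placeholder:

  # zdb -DD tank              (DDT histogram plus in-core and on-disk entry sizes)
  # zpool list -o name,size,allocated,dedupratio tank

If the table is much larger than the available ARC, each write has to do random
reads just to look up checksums, which matches single-digit MB/s throughput.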
Hernan F wrote:
Hello, I tried enabling dedup on a filesystem, and moved files into it to take
advantage of it. I had about 700GB of files and left it for some hours. When I
returned, only 70GB were moved.
I checked zpool iostat, and it showed about 8MB/s R/W performance (the old and
new zfs
Howdy all,
I too dabbled with dedup and found the performance poor with only 4GB of RAM. I've
since disabled dedup and find the performance better, but "zpool list" still
shows a 1.15x dedup ratio. Is this still a hit on disk I/O performance? Aside
from copying the data off and back onto the filesys
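As I understand it, the lingering ratio just reflects blocks that were written
while dedup was on; they stay deduplicated, and their DDT entries stay on disk,
until those blocks are freed or rewritten, so copying the data off and back is
indeed the way to clear it completely. Checking the current state is cheap (names
below are placeholders):

  # zpool list -o name,dedupratio tank
  # zfs get dedup tank/data           (confirms new writes are no longer deduped)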
Erik is right, more below...
On Jun 13, 2010, at 10:17 PM, Erik Trimble wrote:
> Hernan F wrote:
>> Hello, I tried enabling dedup on a filesystem, and moved files into it to
>> take advantage of it. I had about 700GB of files and left it for some hours.
>> When I returned, only 70GB were moved.