We have a number of Sun J4200 SAS JBOD arrays which we have multipathed using
Sun's MPxIO facility. While this is great for reliability, it results in the
/dev/dsk device IDs changing from cXtYd0 to something virtually unreadable like
"c4t5000C5000B21AC63d0s3".
Since the entries in /dev/{rdsk,d
To the best of my recollection, I only needed to run a zpool scrub and reboot, and
the disk became operational again.
The irony was that the error message was asking me to recover from backup, but
the disk involved was the backup of my working pool.
--
This message posted from opensolaris.org
On Mon, Nov 16, 2009 at 4:49 PM, Tim Cook wrote:
> Is your failmode set to wait?
Yes. This box has about ten prod zones and ten corresponding zpools
that sit on iSCSI targets on the filers. We can't panic the
whole box just because one {zone/zpool/iscsi target} fails. Are there
undocumente
On Nov 16, 2009, at 2:00 PM, Martin Vool wrote:
I already got my files back, actually, and the disk already contains
new pools, so I have no idea how it was set.
I have to make a virtualbox installation and test it.
Don't forget to change VirtualBox's default cache flush setting.
http://www.s
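(For anyone who hasn't seen it: the setting usually mentioned is VirtualBox's `IgnoreFlush` extradata key, which makes the virtual disk honor cache-flush requests instead of dropping them. A sketch, assuming an IDE-attached disk on the first controller and a VM named "osol-test" — both placeholders; adjust the device path for SATA:)

```shell
# Tell VirtualBox NOT to ignore flush requests from the guest (0 = honor them).
# Run while the VM is powered off; "osol-test" is a placeholder VM name.
VBoxManage setextradata "osol-test" \
  "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0
```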
On Mon, Nov 16, 2009 at 4:00 PM, Martin Vool wrote:
> I already got my files back, actually, and the disk already contains new
> pools, so I have no idea how it was set.
>
> I have to make a virtualbox installation and test it.
> Can you please tell me how-to set the failmode?
>
>
>
http://prefetch
Hi Daniel,
In some cases, when I/O is suspended, permanent errors are logged and
you need to run a zpool scrub to clear the errors.
Are you saying that a zpool scrub cleared the errors that were
displayed in the zpool status output? Or, did you also use zpool
clear?
Metadata is duplicated even
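(For anyone following the thread, the sequence under discussion is roughly the following; the pool name "tank" is a placeholder:)

```shell
# Re-read and verify every block in the pool; this revalidates data
# that was flagged while I/O was suspended.
zpool scrub tank

# Wait for the scrub to finish, then review any remaining errors.
zpool status -v tank

# If errors are still listed but the data verified fine,
# reset the pool's error counters.
zpool clear tank
```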
I already got my files back, actually, and the disk already contains new pools, so
I have no idea how it was set.
I have to make a virtualbox installation and test it.
Can you please tell me how to set the failmode?
On Mon, Nov 16, 2009 at 2:10 PM, Martin Vool wrote:
> I encountered the same problem... like I said in the first post... the zpool
> command freezes. Does anyone know how to make it respond again?
Is your failmode set to wait?
--Tim
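(For reference, failmode is a per-pool property with three values — wait, continue, panic. A sketch, with "tank" as a placeholder pool name:)

```shell
# Show the current failmode for the pool.
zpool get failmode tank

# Return EIO to applications on device loss instead of blocking
# all I/O (the default, failmode=wait) or panicking the host.
zpool set failmode=continue tank
```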
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
On Mon, Nov 16, 2009 at 12:09 PM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:
> On Sun, 15 Nov 2009, Tim Cook wrote:
>
>>
>> Once again I question why you're wasting your time with raid-z. You might
>> as well just stripe across all the drives. You're taking a performance
>> penalty f
On Mon, Nov 16, 2009 at 2:13 PM, Benoit Heroux wrote:
> Hi guys,
>
> I needed a quick install of OpenSolaris and I found:
> http://hub.opensolaris.org/bin/view/Project+jeos/200906+Prototype#HDownloads
>
> The footprint is splendid: around 275 megs, so it is small. But I have a
> question.
>
>
On Mon, 16 Nov 2009, daniel.rodriguez.delg...@gmail.com wrote:
Is this something common on USB disks? Would it get improved in
later versions of osol, or is it somewhat of an
incompatibility/unfriendliness of ZFS with external USB disks? --
Some USB disks seem to ignore cache sync requests, w
You can use VCB to do backups.
In my test lab, I use VCB integrated with Bacula to back up all the VMs.
Thanks Cindy,
In fact, after some research, I ran into the scrub suggestion and it worked
perfectly. Now I think that the automated message at
http://www.sun.com/msg/ZFS-8000-8A should mention a scrub as a
worthy attempt.
It was related to an external USB disk. I guess I am happy
I have no idea why this forum just makes files disappear??? I will post a link
tomorrow... a file was attached before...
Hi guys,
I needed a quick install of OpenSolaris and I found:
http://hub.opensolaris.org/bin/view/Project+jeos/200906+Prototype#HDownloads
The footprint is splendid: around 275 megs, so it is small. But I have a
question.
Why are the ZFS and zpool versions that old?
r...@osol-jeos:/var/www#
I encountered the same problem... like I said in the first post... the zpool
command freezes. Does anyone know how to make it respond again?
On Sun, 15 Nov 2009, Tim Cook wrote:
Once again I question why you're wasting your time with raid-z. You
might as well just stripe across all the drives. You're taking a
performance penalty for a setup that essentially has 0 redundancy.
You lose a 500GB drive, you lose everything.
Why do
Hi Daniel,
Unfortunately, the permanent errors are in this pool's metadata so it is
unlikely that this pool can be recovered.
Is this an external USB drive? These drives are not always well-behaved,
and it's possible that it didn't synchronize successfully.
Is the data accessible? I don't know
Steven Sim wrote:
Hello;
Dedup on ZFS is an absolutely wonderful feature!
Is there a way to conduct dedup replication across boxes from one dedup
ZFS data set to another?
Pass the '-D' argument to 'zfs send'.
--
Darren J Moffat
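(A minimal sketch of what that looks like end to end, assuming a snapshot exists and you have SSH access to the remote box; pool, dataset, and host names are placeholders:)

```shell
# Snapshot the dataset, then send it in deduplicated stream format (-D)
# to a pool on the remote host.
zfs snapshot tank/data@monday
zfs send -D tank/data@monday | ssh backuphost zfs receive pool2/data
```

Note that -D deduplicates blocks within the stream itself; whether the receiving dataset stores data deduplicated is governed by its own dedup property.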
Hello;
Dedup on ZFS is an absolutely wonderful feature!
Is there a way to conduct dedup replication across boxes from one dedup
ZFS data set to another?
Warmest Regards
Steven Sim
zpool for zone of customer-facing production appserver hung due to iscsi
transport errors. How can I {forcibly} reset this pool? zfs commands
are hanging and iscsiadm remove refuses.
r...@raadiku~[8]8:48# iscsiadm remove static-config
iqn.1986-03.com.sun:02:aef78e-955a-4072-c7f6-afe087723466
You might want to check out this thread:
http://opensolaris.org/jive/thread.jspa?messageID=435420
The links work fine if you take the * off the end... sorry about that
I forgot to add the script
[Attachment: zfs_revert.py]
I have written a Python script that makes it possible to get back already deleted
files and pools/partitions. This is highly experimental, but I managed to get back a
month's work when all the partitions were deleted by accident (and of course
backups are for the weak ;-)
I hope someone can pass this info