Dmitry Sorokin wrote:
Thanks for the update Robert.
Currently I have a failed zpool with its slog missing, which I was unable to
recover, although I was able to find out what the GUID was for the slog
device (below is the output of the zpool import command).
I couldn't compile the logfix binary either, so I ran out of ideas.
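For anyone searching the archives later, a rough sketch of the recovery path for a pool with a missing slog, assuming the pool is called tank and the build is new enough to support importing with a missing log device (the -m option); the real output identifies the absent log vdev only by its GUID:

# List importable pools; the missing slog shows up as an UNAVAIL log
# vdev identified by its GUID.
zpool import

# On builds that support it, import anyway and discard the missing log
# (any ZIL records that lived only on the slog are lost):
zpool import -m tank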
On Jul 29, 2010, at 6:04 PM, Carol wrote:
> Richard,
>
> I disconnected all but one path and disabled mpxio via stmsboot -d and my
> read performance doubled. I saw about 100MBps average from the pool.
This is a start. Something is certainly fishy in the data paths, but
it is proving to be d
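For anyone following along, the commands involved are roughly these (stmsboot changes take effect after the reboot it prompts for):

stmsboot -d      # disable MPxIO multipathing
stmsboot -e      # re-enable it later if needed
stmsboot -L      # show how device names map with and without MPxIO

# Watch per-device latency while testing; high asvc_t with modest %b
# points at the path or expander rather than the disks themselves:
iostat -xn 5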
fyi
Original Message
Subject: Read-only ZFS pools [PSARC/2010/306 FastTrack timeout 08/06/2010]
Date: Fri, 30 Jul 2010 14:08:38 -0600
From: Tim Haley
To: psarc-...@sun.com
CC: zfs-t...@sun.com
I am sponsoring the following fast-track for George Wilson.
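The case materials aren't reproduced in this digest, but the user-visible piece of read-only pools is an import-time property, roughly:

# Import a suspect pool without allowing any writes to it:
zpool import -o readonly=on tank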
Thanks, that seems to have done it.
I exported with the -f option and imported it again, it is now resilvering.
...Oh, it's finished resilvering, that was quick!
It has shown 4 checksum errors, so I'm doing a scrub now, but it seems to be
working.
Thanks for helping me out!
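For the archives, the sequence described above amounts to something like this (pool name assumed to be tank):

zpool export -f tank     # force the export
zpool import tank        # re-import; the resilver resumes on its own
zpool status -v tank     # watch resilver progress and checksum counts
zpool scrub tank         # verify all data once the resilver completes
zpool clear tank         # reset the error counters if the scrub is clean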
Just wondering if anyone has experimented with working out the best zvol
recordsize for a zvol which is backing a zpool over iSCSI?
--
Andrew Gabriel
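One clarification while the question is open: for a zvol the property is volblocksize rather than recordsize, and it can only be set when the zvol is created. A minimal example, with the size and block size picked arbitrarily:

# 8K volblocksize as a starting point for a zvol that will back
# another pool over iSCSI; -s makes it sparse:
zfs create -s -V 100G -o volblocksize=8K tank/iscsivol0
zfs get volblocksize,volsize tank/iscsivol0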
On Thu, Jul 29, 2010 at 6:04 PM, Carol wrote:
> Richard,
>
> I disconnected all but one path and disabled mpxio via stmsboot -d and my
> read performance doubled. I saw about 100MBps average from the pool.
>
> BTW, single hard drive performance (a single disk in a pool) is about 140MBps.
>
> What do you think?
Richard,
I disconnected all but one path and disabled mpxio via stmsboot -d and my read
performance doubled. I saw about 100MBps average from the pool.
BTW, single hard drive performance (a single disk in a pool) is about 140MBps.
What do you think?
Thank you again for your help!
> Yep. With round robin it's about 80 for each disk for asvc_t
Any ideas?
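For anyone unfamiliar with the figure being quoted, asvc_t is the active service time column (in milliseconds) from extended iostat output, e.g.:

# 5-second samples; asvc_t around 80 ms per disk during a streaming
# read is far higher than a healthy direct-attached disk should show.
iostat -xn 5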
Thank you James, exactly the answer I needed.
Regards,
Mark
On Jul 29, 2010 3:05pm, James Dickens wrote:
On Thu, Jul 29, 2010 at 11:50 AM, Mark <white...@gmail.com> wrote:
I'm trying to understand how snapshots work in terms of how I can use
them for recovering and/or duplicating virtual
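The usual building blocks for recovering or duplicating VM storage with snapshots look roughly like this (dataset names are invented):

zfs snapshot tank/vm/guest1@clean               # point-in-time copy, instant
zfs rollback tank/vm/guest1@clean               # recover: revert the dataset
zfs clone tank/vm/guest1@clean tank/vm/guest2   # duplicate: writable clone
zfs send tank/vm/guest1@clean | ssh host2 zfs receive backup/guest1
                                                # replicate to another machine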
Yes I noticed that thread a while back and have been doing a great deal of
testing with various scsi_vhci options.
I am disappointed that the thread hasn't moved further, since I also suspect
that it is mpt_sas-, multipath-, or expander-related.
I was able to get aggregate writes up t
Hi Gary,
I will file a bug to track the zfs upgrade/device busy problem.
We use beadm or lucreate to upgrade the root BE so we generally don't
have to do an in-place root dataset replacement.
Thanks,
Cindy
On 07/29/10 17:03, Gary Mills wrote:
On Thu, Jul 29, 2010 at 10:26:14PM +0200, Pawel J
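For reference, one way the approach Cindy describes looks on an IPS-based system; the upgrade lands in a clone of the root BE rather than replacing the live root dataset in place:

pkg image-update   # clones the active boot environment, upgrades the clone
beadm list         # the new BE is marked active on reboot
init 6             # boot into the upgraded BE; the old root stays untouched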
I was mistaken below. I see that the ls -dv was issued from the
2 directory. We have no idea what's going on here. It works
as expected in my tests.
If you identify steps that lead up to this or can reproduce it
and can provide the Solaris release, please let us know.
Thanks,
Cindy
On 07/29/10
In general, ZFS can detect device changes but we recommend
exporting the pool before you move hardware around.
You might try exporting and importing this pool to see if
ZFS recognizes this device again.
Make sure you have a good backup of this data before you
export it because it's hard to tell i
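In command form, the advice amounts to roughly this, with pool and snapshot names made up for the example:

# Back up first: a recursive snapshot streamed somewhere off the pool.
zfs snapshot -r tank@pre-move
zfs send -R tank@pre-move > /net/backuphost/dumps/tank-pre-move.zfs

# Then export before touching the hardware, and re-import afterwards:
zpool export tank
zpool import             # scan; ZFS finds the vdevs under their new names
zpool import tank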
I'm about to do some testing with that dtrace script..
However, in the meantime - I've disabled primarycache (set primarycache=none)
since I noticed that it was easily caching /dev/zero and I wanted to do some
tests within the OS rather than over FC.
I am getting the same results through dd.
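For anyone reproducing the test, the relevant pieces look roughly like this (dataset and file names are placeholders):

# Keep the ARC from caching file data so dd measures disk, not memory:
zfs set primarycache=none tank/test

# Use incompressible data rather than /dev/zero for the test file:
dd if=/dev/urandom of=/tank/test/bigfile bs=1024k count=8192
dd if=/tank/test/bigfile of=/dev/null bs=1024k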
I had the same problem after disabling multipath and some of my device names
having changed. I performed a replace -f, then noticed that the pool was
resilvering. Once it finished, it displayed the new device name, if I recall
correctly.
I could be wrong, but that's how I remember it.
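In command form that is roughly the following, with device names invented for the example:

# Tell ZFS that the vdev previously known as c3t2d0 now answers to the
# new (non-MPxIO) name; a resilver kicks off and the label is updated:
zpool replace -f tank c3t2d0 c5t0d0
zpool status tank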
Yes, because the author was too smart for his own good: ssd is for SPARC; you
use sd. Delete all the ssd lines. Here's that script, which will work for you
provided it doesn't get wrapped or otherwise maligned by this HTML interface:
#!/usr/sbin/dtrace -s
#pragma D option quiet
fbt:sd:sdstrat
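That script is truncated above. As a stand-in, a small script against the stable io provider gives a per-device latency distribution without caring whether the disks sit under sd or ssd; the probe choice and output format here are just one reasonable way to do it:

# Per-device I/O latency histograms, printed every 10 seconds.
dtrace -qn '
    io:::start            { ts[arg0] = timestamp; }   /* stamp each buf */
    io:::done /ts[arg0]/  {
        @lat[args[1]->dev_statname] =
            quantize(timestamp - ts[arg0]);           /* nanoseconds */
        ts[arg0] = 0;
    }
    tick-10s { printa(@lat); trunc(@lat); }'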
> You should look at your disk IO patterns which will likely lead you to
> find unset IO queues in sd.conf. Look at
> http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to start.
Any idea why I would get this message from the dtrace script?
(I'm new to dtrace / open
Good idea.
I will keep this test in mind - I'd do it immediately except for the fact that
it would be somewhat difficult to connect power to the drives considering the
design of my chassis, but I'm sure I can figure something out if it comes to
it...
I believe I'm in a very similar situation to yours.
Have you figured something out?
You should look at your disk IO patterns which will likely lead you to find
unset IO queues in sd.conf. Look at this
http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io as a place to
start. The parameter you can try to set globally (a bad idea) is set by doing
echo zfs_vdev_max_pending/W
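The tail of that one-liner is cut off above; the usual full form pipes it into mdb in kernel-write mode, with the value (decimal 10 here) purely as an example:

# Lower the per-vdev queue depth on the running kernel (global, immediate):
echo zfs_vdev_max_pending/W0t10 | mdb -kw

# Check the current value:
echo zfs_vdev_max_pending/D | mdb -k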
Hello Karol,
you wrote on 29 July 2010, 02:23:
> I appear to be getting between 2-9MB/s reads from individual disks
It sounds to me like you have a hardware failure, because 2-9 MB/s is far
below what healthy disks deliver.
> 2x LSI 9200-8e SAS HBAs (2008 chipset)
> Supermicro 846e2 enclosure with LSI sasx36 e