Simon Breden wrote:
> set sata:sata_max_queue_depth = 0x1
>
>
> Anyway, after adding the line above into /etc/system, I rebooted and then
> re-tried the copy with truss:
>
> truss cp -r testdir z4
>
> It seems to hang on random files -- so it's not a
Hi,
Great stuff.
Will this change make it into OpenSolaris? Looking at the actual code I
couldn't find the modification.
I tried replacing zdb.c in the OpenSolaris main tree before compiling with
nightly, but the compiler wasn't happy with it. Can you write down the right
options?
bbr
Thi
I don't think that this is a hardware issue, although I can't exclude it. I'll
try to explain why.
1. I've replaced all memory modules, which are the most likely cause of such a
problem.
2. There are many different applications running on that server (Apache,
PostgreSQL, etc.). However, if you look a
I'm trying to decode lzjb-compressed blocks and I'm having a hard time with
big/little endian. I'm on x86 working with build 77.
#zdb - ztest
...
rootbp = [L0 DMU objset] 400L/200P DVA[0]=<0:e0c98e00:200>
...
## zdb -R ztest:c0d1s4:e0c98e00:200:
Found vdev: /dev/dsk/c
Thanks for the quick reaction. I now have a working binary for my system.
I don't understand why these changes should go through a project. The hooks
are already there, so once the code is written not much work has to be done. But
that's another story. Let's decode lzjb blocks now :-)
bbr
This
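A note on the endianness worry: lzjb itself is byte-oriented, so the compressed
stream needs no byte-swapping; byte order only matters when you interpret the
block pointer fields (the DVA offsets and the 400L/200P logical/physical sizes)
that tell you where the block is and how big it is. Below is a minimal
standalone decompressor sketch, reconstructed from the published lzjb algorithm
rather than copied from usr/src/uts/common/fs/zfs/lzjb.c, so check it against
the real source before trusting its output:

/*
 * lzjb_dec.c -- minimal lzjb decompressor sketch (not the ON source).
 * MATCH_BITS/MATCH_MIN are the values I believe lzjb.c uses.
 */
#include <stddef.h>

#define NBBY            8
#define MATCH_BITS      6
#define MATCH_MIN       3
#define OFFSET_MASK     ((1 << (16 - MATCH_BITS)) - 1)

int
lzjb_decompress(const unsigned char *src, unsigned char *dst,
    size_t s_len, size_t d_len)
{
        const unsigned char *s_end = src + s_len;
        unsigned char *d_start = dst;
        unsigned char *d_end = dst + d_len;
        unsigned char copymap = 0;
        int copymask = 1 << (NBBY - 1);

        while (dst < d_end && src < s_end) {
                if ((copymask <<= 1) == (1 << NBBY)) {
                        copymask = 1;
                        copymap = *src++;       /* one control bit per item */
                }
                if (copymap & copymask) {
                        /* match: 6-bit length, 10-bit back-reference offset */
                        int mlen = (src[0] >> (NBBY - MATCH_BITS)) + MATCH_MIN;
                        int offset = ((src[0] << NBBY) | src[1]) & OFFSET_MASK;
                        unsigned char *cpy = dst - offset;
                        src += 2;
                        if (cpy < d_start)
                                return (-1);    /* corrupt stream */
                        while (--mlen >= 0 && dst < d_end)
                                *dst++ = *cpy++;
                } else {
                        *dst++ = *src++;        /* literal byte */
                }
        }
        return (0);
}

Feed it the 0x200 physical bytes that zdb -R returns and ask for 0x400 bytes
out; the 400L/200P in the rootbp line is the logical/physical size in hex.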
Hi Benjamin,
Benjamin Brumaire wrote:
> I'm trying to decode lzjb-compressed blocks and I'm having a hard time with
> big/little endian. I'm on x86 working with build 77.
>
> #zdb - ztest
> ...
> rootbp = [L0 DMU objset] 400L/200P DVA[0]=<0:e0c98e00:200>
> ...
>
> ## zdb -R ztest:c0d1
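In case it helps with the zdb -R step: as I understand it, the offset in
DVA[0]=<0:e0c98e00:200> is a byte offset relative to the start of the vdev's
data area, which begins 0x400000 bytes (two 256K labels plus the boot block)
into the slice, and zdb -R adds that for you. If you want to pull the raw bytes
yourself and push them through the decompressor sketch above, something like
the following should do it (the slice name, offset and sizes are the ones
quoted in this thread; the 4 MB skip is my assumption, so verify it):

/*
 * read_dva.c -- rough sketch: read one physical block at a DVA offset
 * and lzjb-decompress it. Build 64-bit or with -D_FILE_OFFSET_BITS=64
 * so off_t can hold the offset.
 */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define VDEV_DATA_START 0x400000ULL     /* 2 x 256K labels + 3.5M boot block */

extern int lzjb_decompress(const unsigned char *, unsigned char *,
    size_t, size_t);

int
main(void)
{
        const char *dev = "/dev/rdsk/c0d1s4";   /* slice from the zdb -R line */
        off_t dva_off = 0xe0c98e00ULL;          /* DVA[0] offset from zdb */
        size_t psize = 0x200;                   /* 200P: physical size */
        size_t lsize = 0x400;                   /* 400L: logical size */
        unsigned char pbuf[0x200], lbuf[0x400];

        int fd = open(dev, O_RDONLY);           /* read-only, cannot harm the pool */
        if (fd == -1) {
                perror(dev);
                return (1);
        }
        if (pread(fd, pbuf, psize, dva_off + VDEV_DATA_START) != (ssize_t)psize) {
                perror("pread");
                return (1);
        }
        (void) close(fd);

        if (lzjb_decompress(pbuf, lbuf, psize, lsize) != 0) {
                fprintf(stderr, "lzjb stream looks corrupt\n");
                return (1);
        }
        (void) fwrite(lbuf, 1, lsize, stdout);  /* pipe through od -x to inspect */
        return (0);
}

If the output doesn't look like an objset, the 4 MB assumption and the byte
order of the block pointer fields are the first things I'd re-check.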
Thanks Max, I have done a few tests with what you suggest and I have listed the
output below. I wait a few minutes before deciding it's failed, and there is
never any console output about anything failing, and nothing in any log files
I've looked in: /var/adm/messages or /var/log/syslog. Maybe i
Hello Rustam,
Saturday, May 3, 2008, 9:16:41 AM, you wrote:
R> I don't think that this is a hardware issue, although I can't exclude it. I'll
try to explain why.
R> 1. I've replaced all memory modules, which are the most likely cause of such a
problem.
R> 2. There are many different applications run
Well, I had some more ideas and ran some more tests:
1. cp -r testdir ~/z1
This copied the testdir directory from the zfs pool into my home directory on
the IDE boot drive, so not part of the zfs pool, and this worked.
2. cp -r ~/z1 .
This copied the files back from my home directory on the ID
The plot thickens. I replaced 'cp' with 'rsync' and it worked -- I ran it a few
times and it hasn't hung so far.
So on the face of it, it appears that 'cp' is doing something that causes my
system to hang if the files are read from and written to the same pool, but
simply replacing 'cp' with 'r
Hi Simon,
Simon Breden wrote:
> The plot thickens. I replaced 'cp' with 'rsync' and it worked -- I ran it a
> few times and it hasn't hung so far.
>
> So on the face of it, it appears that 'cp' is doing something that causes my
> system to hang if the files are read from and written to the same p
no amount of playing with cp will fix a drive FW issue. but as
pointed out the slower rsync will tax the FW less. Looking at
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b s/w h/w trn tot   us sy wt id
sd0       0.0    0.0    0.0    0.0 35.0  0.0    0.0 100   0   0   0   0   0
it se
oops, I lied... according to my own post
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-January/045141.html
"wait" commands are queued in Solaris and "active" > 1 are in
the drive's NCQ.
So the question is: where are the drive's commands getting
dropped across 3 disks at the same time?
and in all cases
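For what it's worth, the s/w, h/w and trn columns in that iostat output come
from the per-target device_error kstats, so the same counters can be watched
directly while the hang happens. A small libkstat sketch (compile with
-lkstat); the module/name strings ("sderr", "sd0,err") and the statistic names
are what kstat -p -c device_error shows here, so adjust them for your
instances:

/*
 * sderr.c -- read the soft/hard/transport error counters for one disk
 * through libkstat. Instance 0 / "sd0,err" is an assumption; list the
 * real names with: kstat -p -c device_error
 */
#include <kstat.h>
#include <stdio.h>

int
main(void)
{
        kstat_ctl_t *kc = kstat_open();
        kstat_t *ksp;
        const char *names[] = { "Soft Errors", "Hard Errors", "Transport Errors" };
        int i;

        if (kc == NULL) {
                perror("kstat_open");
                return (1);
        }
        ksp = kstat_lookup(kc, "sderr", 0, "sd0,err");
        if (ksp == NULL || kstat_read(kc, ksp, NULL) == -1) {
                fprintf(stderr, "no sd0 error kstat found\n");
                return (1);
        }
        for (i = 0; i < 3; i++) {
                kstat_named_t *kn = kstat_data_lookup(ksp, (char *)names[i]);
                if (kn != NULL)
                        printf("%s: %u\n", names[i], kn->value.ui32);
        }
        (void) kstat_close(kc);
        return (0);
}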
Thanks Max, and the fact that rsync stresses the system less would help explain
why rsync works, and cp hangs. The directory was around 11GB in size.
If Sun engineers are interested in this problem then I'm happy to run whatever
commands they give me -- after all, I have a pure goldmine here for
Hi Simon,
Simon Breden wrote:
> Thanks Max, and the fact that rsync stresses the system less would help
> explain why rsync works, and cp hangs. The directory was around 11GB in size.
>
> If Sun engineers are interested in this problem then I'm happy to run
> whatever commands they give me -- aft
@Max: I've not tried this with other file systems, nor with multiple dd
commands at the same time on raw disks. I suppose it's not possible to do this
with my disks, which are currently part of the RAIDZ1 vdev in the pool, without
corrupting data? I'll assume not.
@Rob: OK, let's assume that
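On the raw-disk question above: simply reading the member disks can't corrupt
the pool, because nothing is written; only dd'ing onto them would. So a
parallel read test is safe even while the disks sit in the RAIDZ1 vdev. A
sketch of the read-only equivalent of dd if=/dev/rdsk/... of=/dev/null
bs=1024k, to run once per disk at the same time (the device path and sizes are
placeholders):

/*
 * rawread.c -- read-only sequential load test on one raw disk.
 * O_RDONLY means it cannot modify anything on the disk.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define BLKSZ   (1024 * 1024)   /* 1 MB per read, like bs=1024k */
#define NBLKS   1024            /* stop after 1 GB */

int
main(int argc, char **argv)
{
        const char *dev = (argc > 1) ? argv[1] : "/dev/rdsk/c1t0d0s0";
        char *buf = malloc(BLKSZ);
        int fd = open(dev, O_RDONLY);
        long i;

        if (fd == -1 || buf == NULL) {
                perror(dev);
                return (1);
        }
        for (i = 0; i < NBLKS; i++) {
                ssize_t n = read(fd, buf, BLKSZ);
                if (n <= 0) {                   /* error or end of device */
                        if (n < 0)
                                perror("read");
                        break;
                }
        }
        printf("%s: read %ld MB\n", dev, i);
        (void) close(fd);
        free(buf);
        return (0);
}

If several of these running in parallel don't hang the box, that points away
from plain read throughput as the trigger.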
I have similar, but not exactly the same drives:
format> inq
Vendor: ATA
Product: WDC WD7500AYYS-0
Revision: 4G30
Same firmware revision. I have no problems with drive performance,
although I use them under UFS and for backing stores for iscsi disks.
FYI, I had random lockups and crashes on
Wow, thanks Dave. Looks like you've had this hell too :)
So, that makes me happy that the disks and pool are probably OK, but it does
seem an issue with the NVidia MCP 55 chipset, or at least perhaps the nv_sata
driver. From reading the bug list below, it seems the problem might be a more
gener
Hello,
I'm using snv_81 x86 as a file server and occasional CPU server at
home. It consists of one system disk with normal UFS/swap and one pool
of six disks in raidz1 configuration.
Every now and again the raidz file systems will lock up hard. Any
access to them will block in IO-wait. Trying to
Oh, you're right! Well, that will simplify things! All we have to do
is convince a few bits of code to ignore ub_txg == 0. I'll try a
couple of things and get back to you in a few hours...
Jeff
On Fri, May 02, 2008 at 03:31:52AM -0700, Benjamin Brumaire wrote:
> Hi,
>
> while diving deeply in
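To make the ub_txg == 0 remark concrete: an uberblock whose txg is zero is
normally treated as "never written" and skipped when the best uberblock is
chosen, so the recovery experiment amounts to relaxing that test. The sketch
below is only an illustration of the kind of check involved, not the actual
OpenSolaris code (the magic value is the real UBERBLOCK_MAGIC, the rest is
simplified):

/*
 * Illustration only -- not the ON source. Shows the sort of validity
 * test that has to stop rejecting ub_txg == 0 for the recovery to work.
 */
#include <stdint.h>

#define UBERBLOCK_MAGIC 0x00bab10cULL           /* "oo-ba-bloc!" */

typedef struct ub_sketch {
        uint64_t ub_magic;
        uint64_t ub_version;
        uint64_t ub_txg;
        uint64_t ub_guid_sum;
        uint64_t ub_timestamp;
        /* ub_rootbp omitted here */
} ub_sketch_t;

static int
uberblock_usable(const ub_sketch_t *ub, int allow_txg_zero)
{
        if (ub->ub_magic != UBERBLOCK_MAGIC)
                return (0);             /* not an uberblock (or byteswapped) */
        if (ub->ub_txg == 0 && !allow_txg_zero)
                return (0);             /* stock behaviour: txg 0 never wins */
        return (1);
}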
Looking at the txg numbers, it's clear that the labels on devices that
are unavailable now may be stale:
Krzys wrote:
> When I do zdb on emcpower3a which seems to be ok from zpool perspective I get
> the following output:
> bash-3.00# zdb -lv /dev/dsk/emcpower3a
> -
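For anyone following along: zdb -lv prints the four 256 KB labels a vdev
carries, two at the front of the device and two at the end, and comparing the
txg recorded in each is how a stale label shows up. A small sketch of where
those labels sit (assuming the usual 256 KB label size; I believe zdb rounds
the device size down to a 256 KB boundary first):

/*
 * label_offsets.c -- where the four vdev labels that zdb -lv walks
 * should sit, given the device size in bytes (already rounded down
 * to a 256 KB multiple).
 */
#include <stdint.h>
#include <stdio.h>

#define VDEV_LABEL_SIZE (256ULL * 1024)
#define VDEV_LABELS     4

static uint64_t
label_offset(uint64_t psize, int l)
{
        /* labels 0 and 1 at the front, labels 2 and 3 at the back */
        return (l < VDEV_LABELS / 2 ?
            (uint64_t)l * VDEV_LABEL_SIZE :
            psize - (uint64_t)(VDEV_LABELS - l) * VDEV_LABEL_SIZE);
}

int
main(void)
{
        uint64_t psize = 73ULL * 1024 * 1024 * 1024;    /* example: 73 GB LUN */
        int l;

        for (l = 0; l < VDEV_LABELS; l++)
                printf("label %d at offset 0x%llx\n", l,
                    (unsigned long long)label_offset(psize, l));
        return (0);
}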