Lukas Hejtmanek wrote:
On Fri, Feb 15, 2008 at 03:42:58PM +0100, Jan Engelhardt wrote:
Also consider
- DMA (e.g. only UDMA2 selected)
- aging disk
It's not the case: hdparm reports UDMA5 in use (assuming that
is reliable with libata). The disk is 3 months old and the
kernel does not report any errors. And
I have seen RH3.0 crash on 32GB systems because it had too
much memory tied up in write cache. It required update 2
(this was a while ago) and a change of a parameter in /proc
to prevent the crash; the cause was an overaggressive
write-caching change RH implemented in the kernel that resulted
in th
KDE and GNOME operate well above the MTU; they don't know anything
about MTU and neither does NFS. If those hang it up, you have
a network configuration problem and should probably fix it,
as a number of other things will show the problem as well.
Routers almost always have hard-coded MTU limits, and they
/dev/cdrom is a link to the proper device, if that link is not
on the initrd /dev/cdrom won't work.
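How the link resolves can be checked with readlink; the sketch below uses a temp directory as a stand-in for /dev so it runs without root, and sr0 is just a placeholder device name:

```shell
# /dev/cdrom normally points at the real device node, e.g. /dev/sr0.
# Demonstrated in a temp dir so no root or real devices are needed:
d=$(mktemp -d)
touch "$d/sr0"           # stand-in for the real device node
ln -s sr0 "$d/cdrom"     # the symlink udev/the distro would create
readlink "$d/cdrom"      # prints: sr0
rm -rf "$d"
```

If that symlink is missing from the initrd's /dev, anything opening /dev/cdrom fails even though the underlying device is fine.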
I previously had some statically linked linuxrc C code (I don't
have the code anymore; it was a work-for-hire) that scanned
the various locations where the CD could be (/dev/hd[abcd...])
and looked
For high-end stuff, ServerWorks is supposed to have some
AMD stuff soon (this is a rumor I heard).
From what Allen said, the implication to me is that something
in the current NVIDIA SATA NCQ chipset is *not* fully under
NVIDIA's control, i.e. they got some piece of technology from
someone else and c
With the Seagate SATAs I worked with before, I had to
actually remove them from the blacklist; this was a couple
of months ago with the native SATA Seagate disks.
With the drive in the blacklist the drive worked right
under light load, but under a dd read from the boot
Seagate the entire ma
I have had a fair amount of trouble with the limited support
for ECC reporting on higher-end dual- and quad-CPU servers, as
the reporting is pretty weak.
On the Opterons I can tell which CPU gets errors, but mcelog
does not isolate things down to the DIMM level properly; is
there a way to do this
If this does not happen immediately at boot up (before the machine
finishes all init stuff), it is generally a hardware problem. In
my experience with new machines, 75% of the time it will be the CPU
itself, and the other 25% a serious memory error.
The machines I have dealt with are dual
I saw it mentioned before that the kernel only allows a certain
percentage of total memory to be dirty; I thought the number was
around 40%. I have seen machines with large amounts of RAM
hit the 40% and then put the writing application into disk wait
until certain amounts of things are writt
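A rough sketch of where that throttling kicks in, assuming the vm.dirty_ratio mechanism; the 40% figure matches the old default (newer kernels default lower), and the 32GB machine size is illustrative:

```shell
# Estimate the dirty-page threshold at which writers start being
# throttled, assuming the vm.dirty_ratio mechanism described above.
total_ram=$((32 * 1024 * 1024 * 1024))     # 32 GiB machine (example)
dirty_ratio=40                             # percent, assumed old default
echo $(( total_ram * dirty_ratio / 100 ))  # bytes of dirty cache allowed
```

On such a machine roughly 12.8 GiB of dirty pages can accumulate before the writing application is put into disk wait.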
Attila Nagy wrote:
On 2007.07.30. 18:19, Alan Cox wrote:
MCE:
[153103.918654] HARDWARE ERROR
[153103.918655] CPU 1: Machine Check Exception: 5 Bank 0: b2401400
[153104.066037] RIP !INEXACT! 10: {mwait_idle+0x46/0x60}
[153104.145699] TSC 1167e915e93ce
[153104.18355
Jeff V. Merkey wrote:
I have seen something similar with the ixgb. Make certain there are
**NO** other adapters sharing the PCI bus with the
ixgb. There are some serious hardware compatibility issues with the
ixgb mixed with other cards on the same PCI-X bus,
and I have seen power loa
Auke Kok wrote:
[added netdev to CC]
Roger Heflin wrote:
I have a machine (actually 2 machines) that reboots upon loading
the Intel 10GbE driver (ixgb). I am
using a RHAS4.4-based distribution with vanilla 2.6.19.2
(the RHAS 4.4.03 kernel also reboots with the ixgb load);
I don
Dave Kleikamp wrote:
On Tue, 2007-05-29 at 12:16 -0500, Roger Heflin wrote:
Dave,
There appears to be another, similar lockup.
The MTBF has risen from 1-2 hours without that patch to >100 hours,
so I am fairly sure the patch did correct the original lockup, or
at the v
Dave Kleikamp wrote:
Sorry if I'm missing anyone on the reply, but my mail feed is messed up
and I'm replying from the gmane archive.
On Tue, 15 May 2007 09:08:25 -0500, Roger Heflin wrote:
Hello,
Running 2.6.21.1 (FC6 Dist), with a RHEL client (client
appears to not be having is
Running bonnie over nfs on a RHEL4.4 client against a 2.6.21.1 server
got me this crash after about 4 hours of running on the server:
This was running lvm -> ext3 -> nfs nfsclient (RHEL4.4).
Ideas?
Roger
May 15 21:10:31 vault1 kernel: [ cut here ]
J. Bruce Fields wrote:
On Wed, May 16, 2007 at 08:55:19AM -0500, Roger Heflin wrote:
Running bonnie over nfs on a RHEL4.4 client against a 2.6.21.1 server
got me this crash after about 4 hours of running on the server:
This was running lvm -> ext3 -> nfs nfsclient (RHEL4.4).
Yipes
Dave Kleikamp wrote:
I don't have an answer to an ext3 deadlock, but this looks like a jfs
problem that was recently fixed in linux-2.6.22-rc1. I had intended to
send it to the stable kernel after it was picked up in mainline, but
hadn't gotten to it yet.
The patch is here:
http://git.kernel.
I am getting this bug under heavy IO/NFS on 2.6.21.1.
BUG: sleeping function called from invalid context at mm/mempool.c:210
So far I have gotten the error, I believe, 3 times.
Roger
I ran some tests on a 4-socket Intel box with files in tmpfs (Gold
6152, I think) and with the files interleaved 4-way (I think) got the
same speeds you got on your Intels (roughly) with defaults.
I also tested on my 6-core Ryzen 4500U and got almost the same
speed (slightly slower) as on your large
On Fri, Apr 2, 2021 at 4:13 AM Paul Menzel wrote:
>
> Dear Linux folks,
>
>
> > Are these values a good benchmark for comparing processors?
>
> After two years, yes they are. I created 16 10 GB files in `/dev/shm`,
> set them up as loop devices, and created a RAID6. For resync speed it
> makes di
Nowhere in the mount command did you tell it that this was an
NFS version 3 only mount; the mount name itself means nothing to mount,
so it tried NFS version 4 first and then NFS version 3.
Note this in the man page for nfs:
nfsvers=n The NFS protocol version number used to contact the
server's NF
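To pin the mount to v3 the option has to be given explicitly, for example in /etc/fstab (server name and paths here are placeholders):

```
# nfsvers=3 forces the v3 protocol; without it, mount tries v4 first
server:/export  /mnt/data  nfs  nfsvers=3,rw  0 0
```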
I had a 9230... on older kernels it worked "ok" so long as you did not
issue any SMART commands; I removed it and went to something that works.
Marvell appears to be hit and miss, with some cards/chips working
right and some not...
Issue enough SMART commands and the entire board (all 4 ports) locked
up, under pretty much any SMART commands... I was running something
that got all of the SMART stats once per hour per disk, and this made
it crash about once per week. If you were pushing the disks hard it
appeared to make it even more likely to crash under the SMART commands;
removing the commands took things up to 2-
Gene,
How big is the file you have? Here is what I have, and this is
from several different kernels.
wc gadget_multi.txt
150 830 5482 gadget_multi.tx
cksum gadget_multi.txt
3973522114 5482 gadget_multi.txt
ls -l gadget_multi.txt
-rw-rw-r-- 1 root root 5482 Dec 20 09:51 gadget_multi.txt
Rescue-boot it, change the /boot mount line in /etc/fstab to add
noauto (like noauto,defaults... or whatever else you already have) and
change the last column to 0 to disable fsck on it.
It should boot then, and you have the machine fully up where you can do
better debugging.
i.e. mount /boot may giv
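The edit described above would look something like this (device name is a placeholder):

```
# /etc/fstab before:
#   /dev/sda1  /boot  ext3  defaults         1 2
# after -- noauto skips mounting at boot, the final 0 disables fsck:
/dev/sda1  /boot  ext3  noauto,defaults  0 0
```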
Doesn't NFS have an intr flag to allow kill -9 to work? Whenever I
have had that set it has appeared to work after about 30 seconds or
so...without that kill -9 does not work when the nfs server is
missing.
On Fri, Aug 1, 2014 at 8:21 PM, Jeff Layton wrote:
> On Fri, 1 Aug 2014 07:50:53 +1000
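For reference, the flag goes in the mount options like any other; a hypothetical /etc/fstab entry (server and path are placeholders):

```
# intr lets signals (e.g. kill -9) interrupt a hung NFS access
server:/export  /mnt/nfs  nfs  intr,rw  0 0
```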
If you are robocopying small files you will hit other limits.
The best I have seen with small files is around 30 files/second, and
that involves multiple copies going on. Remember, with small files
there are several reads and writes that need to be done to complete a
create of a small file and each
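A back-of-envelope sketch of why the ceiling sits around 30 files/second; the per-file operation count and latency below are assumptions, not measurements:

```shell
# Each small-file create needs several synchronous round trips
# (create, metadata updates, data write, close). Assumed numbers:
ops_per_file=5       # assumed synchronous operations per create
op_latency_ms=7      # assumed ~7 ms per round trip
# files/sec = 1000 / (ops * latency_ms)
echo $(( 1000 / (ops_per_file * op_latency_ms) ))   # ~28 files/second
```

Running multiple copies in parallel overlaps those round trips, which is how the aggregate climbs toward the 30 files/second figure.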
What kind of underlying disk is it?
On Fri, Nov 14, 2014 at 7:36 AM, Jagan Teki wrote:
> On 14 November 2014 18:50, Roger Heflin wrote:
>> If you are robocopying small files you will hit other limits.
>>
>> Best I have seen with small files is around 30 files/second,
I know from some data I have seen that between Intel Sandy Bridge
and Intel Ivy Bridge the same motherboards stopped delivering INTx
reliably (an interrupt lost under load around once every 30 days; the
driver and firmware have no method to recover from the failure). We
had to transition to using MSI on some PCI
> -Original Message-
> From: Roger Heflin [mailto:rogerhef...@gmail.com]
> Sent: Wednesday, March 04, 2015 10:31 AM
> To: McKay, Luke
> Cc: Andrey Utkin; Andrey Utkin; Stephen Hemminger;
> k
No idea if this would still work, but back before label/UUID and LVM
in initrd I had a statically linked "C" program that ran inside initrd;
it searched the likely places a boot device could be (mounted them and
looked for a file to confirm it was the right device, then unmounted
it), and when it fo
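A hypothetical reconstruction of that scan-and-probe logic (the original C is lost, so every name here is illustrative). A real initrd would mount the /dev/hd* nodes; this sketch probes directories passed as arguments so it runs without root:

```shell
# Try each candidate location, look for a known marker file, and
# report the first one that matches -- the same idea the old linuxrc
# code used, minus the actual mount(2)/umount(2) calls.
find_boot_dir() {
    for candidate in "$@"; do
        # real code: mount "$candidate" on a scratch mountpoint first
        if [ -e "$candidate/etc/fstab" ]; then   # marker file check
            echo "$candidate"                    # found the boot tree
            return 0
        fi
        # real code: umount the wrong device before moving on
    done
    return 1
}
```

Usage would be something like `find_boot_dir /mnt/a /mnt/b`; in the real initrd the candidate list was the /dev/hd[abcd...] nodes mentioned above.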