Why are there only 7-8 loop devices available?
What options do I have if I want to mount, say, 100 isos?
Thanks.
Ahh, very nice, thanks!
On Sat, 5 Feb 2005, Randy.Dunlap wrote:
Justin Piszcz wrote:
Why are there only 7-8 loop devices available?
What options do I have if I want to mount, say, 100 isos?
Documentation/kernel-parameters.txt says:
max_loop= [LOOP] Maximum number of loopback devices that can
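(A hedged sketch of how that parameter would be used; the count and paths below are only examples:)
# at boot:  max_loop=128              (kernel command line, loop built in)
# or, if loop is built as a module:
#   modprobe loop max_loop=128
# then attach/mount each image, e.g.:
#   mount -o loop,ro /isos/disc001.iso /mnt/disc001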
When writing to or from the drive via NFS, after 1GB or 2GB, it "feels"
like the system slows to a crawl, the mouse gets very slow, almost like
one is burning a CD at 52X under PIO mode. I originally had this disk in
my main system with an Intel ICH5 chipset (ABIT IC7-G mobo) and a Pentium
4 2
Yes, only with NFS.
On Mon, 17 Jan 2005, Norbert van Nobelen wrote:
Only with NFS? I have a raid array of the same disks and the system just
sometimes seems to hang completely (for a second or less) and then goes on
again at normal speed (110MB/s).
I am running a SuSE 9.1 stock kernel (2.6.5-7.1
On Sun, 18 Nov 2007, Christian Kujau wrote:
On Fri, 16 Nov 2007, Chris Wedgwood wrote:
Oops, I meant it for NFSD... and I'm somewhat serious. I'm not
saying it's a good long term solution, but a potentially safer
short-term workaround.
I've opened http://bugzilla.kernel.org/show_bug.cgi?i
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS.
I just pulled down the Debian Etch 4.0 DVD ISOs, one for x86 and one for
x86_64. When I ran md5sum -c MD5SUMS, I saw ~280-320MB/s; when I ran the
second one I saw upwards of what I should be seeing, 500-520MB/s.
NOTE: The
On Fri, 20 Jul 2007, Lennart Sorensen wrote:
On Fri, Jul 20, 2007 at 09:58:50AM -0400, Justin Piszcz wrote:
I have a multi-core Q6600 CPU on a 10-disk Raptor RAID 5 running XFS.
I just pulled down the Debian Etch 4.0 DVD ISO's, one for x86 and one for
x86_64, when I ran md5sum -c MD5SU
The kernel is 2.6.9-42.ELsmp on RHEL4 x86_64.
servername$ dd if=/dev/zero of=4.8tb_file bs=1M count=480
File size limit exceeded
servername$ (stopped at 2TB)
The 64-bit build should obviously not need the option, because it is 64-bit; is this a
bug in the kernel?
For 32bit:
.config - Linux Kernel
Erm, unless this is an EXT3 limitation--oops..
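(It is: with 4 KiB blocks ext3 caps a single file at roughly 2 TiB, which matches where dd stopped. A quick way to check the block size; the device name is only an example:)
# tune2fs -l /dev/sdb1 | grep -i 'block size'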
On Wed, 25 Jul 2007, Justin Piszcz wrote:
The kernel is 2.6.9-42.ELsmp on RHEL4 x86_64.
servername$ dd if=/dev/zero of=4.8tb_file bs=1M count=480
File size limit exceeded
servername$ (stopped at 2TB)
The 64bit should not need the option
s the problem, nevermind!
On Wed, 25 Jul 2007, Justin Piszcz wrote:
Erm, unless this is an EXT3 limitation--oops..
On Wed, 25 Jul 2007, Justin Piszcz wrote:
The kernel is 2.6.9-42.ELsmp on RHEL4 x86_64.
servername$ dd if=/dev/zero of=4.8tb_file bs=1M count=480
File size limit exceeded
serve
When I have an iPod attached via USB to an ABIT IC7-G board before it
boots up, and then let X start etc., the mouse (PS/2) does not function, but the
keyboard works OK.
GPM does not work either.
When I attach the iPod after the machine has booted up, everything is OK,
until the next reboot (with th
On Mon, 3 Sep 2007, Xavier Bestel wrote:
Hi,
I have a server running with RAID5 disks, under debian/stable, kernel
2.6.18-5-686. Yesterday the RAID resync'd for no apparent reason,
without even mdadm sending a mail to warn about that:
This is normal; you are probably running Debian(?) or a
On Mon, 3 Sep 2007, Bill Davidsen wrote:
Bauke Jan Douma wrote:
$> uname -a
Linux skyscraper 2.6.22.5 #7 SMP PREEMPT Sun Sep 2 12:12:25 CEST 2007 i686
GNU/Linux
$> cat /proc/cpuinfo | grep bogomips
bogomips: 4813.46
bogomips: 4810.91
bogomips: 4810.91
bogomips: 10583.94
Th
On Wed, 5 Sep 2007, Satyam Sharma wrote:
On Fri, 31 Aug 2007, Justin Piszcz wrote:
When I have an iPod attached via USB to an ABIT IC7-G board before it boots up
and let X start etc, the mouse (PS/2) does not function, but the keyboard
works OK.
GPM does not work either.
When I attach
Is there any way to get/see what parameters were passed to a kernel module?
Running modinfo -p will show the defaults, but for example for st,
the SCSI tape driver, is there a way to see what it is currently using? I
know dmesg shows this when you load it initially (but if say dmesg
has been
On Wed, 5 Sep 2007, Andreas Schwab wrote:
Justin Piszcz <[EMAIL PROTECTED]> writes:
Is there any way to get/see what parameters were passed to a kernel module?
/sys/module/<module>/parameters
Andreas.
--
Andreas Schwab, SuSE Labs, [EMAIL PROTECTED]
SuSE Linux Products GmbH, Maxfeldst
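(A quick hedged illustration of that answer; st is just the module from the question, and the parameter files vary per module:)
$ ls /sys/module/st/parameters
$ grep . /sys/module/st/parameters/*      # prints parameter:current_value pairs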
CONFIG:
Software RAID 5 (400GB x 6): Default mkfs parameters for all filesystems.
Kernel was 2.6.21 or 2.6.22; I did these a while ago.
Hardware was SATA with PCI-e only, nothing on the PCI bus.
ZFS was userspace+fuse of course.
Reiser was V3.
EXT4 was created using the recommended options on its p
On Mon, 30 Jul 2007, Miklos Szeredi wrote:
Extrapolating these %cpu numbers makes ZFS the fastest.
Are you sure these numbers are correct?
Note that %cpu numbers for fuse filesystems are inherently skewed,
because the CPU usage of the filesystem process itself is not taken
into account.
So
http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=2276643&body=REVIEWS#tabs
REVIEW BY: tesseract Reviewed Jun 26, 2007
Due to a bug in the hardware of this card it doesn't work with Linux. The
card does work at 100Mb/s but when put to gigabit speeds it gets TX
U
On Fri, 19 Oct 2007, Justin Piszcz wrote:
http://www.tigerdirect.com/applications/searchtools/item-details.asp?EdpNo=2276643&body=REVIEWS#tabs
REVIEW BY: tesseract Reviewed Jun 26, 2007
Due to a bug in the hardware of this card it doesn't work with Linux. The
card does works a
As a regular user, I cannot see the sensors on the A-bit board, but I can
see the CPU temperature; how come I can see one but not the other?
Kernel: $ uname -a
Linux mybox 2.6.23.1 #4 SMP PREEMPT Sun Oct 14 15:20:53 EDT 2007 i686 GNU/Linux
Distribution: Debian Lenny
$ sensors
abituguru3-isa-00e
It turns out the one I did not test was actually the best:
Used: 7z -mx=9 a linux-2.6.16.17.tar.7z linux-2.6.16.17.tar
$ du -sk * | sort -n
32392 linux-2.6.16.17.tar.7z
33520 linux-2.6.16.17.tar.lzma
33760 linux-2.6.16.17.tar.rar
38064 linux-2.6.16.17.tar.rz
39472 linux-2.6.16.17.tar.szip
39520
On Sun, 14 Oct 2007, Jan Engelhardt wrote:
On Oct 14 2007 15:34, Justin Piszcz wrote:
It turns out the one I did not test, was actually the best:
Used: 7z -mx=9 a linux-2.6.16.17.tar.7z linux-2.6.16.17.tar
$ du -sk * | sort -n
32392 linux-2.6.16.17.tar.7z
33520 linux-2.6.16.17.tar.lzma
On Sun, 14 Oct 2007, Jan Engelhardt wrote:
On Oct 14 2007 15:53, Justin Piszcz wrote:
What's with all these odd formats, and where is .zip? :)
Somehow... have you tried lrzip?
$ apt-cache search lrzip
$
I tried most of the main ones in the standard testing distribution within
D
On Sun, 14 Oct 2007, Al Viro wrote:
On Sun, Oct 14, 2007 at 09:46:15PM +0200, Jan Engelhardt wrote:
(Obviously we shall pick .7z)
The hell it is. Take a look at the memory footprint of those suckers...
For compression with -mx=9 it does use 500-900 MiB of RAM, that is true.
For decompressio
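(If memory footprint is the concern, the LZMA dictionary size can be capped; the switch below is from memory, so treat it as a sketch:)
$ 7z a -mx=9 -md=16m linux-2.6.16.17.tar.7z linux-2.6.16.17.tar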
On Sun, 14 Oct 2007, Jan Engelhardt wrote:
On Oct 14 2007 16:58, Justin Piszcz wrote:
compress:
  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+ COMMAND
10544 war   20   0  700m  681m 1632 S  141 20.7  1:41.46 7z
Just how you can utilize a CPU to 141% remains a mystery
On Sun, 14 Oct 2007, Mark M. Hoffman wrote:
Hi Justin:
(added some CCs)
* Justin Piszcz <[EMAIL PROTECTED]> [2007-10-14 15:30:18 -0400]:
As a regular user, I cannot see the sensors on the A-bit board, but I can
see the CPU temperature, how come I can see one but not the other?
On Mon, 15 Oct 2007, Hans de Goede wrote:
Mark M. Hoffman wrote:
Hi Justin:
(added some CCs)
* Justin Piszcz <[EMAIL PROTECTED]> [2007-10-14 15:30:18 -0400]:
As a regular user, I cannot see the sensors on the A-bit board, but I can
see the CPU temperature, how come I can see one b
On Mon, 15 Oct 2007, Rudolf Marek wrote:
Hi,
Most likely you have distro and custom libsensors installed on the system.
(and different PATH for root)
Please check how many libsensors libraries are installed.
Thanks,
Rudolf
I only had one, in /app (lm-sensors-2.10.2) -- which has bee
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
root       273  0.0  0.0      0     0 ?     D    Oct21  14:40 [pdflush]
root       274  0.0  0.0      0     0 ?     D    Oct21  13:00 [pdflush]
After several days/weeks, this is the second time this
count: 60
high: 62
batch: 15
vm stats threshold: 42
all_unreclaimable: 0
prev_priority: 12
start_pfn: 1048576
On Sun, 4 Nov 2007, Justin Piszcz wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
r
On Sun, 4 Nov 2007, BERTRAND Joël wrote:
Justin Piszcz wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
root       273  0.0  0.0      0     0 ?     D    Oct21  14:40 [pdflush]
root       274  0.0  0.0      0     0 ?     D    Oct21  13
On Sun, 4 Nov 2007, Michael Tokarev wrote:
Justin Piszcz wrote:
On Sun, 4 Nov 2007, Michael Tokarev wrote:
[]
The next time you come across something like that, do a SysRq-T dump and
post that. It shows a stack trace of all processes - and in particular,
where exactly each task is stuck
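(For reference, a sketch of how to capture such a dump from a shell, assuming CONFIG_MAGIC_SYSRQ is enabled:)
# echo 1 > /proc/sys/kernel/sysrq     # allow all SysRq functions
# echo t > /proc/sysrq-trigger        # dump task states/stack traces to the kernel log
# dmesg                               # or read it off a serial/net console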
On Mon, 5 Nov 2007, Neil Brown wrote:
On Sunday November 4, [EMAIL PROTECTED] wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
root       273  0.0  0.0      0     0 ?     D    Oct21  14:40 [pdflush]
root       274  0.0  0.0      0     0 ?
On Mon, 5 Nov 2007, Dan Williams wrote:
On 11/4/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
On Mon, 5 Nov 2007, Neil Brown wrote:
On Sunday November 4, [EMAIL PROTECTED] wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
root
On Tue, 6 Nov 2007, BERTRAND Joël wrote:
Done. Here is the obtained output:
[ 1265.899068] check 4: state 0x6 toread read
write f800fdd4e360 written
[ 1265.941328] check 3: state 0x1 toread read
wri
On Tue, 6 Nov 2007, BERTRAND Joël wrote:
Justin Piszcz wrote:
On Tue, 6 Nov 2007, BERTRAND Joël wrote:
Done. Here is the obtained output:
[ 1265.899068] check 4: state 0x6 toread read
write f800fdd4e360 written
[ 1265.941328] check
On Thu, 8 Nov 2007, BERTRAND Joël wrote:
BERTRAND Joël wrote:
Chuck Ebbert wrote:
On 11/05/2007 03:36 AM, BERTRAND Joël wrote:
Neil Brown wrote:
On Sunday November 4, [EMAIL PROTECTED] wrote:
# ps auxww | grep D
USER       PID %CPU %MEM    VSZ   RSS TTY   STAT START  TIME COMMAND
ro
On Thu, 8 Nov 2007, Carlos Carvalho wrote:
Jeff Lessem ([EMAIL PROTECTED]) wrote on 6 November 2007 22:00:
>Dan Williams wrote:
> > The following patch, also attached, cleans up cases where the code looks
> > at sh->ops.pending when it should be looking at the consistent
> > stack-based snapsh
My .config is attached; please let me know if any other information is
needed, and please CC (lkml) as I am not on the list, thanks!
Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to
the RAID5 running XFS.
Any idea what happened here?
[473795.214705] BUG: unable to hand
On Sat, 20 Jan 2007, Justin Piszcz wrote:
> My .config is attached, please let me know if any other information is
> needed and please CC (lkml) as I am not on the list, thanks!
>
> Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying files over Samba to
> the RAID5 runnin
Perhaps it's time to go back to a stable kernel (2.6.17.13)?
Anyway, when I run a cp 18gb_file 18gb_file.2 on a dual raptor sw raid1
partition, the OOM killer goes into effect and kills almost all my
processes.
Completely 100% reproducible.
Does 2.6.19.2 have some sort of memory allocation bug as well?
On Sat, 20 Jan 2007, Avuton Olrich wrote:
> On 1/20/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> > Perhaps its time to back to a stable (2.6.17.13 kernel)?
> >
> > Anyway, when I run a cp 18gb_file 18gb_file.2 on a dual raptor sw raid1
> > partition, the OOM
On Sat, 20 Jan 2007, Justin Piszcz wrote:
>
>
> On Sat, 20 Jan 2007, Avuton Olrich wrote:
>
> > On 1/20/07, Justin Piszcz <[EMAIL PROTECTED]> wrote:
> > > Perhaps its time to back to a stable (2.6.17.13 kernel)?
> > >
> > > Anyway, when I
2.6.19.2:
# hddtemp /dev/sda
/dev/sda: WDC WD740GD-00FLC0: 27C
2.6.20-rc5:
# hddtemp /dev/sda
/dev/sda: ATA WDC WD740GD-00FL: S.M.A.R.T. not available
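(One hedged way to narrow this down is to query SMART directly and see whether the kernel or hddtemp lost access; with libata drivers of that era smartctl typically needed an explicit device type:)
# smartctl -d ata -a /dev/sda | grep -i temperature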
On Sun, 21 Jan 2007, [EMAIL PROTECTED] wrote:
> From: Justin Piszcz <[EMAIL PROTECTED]>
> Date: Sat, Jan 20, 2007 at 04:03:42PM -0500
> >
> >
> > My swap is on, 2GB ram and 2GB of swap on this machine. I can't go back
> > to 2.6.17.13 as it
On Sun, 21 Jan 2007, [EMAIL PROTECTED] wrote:
> From: Justin Piszcz <[EMAIL PROTECTED]>
> Date: Sun, Jan 21, 2007 at 11:48:07AM -0500
> >
> > What about all of the changes with NAT? I see that it operates on
> > level-3/network wise, I enabled that and ba
On Sun, 21 Jan 2007, Justin Piszcz wrote:
>
>
> >
> > Good luck,
> > Jurriaan
> > --
> > > What does ELF stand for (in respect to Linux?)
> > ELF is the first rock group that Ronnie James Dio performed with back in
> > the early 1970
Why does copying an 18GB file on a 74GB raptor raid1 cause the kernel to invoke
the OOM killer and kill all of my processes?
Doing this on a single disk with 2.6.19.2 is OK, no issues. However, this
happens every time!
Anything to try? Any other output needed? Can someone shed some light on
this situ
On Sun, 21 Jan 2007, Greg KH wrote:
> On Sun, Jan 21, 2007 at 12:29:51PM -0500, Justin Piszcz wrote:
> >
> >
> > On Sun, 21 Jan 2007, Justin Piszcz wrote:
> >
> > >
> > >
> > > >
> > > > Good luck,
> > >
On Mon, 22 Jan 2007, kyle wrote:
> Hi,
>
> Yesterday I tried to increase the value of stripe_cache_size to see if I can
> get better performance or not. I increased the value from 2048 to something
> like 16384. After I did that, the raid5 froze. Any process reading/writing to
> it got stuck in D st
On Mon, 22 Jan 2007, kyle wrote:
> >
> > On Mon, 22 Jan 2007, kyle wrote:
> >
> > > Hi,
> > >
> > > Yesterday I tried to increase the value of stripe_cache_size to see if I
> > > can
> > > get better performance or not. I increased the value from 2048 to something
> > > like 16384. After I did tha
On Mon, 22 Jan 2007, Steve Cousins wrote:
>
>
> Justin Piszcz wrote:
> > Yes, I noticed this bug too; if you change it too many times or change it at
> > the 'wrong' time, it hangs up when you echo a number > /proc/stripe_cache_size.
> >
> > Basic
On Mon, 22 Jan 2007, Pavel Machek wrote:
> On Sun 2007-01-21 14:27:34, Justin Piszcz wrote:
> > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke
> > the OOM killer and kill all of my processes?
> >
> > Doing this on a single disk 2.6.19.2
> What's that? Software raid or hardware raid? If the latter, which
> driver?
Software RAID (md)
On Mon, 22 Jan 2007, Andrew Morton wrote:
> > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz <[EMAIL PROTECTED]>
> > wrote:
> > Why does copying an 18GB
On Tue, 23 Jan 2007, Neil Brown wrote:
> On Monday January 22, [EMAIL PROTECTED] wrote:
> > Justin Piszcz wrote:
> > > My .config is attached, please let me know if any other information is
> > > needed and please CC (lkml) as I am not on the list, thanks!
> >
On Tue, 23 Jan 2007, Michael Tokarev wrote:
> Justin Piszcz wrote:
> []
> > Is this a bug that can or will be fixed or should I disable pre-emption on
> > critical and/or server machines?
>
> Disabling pre-emption on critical and/or server machines seems to be a good
On Tue, 23 Jan 2007, Michael Tokarev wrote:
> Justin Piszcz wrote:
> >
> > On Tue, 23 Jan 2007, Michael Tokarev wrote:
> >
> >> Disabling pre-emption on critical and/or server machines seems to be a good
> >> idea in the first place. IMHO anyway.. ;)
On Mon, 22 Jan 2007, Chuck Ebbert wrote:
> Justin Piszcz wrote:
> > My .config is attached, please let me know if any other information is
> > needed and please CC (lkml) as I am not on the list, thanks!
> >
> > Running Kernel 2.6.19.2 on a MD RAID5 volume. Copying
The ICH8 chipset also uses some memory; in any event, mem=256
causes the machine to lock up before it can even get to the boot/init
processes. The two LEDs on the keyboard were blinking, caps lock and
scroll lock, and I saw no console at all!
Justin.
On Mon, 22 Jan 2007, Justin Piszcz wrote:
>
On Mon, 22 Jan 2007, Andrew Morton wrote:
> > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz <[EMAIL PROTECTED]>
> > wrote:
> > Why does copying an 18GB on a 74GB raptor raid1 cause the kernel to invoke
> > the OOM killer and kill all of my processes?
>
And FYI yes I used mem=256M just as you said, not mem=256.
Justin.
On Wed, 24 Jan 2007, Justin Piszcz wrote:
> > Is it highmem-related? Can you try it with mem=256M?
>
> Bad idea, the kernel crashes & burns when I use mem=256, I had to boot
> 2.6.20-rc5-6 single to get b
On Thu, 25 Jan 2007, Pavel Machek wrote:
> Hi!
>
> > > Is it highmem-related? Can you try it with mem=256M?
> >
> > Bad idea, the kernel crashes & burns when I use mem=256, I had to boot
> > 2.6.20-rc5-6 single to get back into my machine, very nasty. Remember I
> > use an onboard graphics
On Thu, 25 Jan 2007, Nick Piggin wrote:
> Justin Piszcz wrote:
> >
> > On Mon, 22 Jan 2007, Andrew Morton wrote:
>
> > >After the oom-killing, please see if you can free up the ZONE_NORMAL memory
> > >via a few `echo 3 > /proc/sys/vm/drop_caches'
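(A small sketch of how one might check whether lowmem actually comes back; LowFree only appears on 32-bit highmem kernels, so treat it as illustrative:)
# grep -i lowfree /proc/meminfo
# echo 3 > /proc/sys/vm/drop_caches
# grep -i lowfree /proc/meminfo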
On Wed, 24 Jan 2007, Bill Cizek wrote:
> Justin Piszcz wrote:
> > On Mon, 22 Jan 2007, Andrew Morton wrote:
> >
> > > > On Sun, 21 Jan 2007 14:27:34 -0500 (EST) Justin Piszcz
> > > > <[EMAIL PROTECTED]> wrote:
> > > > Why does c
I am using a dual port Intel NIC on an A-Bit IC7-G; any reason why I get
these?
[4298634.444000] eth2: TX underrun, threshold adjusted.
[4299146.645000] eth2: TX underrun, threshold adjusted.
[4299146.645000] eth2: TX underrun, threshold adjusted.
[4299147.437000] eth2: TX underrun, threshold adj
On Mon, 25 Dec 2006, Robert Hancock wrote:
> Justin Piszcz wrote:
> > I am using a dual port Intel NIC on an A-Bit IC7-G; any reason why I get
> > these?
> >
> > [4298634.444000] eth2: TX underrun, threshold adjusted.
> > [4299146.645000] eth2
I had the same problem you did when I put 3 identical controllers
together. To get around that problem I used 2 TX133s and 1 TX100x2. I
believe this is the root cause of your problems.
Justin.
On Tue, 26 Dec 2006, Erik Ohrnberger wrote:
> First off, Merry Christmas, Seasons Greetings and Hap
Each of these is averaged over three runs with 6 SATA disks in a SW RAID
5 configuration:
(dd if=/dev/zero of=file_1 bs=1M count=2000)
128k_stripe: 69.2MB/s
256k_stripe: 105.3MB/s
512k_stripe: 142.0MB/s
1024k_stripe: 144.6MB/s
2048k_stripe: 208.3MB/s
4096k_stripe: 223.6MB/s
8192k_stripe: 226.0
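(For anyone reproducing the comparison, a hedged sketch of how each array would be created; device names and the filesystem choice are assumptions, and mdadm's --chunk is given in KiB:)
# mdadm --create /dev/md3 --level=5 --raid-devices=6 --chunk=2048 /dev/sd[b-g]1
# mkfs.xfs /dev/md3
# dd if=/dev/zero of=file_1 bs=1M count=2000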
definitely have a good idea on what's happening :-)
Cheers,
Jason
On Fri, 2007-02-23 at 06:41 -0500, Justin Piszcz wrote:
Each of these are averaged over three runs with 6 SATA disks in a SW RAID
5 configuration:
(dd if=/dev/zero of=file_1 bs=1M count=2000)
128k_stripe: 69.2MB/s
256k_st
Hi,
Anyone from Intel who reads LKML: could you provide an update as to what
is happening with support for your HECI Controller/QPS chip, which is used
on 965 (and possibly other?) chipsets?
I bought an Intel board, thinking everything would be supported, because
it is an Intel board. The
On Mon, 5 Feb 2007, Arjan van de Ven wrote:
On Sun, 2007-02-04 at 10:57 -0500, Justin Piszcz wrote:
Hi,
Anyone from Intel that reads LKML, could you provide an update as to what
is happening with support for your HECI Controller/QPS chip, which is used
on 965 (and possibly other?) chipsets
It appears to have been dead for a while now; did I miss something?
One of my scripts uses this functionality, which now appears
dead/disabled/offline.
Can anyone provide an update?
Thanks,
Justin.
On Fri, 9 Feb 2007, Jan Engelhardt wrote:
It appears to have been dead for awhile now, did I miss something?
One of my scripts uses this functionality, which now appears
dead/disabled/offline.
Can anyone provide an update?
kernel.org front page sayz:
Aug 21, 2003: Please don't use finger
6-2: new low speed USB
device using uhci_hcd and address 3
Feb 16 09:26:22 p34 kernel: [1007261.527521] usb 6-2: configuration #1
chosen from 1 choice
Feb 16 09:26:22 p34 kernel: [1007261.981395] hiddev96: USB HID v1.00
Device [UPS] on usb-:00:1d.1-2
On Sat, 3 Feb 2007, Justin Piszcz
Quick question,
I am using the latest ixgb driver (1.0.126) as stated both on Intel's
website and here: http://sourceforge.net/forum/forum.php?forum_id=645203
After a number of hours, sometimes days, I will get this error on the
console and the box locks up:
ixgb: eth2: ixgb_clean_tx_irq: D
On Thu, 25 Jan 2007, Mark Hahn wrote:
> > Something is seriously wrong with that OOM killer.
>
> do you know you don't have to operate in OOM-slaughter mode?
>
> "vm.overcommit_memory = 2" in your /etc/sysctl.conf puts you into a mode where
> the kernel tracks your "committed" memory needs, an
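(A minimal sketch of that configuration; the ratio value is only an example, not a recommendation:)
# /etc/sysctl.conf
#   vm.overcommit_memory = 2
#   vm.overcommit_ratio = 50      # commit limit = swap + this percentage of RAM
# apply with: sysctl -p    check with: grep -i commit /proc/meminfo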
On Fri, 26 Jan 2007, Andrew Morton wrote:
> On Wed, 24 Jan 2007 18:37:15 -0500 (EST)
> Justin Piszcz <[EMAIL PROTECTED]> wrote:
>
> > > Without digging too deeply, I'd say you've hit the same bug Sami Farin and
> > > others
> > > h
Just re-ran the test 4-5 times, could not reproduce this one, but I'll
keep running this kernel w/patch for a while and see if it happens again.
On Fri, 26 Jan 2007, Andrew Morton wrote:
> On Wed, 24 Jan 2007 18:37:15 -0500 (EST)
> Justin Piszcz <[EMAIL PROTECTED]> wrote:
When I disconnect my UPS from the wall, I have to wait 15-30 seconds
before the USB driver 'polls' this information and tells me that the UPS is
on battery power (via knutclient or syslog via nut):
[EMAIL PROTECTED] POWER ALERT on Fri Jan 26 12:49:29 EST 2007
With a serial connection, I would ge
On Fri, 26 Jan 2007, Adrian Bunk wrote:
> On Sun, Jan 21, 2007 at 10:54:09AM -0500, Justin Piszcz wrote:
> > On Sun, 21 Jan 2007, [EMAIL PROTECTED] wrote:
> >
> > > From: Justin Piszcz <[EMAIL PROTECTED]>
> > > Date: Sat, Jan 20, 2007 at 04:03:42PM -0500
Under SCSI device support.
-> [*] Asynchronous SCSI scanning
Does this affect actual SCSI devices ONLY or SATA drives as well?
Justin.
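(A hedged aside: the scan behaviour is a scsi_mod module parameter, so it can be inspected and overridden, e.g.:)
$ cat /sys/module/scsi_mod/parameters/scan    # async, sync or none
# boot-time override if scan ordering matters: scsi_mod.scan=sync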
Not sure if this is normal or not; I was not doing anything out of the
ordinary, and after secs/3600 ~ 42 hrs this occurred:
$ uptime
19:06:17 up 2 days, 29 min, 10 users, load average: 0.13, 0.07, 0.06
$ uname -ra
Linux p34 2.6.20-rc7 #2 SMP Wed Jan 31 20:03:09 EST 2007 i686 GNU/Linux
One thi
I'm not sure what is causing this problem, but I was curious: is this on a
32-bit or 64-bit platform?
Justin.
On Tue, 12 Dec 2006, Haar János wrote:
> Hello, list,
>
> I am the "big red button men" with the one big 14TB xfs, if somebody can
> remember me. :-)
>
> Now i found something in the 2.6.
I have a question I could not quickly find on Google/mailing lists--
Say I have some sort of global filesystem or NFS which is 200TB.
Is there a limit either:
A) In the Linux kernel
or
B) In the NFS spec
That would limit the client as to what it could see via NFS or global
filesystem?
Or coul
Thanks for the info!
On Mon, 18 Dec 2006, Trond Myklebust wrote:
> On Mon, 2006-12-18 at 14:21 -0500, Justin Piszcz wrote:
> > I have a question I could not quickly find on Google/mailing lists--
> >
> > Say I have some sort of global filesystem or NFS which is 200TB.
>
Using 4 raptor 150s:
Without the tweaks, I get 111MB/s write and 87MB/s read.
With the tweaks, 195MB/s write and 211MB/s read.
Using kernel 2.6.19.1.
Without the tweaks and with the tweaks:
# Stripe tests:
echo 8192 > /sys/block/md3/md/stripe_cache_size
# DD TESTS [WRITE]
DEFAULT: (512K)
$ dd
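(For context, a hedged sketch of the kind of tweaks being compared here; the device name and values are only examples:)
# bigger md stripe cache (costs roughly entries x 4 KiB x member disks of RAM)
echo 8192 > /sys/block/md3/md/stripe_cache_size
# bigger read-ahead on the array device
blockdev --setra 16384 /dev/md3
# simple sequential write/read checks
dd if=/dev/zero of=bigfile bs=1M count=10240
dd if=bigfile of=/dev/null bs=1M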
On Fri, 12 Jan 2007, Michael Tokarev wrote:
> Justin Piszcz wrote:
> > Using 4 raptor 150s:
> >
> > Without the tweaks, I get 111MB/s write and 87MB/s read.
> > With the tweaks, 195MB/s write and 211MB/s read.
> >
> > Using kernel 2.6.19.1.
> >
chunk size)
On Fri, 12 Jan 2007, Justin Piszcz wrote:
>
>
> On Fri, 12 Jan 2007, Michael Tokarev wrote:
>
> > Justin Piszcz wrote:
> > > Using 4 raptor 150s:
> > >
> > > Without the tweaks, I get 111MB/s write and 87MB/s read.
> >
On Fri, 12 Jan 2007, Al Boldi wrote:
> Justin Piszcz wrote:
> > RAID 5 TWEAKED: 1:06.41 elapsed @ 60% CPU
> >
> > This should be 1:14, not 1:06 (that was with a similarly sized file, but not the
> > same one); the 1:14 is the same file as used with the other benchmarks, and t
md3 of=/dev/null bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB) copied, 398.069 seconds, 27.0 MB/s
Awful performance with your numbers/drop_caches settings.. !
What were your tests designed to show?
Justin.
On Fri, 12 Jan 2007, Justin Piszcz wrote:
>
>
On Sat, 13 Jan 2007, Al Boldi wrote:
> Justin Piszcz wrote:
> > Btw, max sectors did improve my performance a little bit but
> > stripe_cache+read_ahead were the main optimizations that made everything
> > go faster by about ~1.5x. I have individual bonnie++ benchma
On Sat, 13 Jan 2007, Al Boldi wrote:
> Justin Piszcz wrote:
> > On Sat, 13 Jan 2007, Al Boldi wrote:
> > > Justin Piszcz wrote:
> > > > Btw, max sectors did improve my performance a little bit but
> > > > stripe_cache+read_ahead were the main optimizati
On Thu, 21 Jun 2007, Pim Zandbergen wrote:
Jesse Barnes wrote:
What, are you going to report this to GigaByte?
No, but you should if you haven't already. I think GigaByte probably gets
its BIOS from another BIOS vendor (maybe Intel), so when that vendor
provides them with an update, the
On Thu, 21 Jun 2007, Mattias Wadenstein wrote:
On Thu, 21 Jun 2007, Neil Brown wrote:
I have that - apparently naive - idea that drives use strong checksums,
and will never return bad data, only good data or an error. If this
isn't right, then it would really help to understand what the caus