> SFF ATA controllers are peculiar in that...
>
> 1. they don't have a reliable IRQ pending bit.
>
> 2. they don't have a reliable IRQ mask bit.
>
> 3. some controllers tank the machine completely if the status or data
> register is accessed differently than the chip likes.
And 4. which is a killer for
James Chapman wrote:
> Mark Lord wrote:
>> One way to deal with it in an embedded device is to force the
>> application that's generating the I/O to self-throttle,
>> or to modify the device driver to self-throttle.
>
> Does disk access have to be so interrupt driven? Could disk interrupt
> handling
Fajun Chen wrote:
As a matter of fact, I'm using /dev/sg*. Due to the size of my test
application, I have not been able to compress it into a small and
publishable form. However, this issue can be easily reproduced on my
ARM XScale target using sg3_util code as follows:
1. Run printtime.c attache
Fajun Chen wrote:
..
I verified your program works on my system, and my application works as
well if changed accordingly. However, this change (indirect IO in sg
terms) may come at a performance cost for IO-intensive applications,
since it does NOT utilize the mmapped buffer managed by the sg driver. Please
Fajun Chen wrote:
On 11/17/07, Mark Lord <[EMAIL PROTECTED]> wrote:
..
What you probably intended to do instead was to use mmap to just allocate
some page-aligned RAM, not to actually mmap any on-disk data. Right?
Here's how that's done:
read_buffer = (U8 *)mmap(NULL, buf_sz, PROT_R
Fajun Chen wrote:
On 11/17/07, Mark Lord <[EMAIL PROTECTED]> wrote:
Fajun Chen wrote:
On 11/16/07, Mark Lord <[EMAIL PROTECTED]> wrote:
Fajun Chen wrote:
..
This problem also happens with R/W DMA ops. Below are simplified code snippets:
// Open one sg device for read
if ((sg_fd =
Hi All,
I use sg/libata and ata pass through for read/writes. Linux 2.6.18-rc2
and libata version 2.00 are loaded on an ARM XScale board. Under heavy
cpu load (e.g. when blocks per transfer/sector count is set to 1),
I've observed that the test application can suck cpu away for a long
time (more than