On Tue, 06 Nov 2001, Roland McGrath wrote:
> No, that doesn't quite make sense. The mflags argument to
Of course :)
> Try changing that kmalloc call to use __GFP_HIGH instead (that's what
> GFP_ATOMIC is defined to in oskit/linux/src/include/linux/mm.h).
Using __GFP_HIGH helped. So it seems d
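For reference, a minimal sketch of the kind of change being discussed (the
function and variable names here are illustrative, not the actual oskit
source):

    /* Illustrative only.  In oskit/linux/src/include/linux/mm.h,
       GFP_ATOMIC is #defined to __GFP_HIGH, so this allocation will
       not sleep, which is what the skbuf I/O path needs.  */
    #include <linux/mm.h>       /* GFP_* / __GFP_* flag bits */
    #include <linux/malloc.h>   /* kmalloc, in the 2.2-era headers */

    static void *
    alloc_skb_data (unsigned int size)
    {
      /* was: kmalloc (size, mflags) -- but mflags holds OSENV_* bits,
         which kmalloc does not understand.  */
      return kmalloc (size, __GFP_HIGH);
    }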
No, that doesn't quite make sense. The mflags argument to
oskit_skbufio_mem_alloc uses OSENV_* flag bits, not GFP_* flag bits.
kmalloc needs GFP_* flag bits. However, note the assert early in
oskit_skbufio_mem_alloc, so you know mflags is always just OSENV_NONBLOCKING.
So just calling it with G
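To make the OSENV_*/GFP_* distinction concrete, here is a hypothetical
helper (the name and header path are made up) mapping the one flag that
matters on this path onto the bits kmalloc expects:

    /* Hypothetical sketch, not actual oskit code.  Because of the
       assert mentioned above, mflags is always just OSENV_NONBLOCKING
       when oskit_skbufio_mem_alloc is called.  */
    #include <oskit/dev/dev.h>  /* OSENV_NONBLOCKING (header path assumed) */
    #include <linux/mm.h>       /* GFP_KERNEL, GFP_ATOMIC */

    static inline int
    osenv_to_gfp (unsigned mflags)
    {
      /* Non-blocking request -> GFP_ATOMIC (== __GFP_HIGH in the
         oskit glue); anything else may sleep, so GFP_KERNEL.  */
      return (mflags & OSENV_NONBLOCKING) ? GFP_ATOMIC : GFP_KERNEL;
    }

    /* ...and then inside oskit_skbufio_mem_alloc:
         data = kmalloc (size, osenv_to_gfp (mflags));  */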
On Mon, 05 Nov 2001, Roland McGrath wrote:
> You said "fixing kmalloc was easy", but I didn't see you post any change to
> the oskit code. I'm not at all sure that kmalloc should change.
Ok, here is what I fixed. mflags is passed to oskit_skbufio_mem_alloc but
it is not passed to kmalloc. If it
> The problem is that the driver exists in OSKit and doesn't work on the
> test kernels. Two others have emailed the list about the same
> problem, and there has been no response.
Well, you gets what you pays for. Has anyone tried to debug it? Have you
checked the state of the oskit driver vs
On Tue, Nov 06, 2001 at 04:18:31PM -0500, Roland McGrath wrote:
> Multiboot loading is the least of your troubles in implementing a
> user-space driver. If your only motivation is to support this device,
> then porting a Linux driver to oskit is by far your most efficient choice.
The problem is
Multiboot loading is the least of your troubles in implementing a
user-space driver. If your only motivation is to support this device,
then porting a Linux driver to oskit is by far your most efficient choice.
Under the new sexy way of doing things, we load ext2fs as a multiboot
module, right?
Would I be able to load a userspace hard-drive driver that way? OSKit
doesn't support my Adaptec controller, and it doesn't look like it will
any time soon (no response on support forums when people have asked
before).
Diego Roversi <[EMAIL PROTECTED]> writes:
> I think that storeio is the only possibility to put a cache mechanism in
> user space. But I see some drawbacks:
> - memory used for caching can be paged out
If that happens, you're doing it wrong, I think. You want to use a
special pager for the cache.
On Tue, Nov 06, 2001 at 09:21:19PM +0100, Diego Roversi wrote:
> I see that storeio has an option "-e" that hides the device. I suppose that
> using this option causes ext2fs to go through the storeio translator. So in
> this case I can happily implement caching in storeio (even if we use more CPU).
On Tue, Nov 06, 2001 at 10:26:59AM +0100, Marcus Brinkmann wrote:
> Data blocks don't go to storeio. The included libstore communicates with
> storeio about the storage type and does the actual reading/writing etc
> itself (see file_get_storage_info, store_create, store_encode and
> store_decode
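As a rough illustration of that split (the prototypes below are quoted from
memory, so check <hurd/store.h> before relying on them), a client ends up
doing something like this, with the data blocks never passing through
storeio itself:

    /* Rough sketch, signatures from memory.  store_create asks the
       node (e.g. the storeio translator) for its layout -- that is
       the file_get_storage_info RPC -- and store_read then goes
       straight to the underlying storage.  */
    #include <hurd.h>
    #include <hurd/store.h>
    #include <error.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>

    int
    main (void)
    {
      struct store *store;
      void *buf = NULL;
      size_t len = 0;

      file_t node = file_name_lookup ("/dev/hd0s1", O_READ, 0);
      if (node == MACH_PORT_NULL)
        error (1, errno, "/dev/hd0s1");

      error_t err = store_create (node, STORE_READONLY, 0, &store);
      if (err)
        error (1, err, "store_create");

      /* Read one block directly from the underlying device.  */
      err = store_read (store, 0, store->block_size, &buf, &len);
      if (err)
        error (1, err, "store_read");

      printf ("read %zu bytes\n", len);
      store_free (store);
      return 0;
    }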
[EMAIL PROTECTED] (Niels Möller) writes:
> Farid Hajji <[EMAIL PROTECTED]> writes:
>
> > The problem right now is that there is no memory sharing between normal
> > clients and the filesystem translators. Here, data is simply copied across
> > a costly IPC path, thus wasting a lot of CPU cycles.
On Tue, Nov 06, 2001 at 01:08:38AM +0100, Farid Hajji wrote:
> The main reason for the slowness of the Hurd's file I/O is that data is
> actually _copied_ more often than necessary between the clients and the
> file servers/translators. Just look at the sources of glibc and the hurd,
> starting e
> People keep saying they think the Hurd is "deeply wedded" to Mach IPC. I
> think all those people are just not really looking at the fundamentals of
> the Hurd code. Yes, we use MiG RPC presentations and Mach port operations.
This is very likely in my case. I am still new to the code and am
Farid Hajji <[EMAIL PROTECTED]> writes:
> The problem right now is that there is no memory sharing between normal
> clients and the filesystem translators. Here, data is simply copied across
> a costly IPC path, thus wasting a lot of CPU cycles.
I thought Mach had some mechanism that allowed ipc