lists two CPUs, and running two processes in this case is the wrong
thing to do. (Hyperthreading ends up degrading our performance,
perhaps due to cache or bus contention).
Please CC replies.
Thanks,
Dan Maas
> Getting the user's "interactive" programs loaded back
> in afterwards is a separate, much more difficult problem
> IMHO, but no doubt still has a reasonable solution.
Possibly stupid suggestion... Maybe the interactive/GUI programs should wake
up once in a while and touch a couple of their pages
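Purely as an illustration of that idea (not part of the original message, and the region/region_len names are made up): a program could arm a periodic SIGALRM and read one byte from each page of the region it cares about, so the VM keeps seeing those pages as recently used.

#include <signal.h>
#include <unistd.h>

static volatile char *region;   /* hypothetical: memory we want to keep resident */
static size_t region_len;
static long page_size;

static void touch_pages(int sig)
{
    size_t off;
    volatile char c;

    (void)sig;
    for (off = 0; off < region_len; off += page_size)
        c = region[off];        /* read one byte per page */
    (void)c;
    alarm(5);                   /* wake up again in 5 seconds */
}

/* setup, somewhere early in the program:
 *     page_size = sysconf(_SC_PAGESIZE);
 *     signal(SIGALRM, touch_pages);
 *     alarm(5);
 */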
> Signals are a pretty dopey API anyway - so instead of trying to patch
> them up, why not think of something better for AIO?
I have to agree, in a way... At some point we need to swallow our pride,
admit that UNIX has a crappy event model, and implement something like Win32
GetMessage =)...
I'v
> Windows NT/2000 has flags that can be for each CreateFile operation
> ("open" in Unix terms), for instance
>
> FILE_ATTRIBUTE_TEMPORARY
> FILE_FLAG_WRITE_THROUGH
> FILE_FLAG_NO_BUFFERING
> FILE_FLAG_RANDOM_ACCESS
> FILE_FLAG_SEQUENTIAL_SCAN
>
There is a BSD-originated convention for t
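The reply above is cut off, so the following is only an illustration and not necessarily the convention the poster had in mind: rough Unix-side analogues for two of the quoted NT flags are O_SYNC for write-through opens and madvise() hints for sequential vs. random access on a mapped file.

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

int open_with_hints(const char *path, size_t len)
{
    int fd = open(path, O_RDWR | O_SYNC);   /* roughly FILE_FLAG_WRITE_THROUGH */
    void *p;

    if (fd < 0)
        return -1;
    p = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);
    if (p != MAP_FAILED)
        madvise(p, len, MADV_SEQUENTIAL);   /* roughly FILE_FLAG_SEQUENTIAL_SCAN */
    return fd;
}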
> Is there a user-space implemenation (library?) for
> coroutines that would work from C?
Here is another one:
http://oss.sgi.com/projects/state-threads/
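For a taste of what user-space coroutines look like without any library at all, here is a minimal sketch using the POSIX ucontext API (getcontext/makecontext/swapcontext); it is only an illustration alongside the libraries mentioned in this thread.

#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, coro_ctx;
static char coro_stack[64 * 1024];

static void coro_fn(void)
{
    printf("coroutine: first run\n");
    swapcontext(&coro_ctx, &main_ctx);   /* yield back to main */
    printf("coroutine: resumed\n");
}

int main(void)
{
    getcontext(&coro_ctx);
    coro_ctx.uc_stack.ss_sp = coro_stack;
    coro_ctx.uc_stack.ss_size = sizeof(coro_stack);
    coro_ctx.uc_link = &main_ctx;        /* where to go when coro_fn returns */
    makecontext(&coro_ctx, coro_fn, 0);

    printf("main: starting coroutine\n");
    swapcontext(&main_ctx, &coro_ctx);
    printf("main: coroutine yielded\n");
    swapcontext(&main_ctx, &coro_ctx);   /* resume it to completion */
    printf("main: done\n");
    return 0;
}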
Regards,
Dan
Just an update to my situation... I've implemented my idea of clearing the
associated PTEs when I need to free the DMA buffer, then re-filling them in
nopage(). This seems to work fine; if the user process tries anything fishy,
it gets a SIGBUS instead of accessing the old mapping.
I encountered
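A rough sketch of the nopage() idea described above, not the poster's actual code: dma_buf and dma_buf_size are hypothetical driver globals (the vmalloc()ed buffer and its size), and the exact nopage signature and helper names vary between kernel versions.

static struct page *drv_vma_nopage(struct vm_area_struct *vma,
                                   unsigned long address, int write_access)
{
    unsigned long offset = address - vma->vm_start
                           + (vma->vm_pgoff << PAGE_SHIFT);
    struct page *page;

    if (dma_buf == NULL || offset >= dma_buf_size)
        return NOPAGE_SIGBUS;           /* buffer gone: deliver SIGBUS */

    page = vmalloc_to_page(dma_buf + offset);
    get_page(page);                     /* take a reference for the new mapping */
    return page;
}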
>> Later, the program calls the ioctl() again to set a smaller
>> buffer size, or closes the file descriptor. At this point
>> I'd like to shrink the buffer or free it completely. But I
>> can't assume that the program will be nice and munmap() the
>> region for me
> Look at drivers/char/drm, for
> That seems a bit perverse. How will the poor userspace program know
> not to access the pages you have yanked away from it? If you plan
> to kill it, better to do that directly. If you plan to signal it
> that the mapping is gone, it can just call munmap() itself.
Thanks Pete. I will explain
I am writing a device driver that, like many others, exposes a shared memory
region to user-space via mmap(). The region is allocated with vmalloc(), the
pages are marked reserved, and the user-space mapping is implemented with
remap_page_range().
In my driver, I may have to free the underlying v
> Are there any negative effects of editing include/asm/param.h to change
> HZ from 100 to 1024? Or any other number? This has been suggested as a
> way to improve the responsiveness of the GUI on a Linux system.
I have also played around with HZ=1024 and wondered how it affects
interactivity. I
IIRC the problem with implementing asynchronous *disk* I/O in Linux today is
that the filesystem code assumes synchronous I/O operations that block the
whole process/thread. So implementing "real" asynch I/O (without the
overhead of creating a process context for each operation) would require
re-w
> I am wondering if it is permitted to use message queues between a user
> application and a device driver module...
> Can anyone help me?
It may be theoretically possible, but an easier and much more common
approach to this type of thing is for the driver to export an mmap()
interface. You could
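For what it's worth, the user-space side of such an mmap() interface typically looks roughly like this; the /dev/mydrv node and the 4 KB size are made up for the example.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/dev/mydrv", O_RDWR);          /* hypothetical device node */
    void *shared;

    if (fd < 0) {
        perror("open");
        return 1;
    }
    shared = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (shared == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    /* The driver and this process now share the same 4 KB of memory. */
    ((volatile unsigned int *)shared)[0] = 0xdeadbeef;
    return 0;
}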
> I need to be able to obtain and pin approximately 8 MB of
> contiguous physical memory in user space. How would I go
> about doing that under Linux if it is at all possible?
The only way to allocate that much *physically* contiguous memory is by
writing a driver that grabs it at boot-time (I t
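The reply is truncated; one classic trick of that era, shown here only as a sketch with made-up numbers, was to hold back the top of RAM with a mem= boot parameter and then map the hidden, physically contiguous region from the driver.

/* Sketch only: boot a 128 MB machine with mem=120M so the kernel never
 * manages the top 8 MB, then map that region here.  The addresses are
 * hypothetical. */
#include <linux/errno.h>
#include <linux/init.h>
#include <asm/io.h>

#define RESERVED_PHYS  (120UL << 20)   /* start of the held-back region */
#define RESERVED_SIZE  (8UL << 20)     /* 8 MB, physically contiguous */

static void *reserved_buf;

static int __init grab_reserved(void)
{
    reserved_buf = ioremap(RESERVED_PHYS, RESERVED_SIZE);
    return reserved_buf ? 0 : -ENOMEM;
}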
> It's not the select that waits. It's a delay in the tcp send
> path waiting for more data. Try disabling it:
>
> int f=1;
> setsockopt(s, SOL_TCP, TCP_NODELAY, &f, sizeof(f));
Bingo! With this fix, 2.2.18 performance becomes almost identical to 2.4.0
performance. I assume 2.4.0 disables Nagle
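For reference, a self-contained version of the quoted fix (using the portable IPPROTO_TCP level; SOL_TCP is the Linux-specific spelling of the same thing):

#include <netinet/in.h>
#include <netinet/tcp.h>
#include <stdio.h>
#include <sys/socket.h>

/* Disable Nagle on a connected TCP socket s, so small writes go out
 * immediately instead of waiting to be coalesced. */
void disable_nagle(int s)
{
    int flag = 1;

    if (setsockopt(s, IPPROTO_TCP, TCP_NODELAY, &flag, sizeof(flag)) < 0)
        perror("setsockopt(TCP_NODELAY)");
}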
What kernel have you been using? I have reproduced your problem on a
standard 2.2.18 kernel (elapsed time ~10sec). However, using a 2.4.0 kernel
with HZ=1000, I see a 100x improvement (elapsed time ~0.1 sec; note that
increasing HZ alone should only give a 10x improvement). Perhaps the
scheduler w
> 08048000-08b5c000 r-xp 03:05 1130923    /tmp/newmagma/magma.exe.dyn
> 08b5c000-08cc9000 rw-p 00b13000 03:05 1130923    /tmp/newmagma/magma.exe.dyn
> 08cc9000-0bd0 rwxp 00:00 0
> Now, subsequent to each memory allocation, only the second number in the
> third line changes. It be
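The explanation above is cut off, but as an aside (not from the original reply): the changing number is the end of the heap segment, i.e. the program break, which moves as malloc() calls brk(). A toy demonstration:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    int i;

    printf("program break before: %p\n", sbrk(0));
    for (i = 0; i < 1024; i++)
        malloc(1024);               /* small allocations extend the brk heap */
    printf("program break after:  %p\n", sbrk(0));
    return 0;
}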
> > Being able to shut down by hitting the power switch is a little luxury
> > for which I've been willing to invest more than a year of my life to
> > attain. Clueless newbies don't know why it should be any other way, and
> > it's essential for embedded devices.
Just some food for thought - hi
> The pipe bandwidth is intimately related to pipe latency. Linux pipes
> are fairly small (only 4kB worth of data buffer), so they need good
> latency for good performance.
...
> The pipe bandwidth could be fairly easily improved by just doubling the
> buffer size (or by using VM tricks), but it
> Shouldn't there also be a way to add non-filedescriptor based events
> into this, such as "child exited" or "signal caught" or shm things?
Waiting on pthreads condition variables, POSIX message queues, and
semaphores (as well as fd's) at the same time would *rock*...
Unifying all these "waitab
> I have a question about the time-slice of linux, how do I know it, or how
> can I test it?
First look for the (platform-specific) definition of HZ in
include/asm/param.h. This is how many timer interrupts you get per second (e.g.
on i386 it's 100). Then look at include/linux/sched.h for the defini
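The definition referred to above is cut off. Purely as an illustration (not from the original reply), one crude way to "test it" from user space is to run a busy loop against a competing CPU hog (e.g. "yes > /dev/null") and look at the largest gap between consecutive clock samples; that gap roughly corresponds to how long the other process held the CPU.

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval prev, now;
    double gap, max_gap = 0.0;
    long i;

    gettimeofday(&prev, NULL);
    for (i = 0; i < 50000000L; i++) {
        gettimeofday(&now, NULL);
        gap = (now.tv_sec - prev.tv_sec) + (now.tv_usec - prev.tv_usec) / 1e6;
        if (gap > max_gap)
            max_gap = gap;
        prev = now;
    }
    printf("largest gap between samples: %.1f ms\n", max_gap * 1000.0);
    return 0;
}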
The memory map of a user process on x86 looks like this:

            KERNEL (always present here)
0xC0000000  ----------------------------------
0xBFFFFFFF  STACK
            ----------------------------------
            MAPPED FILES (incl. shared libs)
0x40000000  ----------------------------------
            HEAP (brk()/malloc())
            EXECUTABLE CODE
0x08048000  ----------------------------------
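To see this layout on a live process (an addition for illustration, not part of the original mail), just dump the process's own /proc/self/maps:

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/self/maps", "r");
    char line[256];

    if (!f)
        return 1;
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);
    fclose(f);
    return 0;
}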
> All portability issues aside, if one is writing an application in
> Linux that one would be tempted to make multithreaded for
> whatever reason, what would be the better Linux way of doing
> things?
Let's go back to basics. Look inside your computer. See what's there:
1) one (or more) CPUs
2)