> I do not think if we do a ring buffer that events should be obtainable
> via a syscall at all. Rather, I think this system call should be
> purely "sleep until ring is not empty".
Mmm, yeah, of course. That's much simpler. I'm looking forward to
Evgeniy's next patch set.
> The ring buffer s
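That "sleep until the ring is not empty" model could look roughly like the
sketch below; kevent_wait(), handle_event() and the ring layout are
illustrative guesses, not names taken from the posted patches:

	#define RING_SIZE 4096			/* illustrative size */

	struct ring {
		unsigned int head;		/* consumer index, advanced by userspace */
		unsigned int tail;		/* producer index, advanced by the kernel */
		struct ukevent event[RING_SIZE];
	};

	static void event_loop(int kevent_fd, struct ring *ring)
	{
		for (;;) {
			while (ring->head == ring->tail)
				kevent_wait(kevent_fd);	/* block until the ring is not empty */
			do {
				handle_event(&ring->event[ring->head % RING_SIZE]);
				ring->head++;
			} while (ring->head != ring->tail);
		}
	}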
From: Ulrich Drepper <[EMAIL PROTECTED]>
Date: Tue, 01 Aug 2006 00:53:10 -0700
> This is the case to keep in mind here. I thought Zach and the others
> involved in the discussions in Ottawa said this has been shown to be a
> problem and that a ring buffer implementation with something other than
>
Herbert Xu wrote:
> The other thing to consider is that events don't come from the hardware.
> Events are written by the kernel. So if user-space is just reading
> the events that we've written, then there are no cache misses at all.
Not quite true. The ring buffer can be written to from another
proce
On Mon, Jul 31, 2006 at 03:00:28PM -0700, David Miller ([EMAIL PROTECTED])
wrote:
> From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> Date: Mon, 31 Jul 2006 23:41:43 +0400
>
> > Since kevents are never generated by the kernel, but only marked as ready,
> > the length of the main queue serves as flow control
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Fri, 28 Jul 2006 09:23:12 +0400
> I completely agree that the existing kevent interface is not the best, so
> I'm open to any suggestions.
> Should kevent creation/removing/modification be separated too?
I do not think so, object for these 3 operati
From: Zach Brown <[EMAIL PROTECTED]>
Date: Thu, 27 Jul 2006 12:18:42 -0700
[ I kept this thread around in my inbox because I wanted to give it
some deep thought, so sorry for replying to old bits... ]
> So as the kernel generates events in the ring it only produces an event
> if the ownership f
> Ok, let's do it in the following way:
> I present new version of kevent with new syscalls and fixed issues mentioned
> before, while people look at it we can end up with mapped buffer design.
> Is it ok?
Yeah, that sounds good. I'm looking forward to seeing the next set of
patches :).
- z
From: Brent Cook <[EMAIL PROTECTED]>
Date: Mon, 31 Jul 2006 17:16:48 -0500
> There has to be some thread that is responsible for reading
> events. Perhaps a reasonable thing for a blocked thread that cannot
> process events to do is to yield to one that can?
The reason one decentralizes event pro
On Monday 31 July 2006 17:00, David Miller wrote:
>
> So we'd have cases like this, assume we start with a full event
> queue:
>
> thread A				thread B
>
> dequeue event
> aha, new connection
> accept()
> register new kevent
>
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Mon, 31 Jul 2006 23:41:43 +0400
> Since kevents are never generated by the kernel, but only marked as ready,
> the length of the main queue serves as flow control, so we can create a
> mapped buffer which will have space equal to the main queue length
> m
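If the mapped area simply has one slot per entry in that queue, the kernel
can mark events ready in place and never has to worry about overflowing the
ring. A rough sketch of what mapping it could look like (the layout and the
idea of mmap()ing the kevent fd are assumptions here, not the posted code):

	#include <sys/mman.h>

	struct kevent_ring {
		unsigned int ring_size;		/* equals the main queue length */
		unsigned int ready_index;	/* advanced by the kernel */
		struct ukevent event[];		/* one slot per queued kevent */
	};

	struct kevent_ring *map_ring(int kevent_fd, unsigned int queue_len)
	{
		size_t len = sizeof(struct kevent_ring) +
			     queue_len * sizeof(struct ukevent);
		void *p = mmap(NULL, len, PROT_READ, MAP_SHARED, kevent_fd, 0);

		return p == MAP_FAILED ? NULL : p;	/* NULL: fall back to the syscall */
	}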
On Mon, Jul 31, 2006 at 02:33:22PM +0400, Evgeniy Polyakov ([EMAIL PROTECTED])
wrote:
> Ok, let's do it in the following way:
> I present new version of kevent with new syscalls and fixed issues mentioned
> before, while people look at it we can end up with mapped buffer design.
> Is it ok?
Since
On Mon, Jul 31, 2006 at 03:57:16AM -0700, David Miller wrote:
>
> So I would say for up to 4 or 5 events, system call overhead alone
> touches as many cache lines as the events themselves.
Absolutely.
The other thing to consider is that events don't come from the hardware.
Events are written by the ke
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Mon, 31 Jul 2006 14:50:37 +0400
> At syscall time kevent copies 40 bytes for each event + 12 bytes of header
> (number of events, timeout and command number). That's likely two cache
> lines if only one event is reported.
Do you know how many cachel
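Roughly, with 64-byte cache lines the copied data alone comes to 12 + 40*N
bytes, so a single event is 52 bytes -- one line if it happens to be aligned,
two otherwise -- and it grows by less than a line per additional event (a
back-of-the-envelope estimate, not a measurement):

	/* assumes 64-byte cache lines and a 40-byte struct ukevent */
	static unsigned int copy_cache_lines(unsigned int nr_events)
	{
		unsigned int bytes = 12 + 40 * nr_events;

		return (bytes + 63) / 64;	/* 1 event -> 1, 4 events -> 3 */
	}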
On Mon, Jul 31, 2006 at 08:35:55PM +1000, Herbert Xu ([EMAIL PROTECTED]) wrote:
> Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> >
> >> - if there is space, report it in the ring buffer. Yes, the buffer
> >> can be optional, then all events are reported by the system call.
> >
> > That requires
Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
>
>> - if there is space, report it in the ring buffer. Yes, the buffer
>> can be optional, then all events are reported by the system call.
>
> That requires a copy, which can outweigh the syscall overhead.
> Do we really want it to be done?
Please note
On Sat, Jul 29, 2006 at 09:18:47AM -0700, Ulrich Drepper ([EMAIL PROTECTED])
wrote:
> Evgeniy Polyakov wrote:
> > Btw, why do we want mapped ring of ready events?
> > If the user requested some events, he definitely wants to get them back when
> > they are ready, and not to check and then get them?
> >
Nicholas Miell wrote:
> [...] and was wondering
> if you were familiar with the Solaris port APIs* and,
I wasn't.
> if so, you could
> please comment on how your proposed event channels are different/better.
There indeed is not much difference. The differences are in the
details. The way thos
On Sat, 2006-07-29 at 19:48 +0400, Evgeniy Polyakov wrote:
> On Fri, Jul 28, 2006 at 09:32:42PM -0700, Nicholas Miell ([EMAIL PROTECTED])
> wrote:
> > Speaking of API design choices, I saw your OLS paper and was wondering
> > if you were familiar with the Solaris port APIs* and, if so, you could
>
On Saturday 29 July 2006 18:18, Ulrich Drepper wrote:
> Evgeniy Polyakov wrote:
> > Btw, why do we want mapped ring of ready events?
> > If the user requested some events, he definitely wants to get them back when
> > they are ready, and not to check and then get them?
> > Could you please explain more o
Evgeniy Polyakov wrote:
> Btw, why do we want mapped ring of ready events?
> If the user requested some events, he definitely wants to get them back when
> they are ready, and not to check and then get them?
> Could you please explain more on this issue?
It of course makes no sense to enter the kernel t
On Fri, Jul 28, 2006 at 09:32:42PM -0700, Nicholas Miell ([EMAIL PROTECTED])
wrote:
> Speaking of API design choices, I saw your OLS paper and was wondering
> if you were familiar with the Solaris port APIs* and, if so, you could
> please comment on how your proposed event channels are different/b
On Fri, Jul 28, 2006 at 08:38:02PM -0700, Ulrich Drepper ([EMAIL PROTECTED])
wrote:
> Zach Brown wrote:
> > Ulrich, would you be satisfied if we didn't
> > have the userspace mapped ring on the first pass and only had a
> > collection syscall?
>
> I'm not the one to make a call but why rush thing
On Fri, 2006-07-28 at 20:38 -0700, Ulrich Drepper wrote:
> Zach Brown wrote:
> > Ulrich, would you be satisfied if we didn't
> > have the userspace mapped ring on the first pass and only had a
> > collection syscall?
>
> I'm not the one to make a call but why rush things? Let's do it right
> from
Zach Brown wrote:
> Ulrich, would you be satisfied if we didn't
> have the userspace mapped ring on the first pass and only had a
> collection syscall?
I'm not the one to make a call but why rush things? Let's do it right
from the start. Later changes can only lead to problems with users of
the
>>> Clearly we should port httpd to kevents and take some measurements :)
oh, I see, I forgot the 't' in 'thttpd'. My mistake.
- z
Evgeniy Polyakov wrote:
> On Fri, Jul 28, 2006 at 12:01:28PM -0700, Zach Brown ([EMAIL PROTECTED])
> wrote:
>> Clearly we should port httpd to kevents and take some measurements :)
>
> One of my main kevent benchmarks (socket notifications for
> accept/receive) is a handmade HTTP server.
Yeah, so
On Fri, Jul 28, 2006 at 12:01:28PM -0700, Zach Brown ([EMAIL PROTECTED]) wrote:
> Clearly we should port httpd to kevents and take some measurements :)
One of my main kevent benchmarks (socket notifications for
accept/receive) is a handmade HTTP server.
I compared it with FreeBSD kqueue, epoll and k
> So, I'm going to create kevent_create/destroy/control and kevent_get_events()
> Or any better names?
Yeah, that sounds good.
> Some events are impossible to create in userspace (like timer
> notification, which requires timer start and check when timer
> completed).
We're not talking about *c
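The split being proposed above would presumably end up as something like the
prototypes below; the argument lists are guesses at this stage, only the names
kevent_create/destroy/control and kevent_get_events come from the thread:

	/* speculative prototypes for the separated interface */
	int kevent_create(unsigned int flags);			/* returns an event fd */
	int kevent_destroy(int event_fd);			/* or simply close(event_fd) */
	int kevent_control(int event_fd, int cmd,
			   struct ukevent *events, unsigned int num);
	int kevent_get_events(int event_fd, struct ukevent *events,
			      unsigned int min_events, unsigned int max_events,
			      struct timeval *timeout);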
> Things were like that at one point in time, but file descriptors turn out
> to introduce a huge gaping security hole with SUID programs. The problem
> is that any event context is closely tied to the address space of the
> thread issuing the syscalls, and file descriptors do not have this cl
On Fri, Jul 28, 2006 at 11:33:16AM -0700, Zach Brown ([EMAIL PROTECTED]) wrote:
>
> > I completely agree that the existing kevent interface is not the best, so
> > I'm open to any suggestions.
> > Should kevent creation/removing/modification be separated too?
>
> Yeah, I think so.
So, I'm going t
> I completely agree that the existing kevent interface is not the best, so
> I'm open to any suggestions.
> Should kevent creation/removing/modification be separated too?
Yeah, I think so.
>>> Hmm, it looks like I'm lost here...
>> Yeah, it seems my description might not have sunk in :). We're
On Thu, Jul 27, 2006 at 06:02:38PM -0400, Benjamin LaHaise ([EMAIL PROTECTED])
wrote:
> On Thu, Jul 27, 2006 at 02:44:50PM -0700, Zach Brown wrote:
> >
> > >> int kevent_getevents(int event_fd, struct ukevent *events,
> > >>                      int min_events, int max_events,
> > >>                      struct timeval *timeout);
On Thu, Jul 27, 2006 at 02:32:05PM -0700, Zach Brown ([EMAIL PROTECTED]) wrote:
>
> >> int kevent_getevents(int event_fd, struct ukevent *events,
> >>                      int min_events, int max_events,
> >>                      struct timeval *timeout);
> >
> > I used only one syscall for all operations, above
On Thu, Jul 27, 2006 at 02:44:50PM -0700, Zach Brown wrote:
>
> >> int kevent_getevents(int event_fd, struct ukevent *events,
> >>                      int min_events, int max_events,
> >>                      struct timeval *timeout);
> >
> > You've just reinvented io_getevents().
>
> Well, that's certainly one
>> int kevent_getevents(int event_fd, struct ukevent *events,
>>                      int min_events, int max_events,
>>                      struct timeval *timeout);
>
> You've just reinvented io_getevents().
Well, that's certainly one inflammatory way to put it. I would describe
it as suggesting that t
>> int kevent_getevents(int event_fd, struct ukevent *events,
>>                      int min_events, int max_events,
>>                      struct timeval *timeout);
>
> I used only one syscall for all operations, above syscall is
> essentially what kevent_user_wait() does.
Essentially, yes, but the diff
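For illustration, collecting events with a call like that would follow the
familiar io_getevents()-style loop (process_event() is a stand-in for the
application's handler):

	struct ukevent events[128];
	struct timeval tv = { .tv_sec = 1, .tv_usec = 0 };
	int i, n;

	n = kevent_getevents(event_fd, events, 1, 128, &tv);
	for (i = 0; i < n; i++)
		process_event(&events[i]);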
On Thu, Jul 27, 2006 at 12:18:42PM -0700, Zach Brown wrote:
> The easy part is fixing up the somewhat obfuscated collection call.
> Instead of coming in through a multiplexer that magically treats a void
> * as a struct kevent_user_control followed by N ukevents (as specified
> in the kevent_user_c
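In other words, today's multiplexer takes one opaque blob that starts with the
12-byte control header mentioned earlier (number of events, timeout, command
number) followed by the ukevents; roughly like this (field names are
approximate, reconstructed from the description, not the posted structure):

	struct kevent_user_control {
		unsigned int cmd;	/* multiplexed command number */
		unsigned int num;	/* how many ukevents follow */
		unsigned int timeout;
	};

	/* userspace passes: [ kevent_user_control ][ ukevent 0 ] ... [ ukevent num-1 ] */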
On Thu, Jul 27, 2006 at 12:18:42PM -0700, Zach Brown ([EMAIL PROTECTED]) wrote:
> > I have to say that the user API is not the nicest in the world. Yet,
> > at the same time, I cannot think of a better one :)
>
> I want to first focus on the event collection side of the API because I
> think we c
> I like this work a lot, as I've stated before.
Yeah, me too. I think we're very close to having a workable system
here. A few weeks of some restructuring and we all might be very happy.
> The data structures
> look like they will scale well and it takes care of all the limitations
> that net
On Mon, Jul 24, 2006 at 11:17:08PM -0700, David Miller ([EMAIL PROTECTED])
wrote:
> From: Evgeniy Polyakov <[EMAIL PROTECTED]>
> Date: Sun, 9 Jul 2006 17:24:46 +0400
>
> > This patch includes core kevent files:
> > - userspace controlling
> > - kernelspace interfaces
> > - initialisation
> >
From: Evgeniy Polyakov <[EMAIL PROTECTED]>
Date: Sun, 9 Jul 2006 17:24:46 +0400
> This patch includes core kevent files:
> - userspace controlling
> - kernelspace interfaces
> - initialisation
> - notification state machines
>
> It might also include parts from other subsystems (like network
On Sun, Jul 09, 2006 at 05:59:42PM +0300, Pekka Enberg ([EMAIL PROTECTED])
wrote:
> On 7/9/06, Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
> >+struct kevent *kevent_alloc(gfp_t mask)
> >+{
> >+	struct kevent *k;
> >+
> >+	if (kevent_cache)
> >+		k = kmem_cache_alloc(kevent_cache, mask);
On 7/9/06, Evgeniy Polyakov <[EMAIL PROTECTED]> wrote:
+struct kevent *kevent_alloc(gfp_t mask)
+{
+	struct kevent *k;
+
+	if (kevent_cache)
+		k = kmem_cache_alloc(kevent_cache, mask);
+	else
+		k = kzalloc(sizeof(struct kevent), mask);
+
+	return k;
+}
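Presumably the matching free path then has to mirror that fallback, since
objects from kmem_cache_alloc() and kzalloc() must be released differently
(a guessed counterpart, not part of the quoted patch):

	void kevent_free(struct kevent *k)
	{
		if (kevent_cache)
			kmem_cache_free(kevent_cache, k);
		else
			kfree(k);
	}

Note also that kmem_cache_alloc() does not zero the object the way kzalloc()
does, so the two allocation paths are not quite equivalent.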
This patch includes core kevent files:
- userspace controlling
- kernelspace interfaces
- initialisation
- notification state machines
It might also include parts from other subsystems (like network-related
syscalls), so it is possible that it will not compile without other
patches applied.
Si