> Instead of a linked list, how about using a dynamic array?
>
> https://en.wikipedia.org/wiki/Dynamic_array
>
> This would give you constant-time lookups, amortized constant time
> insertions and deletions, and better data locality and cache behavior.
>
> What do you think?
It's a good idea, in
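For illustration, the dynamic-array suggestion could be sketched like this. This is a minimal, hypothetical socket table (the names and layout are not lwIP's actual structures), showing O(1) lookup by index and amortized O(1) append via capacity doubling:

```c
#include <stdlib.h>

/* Hypothetical sketch, not lwIP's real API: a growable socket table.
 * Lookup by index is constant time; appending is amortized constant
 * time because capacity doubles whenever the table fills up. */
struct sock_table {
    struct socket_entry { int in_use; void *pcb; } *slots;
    size_t len;   /* slots in use */
    size_t cap;   /* slots allocated */
};

/* Append a slot, growing the array when full; returns its index or -1. */
static int sock_table_push(struct sock_table *t, void *pcb)
{
    if (t->len == t->cap) {
        size_t ncap = t->cap ? t->cap * 2 : 8;
        struct socket_entry *n = realloc(t->slots, ncap * sizeof *n);
        if (n == NULL)
            return -1;
        t->slots = n;
        t->cap = ncap;
    }
    t->slots[t->len].in_use = 1;
    t->slots[t->len].pcb = pcb;
    return (int) t->len++;
}
```

Lifting lwIP's fixed socket limit would amount to replacing its static array with something along these lines.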
Hi,
Joan Lledó writes:
> As I mentioned in my previous post about ioctl operations[1], LwIP
> establishes a maximum limit on the number of sockets, which is a big
> problem for a system like the Hurd. Now that I've finished all the
> tasks in my initial proposal, I thought it was a good idea to spend
> some time studying
Joan Lledó writes:
> They are using the 2-clause BSD license; is it OK to apply the patch
> under that license?
The 2-clause BSD license is OK for my lwip_poll patches.
I would prefer that you squash the bug fixes into the original
patch.
>
> I meant the S_io_select and S_io_select_timeout functions should
> be able to return without replying to the RPC, if the socket is
> not immediately ready. They would instead put the RPC in a
> queue, and if the socket later becomes ready, another thread
> would remove the RPC from the queue a
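The queued-reply idea described above could be modeled roughly as follows. All names here are hypothetical stand-ins (not the actual Hurd/MIG interfaces): instead of blocking one thread per select, the handler parks the pending request, and whichever thread notices the socket become ready dequeues it and sends the delayed reply:

```c
#include <stdlib.h>

/* One parked io_select request: which socket it waits on and where
 * the delayed reply should eventually be sent. */
struct pending_select {
    int sock;                 /* socket being waited on */
    unsigned reply_port;      /* destination for the delayed reply */
    struct pending_select *next;
};

static struct pending_select *pending_head;

/* Called from the select handler when the socket is not ready:
 * record the request and return without replying to the RPC. */
static int park_select(int sock, unsigned reply_port)
{
    struct pending_select *p = malloc(sizeof *p);
    if (p == NULL)
        return -1;
    p->sock = sock;
    p->reply_port = reply_port;
    p->next = pending_head;
    pending_head = p;
    return 0;
}

/* Called when a socket becomes ready: detach and return its queued
 * request so the caller can send the reply. */
static struct pending_select *unpark_select(int sock)
{
    struct pending_select **pp = &pending_head;
    while (*pp != NULL && (*pp)->sock != sock)
        pp = &(*pp)->next;
    if (*pp == NULL)
        return NULL;
    struct pending_select *p = *pp;
    *pp = p->next;
    return p;
}
```

A real translator would also need locking around the queue and timeout handling for S_io_select_timeout, which this sketch omits.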
Joan Lledó writes:
> I've got a question about your patch. Why did you say io_select
> without a thread per socket would require a different solution?
> I'm studying the patch because I'd like to send it to the lwip
> maintainers, and I can't find the problem.
I meant the S_io_select and S_io_select_timeout
Hello again Kalle,
I've got a question about your patch. Why did you say io_select
without a thread per socket would require a different solution?
I'm studying the patch because I'd like to send it to the lwip
maintainers, and I can't find the problem.
2017-08-15 9:50 GMT+02:00 Joan Lledó:
> Hello Kalle,
Hello Kalle,
I've applied and tested your patches, and they seem to be working fine. The
translator is now using lwip_poll() instead of lwip_select().
I've pushed the changes to github. Thank you very much.
Joan Lledó writes:
> About your patches, which lwip version are you using?
Commit b82396f361ec9ce45cf0e997eb7ee5cc5aac12ec from your
lwip-hurd repository. (I noticed the CRLFs because they ended up
in the diff.)
> Will you send them to the lwip patch tracker?
I don't intend to, because I have
> Too bad there is no lwip_poll function.
> Might the LwIP folks be amenable to adding one?
Yes, they talked about that in their mailing list[1].
> Your size calculations seem wrong.
> glibc defines fd_mask = __fd_mask = long int.
> FD_SETSIZE is the number of bits that fit in fd_set.
> But then
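The relationship the size review refers to can be checked directly. This sketch uses glibc's internal names (__fd_mask, __NFDBITS), which glibc's sys/select.h defines unconditionally; the point is that FD_SETSIZE is a count of bits, and the byte size of fd_set follows from it:

```c
#include <sys/select.h>

/* On glibc, fd_set is an array of __fd_mask (long int) words, each
 * holding __NFDBITS bits, and FD_SETSIZE is the total bit capacity.
 * So the bit capacity computed from sizeof must match FD_SETSIZE. */
static unsigned long fd_set_bit_capacity(void)
{
    return (sizeof(fd_set) / sizeof(__fd_mask)) * __NFDBITS;
}
```

Any size arithmetic in the translator should be done in these terms (words of fd_mask, bits per word) rather than assuming a particular byte count.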
The handling of LWIP_SOCKET_OFFSET looks inconsistent.
Suppose LWIP_SOCKET_OFFSET is 100 and LWIP_SOCKET_OPEN_COUNT is
defined. The first alloc_socket call sets newsock->count = 100
and returns 100. However, if get_socket(100) is then called,
it first subtracts LWIP_SOCKET_OFFSET from s, resulting in 0.
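The inconsistency can be boiled down to a small model. The names below are hypothetical stand-ins mirroring the description above, not lwIP's real sockets.c: if allocation stores the identity with the offset already applied, but lookup strips the offset before comparing, the two can never agree:

```c
/* Simplified model of the LWIP_SOCKET_OFFSET mismatch (hypothetical
 * names, not the real lwIP code). */
#define OFFSET 100

static int stored_count;

/* alloc_socket-style: stores OFFSET + i as the identity and returns
 * OFFSET + i as the descriptor. */
static int model_alloc_socket(int i)
{
    stored_count = OFFSET + i;
    return OFFSET + i;
}

/* get_socket-style: strips the offset first, then compares against
 * the stored identity -- so it compares i against OFFSET + i. */
static int model_get_socket(int s)
{
    s -= OFFSET;
    return s == stored_count;   /* 1 if found, 0 if the lookup fails */
}
```

With OFFSET = 100, model_get_socket(model_alloc_socket(0)) compares 0 against 100 and fails, which is the inconsistency described: either the offset should be applied in both places or in neither.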
Kalle Olavi Niemitalo writes:
> I wonder how hard it would be to implement those select ops in
> the translator without hogging a thread for each.
To clarify: if one thread in a program calls select on an fd_set
that lists ten sockets, then glibc sends ten io_select requests
before it waits for
Joan Lledó writes:
> Since Glibc calls the io_select() operation each time the user
> calls send() or recv(), in practice such sockets are just
> unusable.
Too bad there is no lwip_poll function.
Might the LwIP folks be amenable to adding one?
> [3]
> https://github.com/jlledom/lwip-hurd/commi
As I mentioned in my previous post about ioctl operations[1], LwIP
establishes a maximum limit on the number of sockets, which is a big
problem for a system like the Hurd. Now that I've finished all the
tasks in my initial proposal, I thought it was a good idea to spend
some time studying this issue and trying to find a