Daniel Lezcano wrote:
> Denis V. Lunev wrote:
>
>>Recently David Miller and Herbert Xu pointed out that struct net is becoming
>>bloated and unmaintainable. There are two solutions:
>>- provide a pointer to a network subsystem definition from struct net.
>> This costs an additional dereference
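As a rough illustration of that first option (all structure and field names below are hypothetical, not taken from the patch), the per-subsystem state is reached through a pointer in struct net, at the cost of one extra dereference:

/* Hypothetical sketch: subsystem state reached via a pointer from
 * struct net rather than being embedded in it directly. */
struct net_ipv4 {
    int sysctl_ip_forward;
    /* ... more ipv4-only state ... */
};

struct net {
    struct net_ipv4 *ipv4;  /* pointer: costs an extra dereference */
    /* ... other subsystems ... */
};

static inline int ip_forwarding_enabled(struct net *net)
{
    return net->ipv4->sysctl_ip_forward;  /* two loads instead of one */
}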
Ah, sorry. Didn't notice it's called only on boot.
Acked-By: Kirill Korotaev <[EMAIL PROTECTED]>
Kirill Korotaev wrote:
> imho panic() is too much.
> create_singlethread_workqueue() can fail e.g. due to out of memory...
>
> Thanks,
> Kirill
>
>
> Daniel
imho panic() is too much.
create_singlethread_workqueue() can fail e.g. due to out of memory...
Thanks,
Kirill
Daniel Lezcano wrote:
> Subject: make netns cleanup run in a separate workqueue
> From: Benjamin Thery <[EMAIL PROTECTED]>
>
> This patch adds a separate workqueue for cleaning up a net
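A minimal sketch of the idea, assuming an init path that can return an error instead of panicking when the workqueue cannot be created (names are illustrative, not the exact patch):

#include <linux/workqueue.h>
#include <linux/errno.h>
#include <linux/init.h>

/* Illustrative only: a dedicated workqueue for namespace cleanup. */
static struct workqueue_struct *netns_wq;

static int __init netns_wq_init(void)
{
    netns_wq = create_singlethread_workqueue("netns");
    if (!netns_wq)
        return -ENOMEM;  /* fail gracefully, no panic() */
    return 0;
}

/* Cleanup of a dying namespace is then queued onto that workqueue: */
static void netns_schedule_cleanup(struct work_struct *work)
{
    queue_work(netns_wq, work);
}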
Eric W. Biederman wrote:
> Patrick McHardy <[EMAIL PROTECTED]> writes:
>
>
>>Eric W. Biederman wrote:
>>
>>>-- The basic design
>>>
>>>There will be a network namespace structure that holds the global
>>>variables for a network namespace, making those global variables
>>>per network namespace.
>>
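For illustration only (member names are hypothetical, not the final layout), the quoted design moves former file-scope globals into a structure that each namespace instantiates:

#include <linux/list.h>

struct net_device;

/* Hypothetical sketch: variables that used to be single globals become
 * members of struct net, so every namespace gets its own copy. */
struct net {
    struct list_head  dev_base_head;  /* per-namespace device list */
    struct net_device *loopback_dev;  /* was one global loopback */
    /* ... more formerly-global state ... */
};

/* Code that used to read a global now takes the namespace explicitly. */
static struct net_device *loopback_of(struct net *net)
{
    return net->loopback_dev;
}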
Ben Greear wrote:
> Kirill Korotaev wrote:
>
>>Patrick McHardy wrote:
>>
>>
>>>I believe OpenVZ stores the current namespace somewhere global,
>>>which avoids passing the namespace around. Couldn't you do this
>>>as well?
>>>
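A small sketch of the two styles being compared, purely for illustration (the field path and function names are made up): threading the namespace through as an argument versus stashing it once on the current task.

#include <linux/sched.h>
#include <linux/nsproxy.h>

struct net;

/* Style 1: the namespace is passed explicitly down the call chain. */
int fib_lookup_explicit(struct net *net, unsigned int daddr);

/* Style 2 (the OpenVZ-like approach described above): stash the
 * namespace somewhere reachable from current and read it implicitly. */
static inline struct net *current_net(void)
{
    return current->nsproxy->net_ns;  /* hypothetical field path */
}

int fib_lookup_implicit(unsigned int daddr)
{
    /* Same lookup, but no extra argument threaded through callers. */
    return fib_lookup_explicit(current_net(), daddr);
}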
Jeff Garzik wrote:
> Eric W. Biederman wrote:
>
>>Jeff Garzik <[EMAIL PROTECTED]> writes:
>>
>>
>>>David Miller wrote:
>>>
I don't accept that we have to add another function argument
to a bunch of core routines just to support this crap,
especially since you give no way to turn it off.
Patrick McHardy wrote:
> Eric W. Biederman wrote:
>
>>-- The basic design
>>
>>There will be a network namespace structure that holds the global
>>variables for a network namespace, making those global variables
>>per network namespace.
>>
>>One of those per network namespace global variables will
Ben Greear wrote:
> Patrick McHardy wrote:
>
>>Eric W. Biederman wrote:
>>
>>
>>>-- The basic design
>>>
>>>There will be a network namespace structure that holds the global
>>>variables for a network namespace, making those global variables
>>>per network namespace.
>>>
>>>One of those per netw
Daniel,
Daniel Lezcano wrote:
> Pavel Emelianov wrote:
>
I did this in the very first version, but Alexey showed me that this
would be wrong. Look: when we create the second device it must be in
the other namespace, as it is useless to have both in one namespace.
But if we have the
David Miller wrote:
> From: Pavel Emelianov <[EMAIL PROTECTED]>
> Date: Wed, 06 Jun 2007 19:11:38 +0400
>
>
>>Veth stands for Virtual ETHernet. It is a simple tunnel driver
>>that works at the link layer and looks like a pair of ethernet
>>devices interconnected with each other.
>
>
> I would s
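To make the quoted veth description concrete, here is a heavily simplified sketch of the idea (not the actual driver; the private data layout and function name are made up): each device of the pair holds a pointer to its peer, and a transmit on one side is delivered as a receive on the other.

#include <linux/netdevice.h>
#include <linux/etherdevice.h>
#include <linux/skbuff.h>

/* Hypothetical private data: each side of the pair knows its peer. */
struct veth_priv {
    struct net_device *peer;
};

static int veth_xmit(struct sk_buff *skb, struct net_device *dev)
{
    struct veth_priv *priv = netdev_priv(dev);
    struct net_device *rcv = priv->peer;

    /* A transmit on this side shows up as a receive on the peer. */
    skb->dev = rcv;
    skb->protocol = eth_type_trans(skb, rcv);
    netif_rx(skb);
    return 0;
}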
>>The loss of performance is very noticeable inside the container and
>>seems to be directly related to the usage of the pair device and the
>>specific network configuration needed for the container. When the
>>packets are sent by the container, the MAC address is for the pair
>>device but the IP
David Miller wrote:
> From: Alexey Dobriyan <[EMAIL PROTECTED]>
> Date: Wed, 14 Mar 2007 16:07:11 +0300
>
>
>>ANK says: "It is rarely used, that's why it was not noticed.
>>But in the places where it is used, it should be a disaster."
>>
>>Signed-off-by: Alexey Dobriyan <[EMAIL PROTECTED]>
>
>
>
Eric, really good job!
Patches: 1-13, 15-24, 26-32, 34-44, 46-49, 52-55, 57 (all except below)
Acked-By: Kirill Korotaev <[EMAIL PROTECTED]>
14/59 - minor (extra space)
25/59 - minor note
33/59 - not sorted sysctl IDs
45/59 - typo
50/59 - copyright/file note
51/59 - copyright/fil
1. Please don't set your authorship/copyright on code which you just copied
from other places. It just doesn't look polite IMHO.
2. Please don't name files like ipc/ipc_sysctl.c;
ipc/sysctl.c sounds better IMHO.
3. Any reason to introduce CONFIG_SYSVIPC_SYSCTL?
Why not simply do
>
Eric, though I personally don't care much:
1. Please don't set your authorship/copyright on code which you just copied
from other places. It just doesn't look polite IMHO.
2. I would propose not to introduce utsname_sysctl.c;
both files are so small and minor that I can't see much reaso
The sysctl IDs are not sorted in the enum; see below.
> From: Eric W. Biederman <[EMAIL PROTECTED]> - unquoted
>
> We need to have the definitions of all top-level sysctl
> directories registered in sysctl.h so we don't conflict by
> accident and cause ABI problems.
>
> Signed-off-by: Eric W. Biederman <[EMAIL P
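For context, the central registry being asked for is essentially the existing top-level enum in include/linux/sysctl.h; roughly (values reproduced from memory, so treat them as illustrative):

/* Top-level sysctl directory IDs kept in one enum so that new entries
 * are allocated centrally, stay sorted, and cannot collide by accident. */
enum {
    CTL_KERN  = 1,  /* General kernel info and control */
    CTL_VM    = 2,  /* VM management */
    CTL_NET   = 3,  /* Networking */
    CTL_PROC  = 4,  /* Process info */
    CTL_FS    = 5,  /* Filesystems */
    CTL_DEBUG = 6,  /* Debugging */
    CTL_DEV   = 7,  /* Devices */
    CTL_BUS   = 8,  /* Buses */
};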
Another minor note.
> From: Eric W. Biederman <[EMAIL PROTECTED]> - unquoted
>
> Signed-off-by: Eric W. Biederman <[EMAIL PROTECTED]>
> ---
> arch/frv/kernel/pm.c | 50 +++---
> 1 files changed, 43 insertions(+), 7 deletions(-)
>
> diff --g
minor extra space in table below...
Kirill
> From: Eric W. Biederman <[EMAIL PROTECTED]> - unquoted
>
> Signed-off-by: Eric W. Biederman <[EMAIL PROTECTED]>
> ---
> fs/xfs/linux-2.6/xfs_sysctl.c | 258
> 1 files changed, 180 insertions(+), 78 deletions(
>>>If there is a better and less intrusive, while still obvious,
>>>method I am all for it. I do not like the OpenVZ thing of doing the
>>>lookup once, stashing the value in current, and then special-casing
>>>the exceptions.
>>
>>Why?
>
>
> I like it when things are obvious and not im
Herbert Poetzl wrote:
my point (until we have an implementation which clearly
shows that performance is equal to or better than isolation)
is simply this:
of course, you can 'simulate' or 'construct' all the
isolation scenarios with kernel bridging and routing
and tricky injection/marking of packets, b
On Tue, Sep 05, 2006 at 08:45:39AM -0600, Eric W. Biederman wrote:
Daniel Lezcano <[EMAIL PROTECTED]> writes:
For HPC, if you are interested in migration, you need a separate IP
per container. If you can take your IP address with you, migration of
networking state is simple. If you can't take your
Yes, performance is probably one issue.
My concern was about layer 2 / layer 3 virtualization. I agree layer 2
isolation/virtualization is the best for the "system container".
But there is another family of containers called "application containers";
it is not a system which is run inside a cont
Signed-Off-By: Kirill Korotaev <[EMAIL PROTECTED]>
diff --git a/net/core/neighbour.c b/net/core/neighbour.c
index 89b7904..a45bd21 100644
--- a/net/core/neighbour.c
+++ b/net/core/neighbour.c
@@ -1429,6 +1429,9 @@ int neigh_table_clear(struct neigh_table
kfree(tbl->phash_buck
Basically there are currently 3 approaches that have been proposed.
The trivial bsdjail style, as implemented by Serge and, in a slightly
more sophisticated version, in vserver. As this approach does not
touch the packets, it has little to no packet-level overhead. Basically
this is what I have cal
Temporary code to play with network namespaces in the simplest way.
Do
exec 7< /proc/net/net_ns
in your bash shell and you'll get a brand new network namespace.
There you can, for example, do
ip link set lo up
ip addr list
ip addr add 1.2.3.4 dev lo
ping -n 1.2.3
before dst_lock is tried.
Meanwhile, someone on CPU1 adds an entry to the gc list and
starts the timer.
If CPU2 was preempted long enough, this timer can expire
simultaneously with the resuming timer handler on CPU1, arriving
exactly at the situation described.
Signed-Off-By: Dmitry Mishin <[EMAIL PROTEC
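A generic sketch of the usual way such a race is closed (not necessarily the exact fix in the quoted patch; the table type here is hypothetical): make sure the periodic GC timer cannot be running, or fire again, before the table is torn down.

#include <linux/timer.h>

/* Hypothetical table with a periodic GC timer. */
struct example_table {
    struct timer_list gc_timer;
    /* ... hash buckets, locks ... */
};

static void example_table_clear(struct example_table *tbl)
{
    /* Wait for a concurrently running handler and prevent rearming
     * before any memory the handler touches is freed. */
    del_timer_sync(&tbl->gc_timer);
    /* ... now it is safe to free the buckets ... */
}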
2) for cases where we haven't implemented dynamic
table growth, specifying a proper limit argument
to the hash table allocation is a sufficient
solution for the time being
Agreed, but we don't know what the proper limits are.
I guess it would need someone running quite a lot of benchm
1) dynamic table growth is the only reasonable way to
handle this and not waste memory in all cases
Definitely, that's the ideal way to go.
But there's a lot of state to update (more or less
atomically, too) in the TCP hashes. Looks tricky to
do that without hurting performance, especiall
David Miller wrote:
From: Andi Kleen <[EMAIL PROTECTED]>
Date: Tue, 8 Aug 2006 08:53:03 +0200
That's still too big. Consider a 2TB machine with all memory in LOWMEM.
Andi, I agree with you: route.c should pass in a suitable limit.
I'm just suggesting a fix for a separate problem.
So summa
David Miller wrote:
we quickly discover this GIT commit:
424c4b70cc4ff3930ee36a2ef7b204e4d704fd26
[IPV4]: Use the fancy alloc_large_system_hash() function for route hash table
- rt hash table allocated using alloc_large_system_hash() function.
Signed-off-by: Eric Dumazet <[EMAIL PROTECTED]>
S
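A conceptual sketch of the point under discussion (names are hypothetical, and alloc_large_system_hash()'s exact parameter list has varied between kernel versions): the caller clamps the number of buckets rather than letting it scale purely with total memory.

/* Conceptual only: size the hash from available memory, but clamp it to
 * a caller-chosen limit so a machine with 2TB of LOWMEM does not end up
 * with an absurdly large route hash. */
static unsigned long rt_hash_size(unsigned long numentries, unsigned long limit)
{
    if (limit && numentries > limit)
        numentries = limit;
    return numentries;
}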
Structures related to IPv4 routing (FIB and routing cache)
are made per-namespace.
Hi Andrey,
if the resources are private to the namespace, how will you handle
NFS mounted before creating the network namespace? Do you take care of
that, or simply assume you can't access NFS anymore?
Cleanup of dev_base list use, with the aim of making the device list per-namespace.
In almost every case, use of the dev_base variable and the dev->next pointer
could easily be replaced by a for_each_netdev loop.
A few of the most complicated places were converted to using
first_netdev()/next_netdev().
As a proof
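As an illustration of the kind of conversion described above (using the macro's current two-argument form; the helpers at the time differed slightly, and handle_device() is a made-up hook):

#include <linux/netdevice.h>

extern void handle_device(struct net_device *dev);

/* Before: open-coded walk of the global list,
 *     for (dev = dev_base; dev; dev = dev->next) ...
 * After: iterate with the helper, which can later be made
 * namespace-aware without touching every call site. */
static void walk_devices(struct net *net)
{
    struct net_device *dev;

    for_each_netdev(net, dev)   /* caller holds RTNL */
        handle_device(dev);     /* hypothetical per-device hook */
}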
My point is that if you do namespace tagging at routing time, and
your packets are being routed only once, you lose the ability
to have separate routing tables in each namespace.
Right. What is the advantage of having separate routing tables?
It is impossible to have bridged networking,