Andrey Savochkin wrote:
Hi All,
I'd like to resurrect our discussion about network namespaces.
In our previous discussions it appeared that we have rather polar concepts
which seemed hard to reconcile.
Now I have an idea of how to look at all the discussed concepts to enable
everyone's usage scenario.
Sorry, I didn't understand your proposal correctly from the previous talk. :)
But...
On Tuesday 12 September 2006 07:28, Eric W. Biederman wrote:
> Do you have some concrete arguments against the proposal?
Yes, I have. I think it is an unnecessary complication. This complication will
be followed in additi
Dmitry Mishin <[EMAIL PROTECTED]> writes:
> On Monday 11 September 2006 18:57, Herbert Poetzl wrote:
>> I completely agree here, we need a separate namespace
>> for that, so that we can combine isolation and virtualization
>> as needed, unless the bind restrictions can be completely
>> expressed w
Dmitry Mishin <[EMAIL PROTECTED]> writes:
> On Sunday 10 September 2006 06:47, Herbert Poetzl wrote:
>> well, I think it would be best to have both, as
>> they are complementary to some degree, and IMHO
>> both, the full virtualization _and_ the isolation
>> will require a separate namespace to wo
On Monday 11 September 2006 18:57, Herbert Poetzl wrote:
> I completely agree here, we need a separate namespace
> for that, so that we can combine isolation and virtualization
> as needed, unless the bind restrictions can be completely
> expressed with an additional mangle or filter table (as
> wa
Herbert Poetzl wrote:
On Mon, Sep 11, 2006 at 04:40:59PM +0200, Daniel Lezcano wrote:
I am currently working on this and I am finishing a prototype bringing
isolation at the ip layer. The prototype code is very close to
Andrey's patches at TCP/UDP level. So the next step is to merge the
prot
On Mon, Sep 11, 2006 at 04:40:59PM +0200, Daniel Lezcano wrote:
> Dmitry Mishin wrote:
> >On Friday 08 September 2006 22:11, Herbert Poetzl wrote:
> >
> >>actually the light-weight ip isolation runs perfectly
> >>fine _without_ CAP_NET_ADMIN, as you do not want the
> >>guest to be able to mess with
Dmitry Mishin wrote:
On Friday 08 September 2006 22:11, Herbert Poetzl wrote:
actually the light-weight ip isolation runs perfectly
fine _without_ CAP_NET_ADMIN, as you do not want the
guest to be able to mess with the 'configured' ips at
all (not to speak of interfaces here)
It was only an e
On Sun, Sep 10, 2006 at 11:45:35AM +0400, Dmitry Mishin wrote:
> On Sunday 10 September 2006 06:47, Herbert Poetzl wrote:
> > well, I think it would be best to have both, as
> > they are complementary to some degree, and IMHO
> > both, the full virtualization _and_ the isolation
> > will require a
On Sat, Sep 09, 2006 at 09:41:35PM -0600, Eric W. Biederman wrote:
> Herbert Poetzl <[EMAIL PROTECTED]> writes:
>
> > On Sat, Sep 09, 2006 at 11:57:24AM +0400, Dmitry Mishin wrote:
> >> On Friday 08 September 2006 22:11, Herbert Poetzl wrote:
> >> > actually the light-weight ip isolation runs perf
Dmitry Mishin <[EMAIL PROTECTED]> writes:
> On Sunday 10 September 2006 07:41, Eric W. Biederman wrote:
>> I certainly agree that we are not at a point where a final decision
>> can be made. A major piece of that is that a layer 2 approach has
>> not been shown to be without a performance penalty.
> B
On Sunday 10 September 2006 07:41, Eric W. Biederman wrote:
> I certainly agree that we are not at a point where a final decision
> can be made. A major piece of that is that a layer 2 approach has
> not been shown to be without a performance penalty.
But it is required. Why limit possible usages?
On Sunday 10 September 2006 06:47, Herbert Poetzl wrote:
> well, I think it would be best to have both, as
> they are complementary to some degree, and IMHO
> both, the full virtualization _and_ the isolation
> will require a separate namespace to work,
[snip]
> I do not think that folks would w
Herbert Poetzl <[EMAIL PROTECTED]> writes:
> On Sat, Sep 09, 2006 at 11:57:24AM +0400, Dmitry Mishin wrote:
>> On Friday 08 September 2006 22:11, Herbert Poetzl wrote:
>> > actually the light-weight ip isolation runs perfectly
>> > fine _without_ CAP_NET_ADMIN, as you do not want the
>> > guest to
On Sat, Sep 09, 2006 at 11:57:24AM +0400, Dmitry Mishin wrote:
> On Friday 08 September 2006 22:11, Herbert Poetzl wrote:
> > actually the light-weight ip isolation runs perfectly
> > fine _without_ CAP_NET_ADMIN, as you do not want the
> > guest to be able to mess with the 'configured' ips at
> >
On Friday 08 September 2006 22:11, Herbert Poetzl wrote:
> actually the light-weight ip isolation runs perfectly
> fine _without_ CAP_NET_ADMIN, as you do not want the
> guest to be able to mess with the 'configured' ips at
> all (not to speak of interfaces here)
It was only an example. I'm thinkin
On Fri, Sep 08, 2006 at 05:10:08PM +0400, Dmitry Mishin wrote:
> On Thursday 07 September 2006 21:27, Herbert Poetzl wrote:
> > well, who said that you need to have things like RAW sockets
> > or other protocols except IP, not to speak of iptable and
> > routing entries ...
> >
> > folks who _want_
On Thursday 07 September 2006 21:27, Herbert Poetzl wrote:
> well, who said that you need to have things like RAW sockets
> or other protocols except IP, not to speak of iptable and
> routing entries ...
>
> folks who _want_ full network virtualization can use the
> more complete virtual setup and
On Thu, Sep 07, 2006 at 12:29:21PM -0600, Eric W. Biederman wrote:
> Daniel Lezcano <[EMAIL PROTECTED]> writes:
> >
> > IMHO, I think there is one reason. The unsharing mechanism is
> > not only for containers; it aims at other kinds of isolation, like a
> > "bsdjail" for example. The unshare syscall is
Herbert Poetzl <[EMAIL PROTECTED]> writes:
> On Thu, Sep 07, 2006 at 08:23:53PM +0400, Kirill Korotaev wrote:
>
> well, who said that you need to have things like RAW sockets
> or other protocols except IP, not to speak of iptable and
> routing entries ...
>
> folks who _want_ full network virtua
Daniel Lezcano <[EMAIL PROTECTED]> writes:
>
> IMHO, I think there is one reason. The unsharing mechanism is not only for
> containers; it aims at other kinds of isolation, like a "bsdjail" for example. The
> unshare syscall is flexible; should the network unsharing be a one-block
> solution?
> For examp
On Thu, Sep 07, 2006 at 08:23:53PM +0400, Kirill Korotaev wrote:
> >>Herbert Poetzl wrote:
> >>
> >>>my point (until we have an implementation which clearly
> >>>shows that performance is equal/better to isolation)
> >>>is simply this:
> >>>
> >>> of course, you can 'simulate' or 'construct' all th
Herbert Poetzl wrote:
my point (until we have an implementation which clearly
shows that performance is equal/better to isolation)
is simply this:
of course, you can 'simulate' or 'construct' all the
isolation scenarios with kernel bridging and routing
and tricky injection/marking of packets, b
Caitlin Bestler wrote:
[EMAIL PROTECTED] wrote:
Finally, as I understand both network isolation and network
virtualization (both level2 and level3) can happily co-exist. We do
have several filesystems in kernel. Let's have several network
virtualization approaches, and let a user choose. Is t
Stephen Hemminger <[EMAIL PROTECTED]> writes:
> The problem with VNICs is that they won't work for all devices (without lots of
> work), and for many devices it requires putting the device in promiscuous
> mode. It also plays havoc with network access control devices.
Which is fine. If it works it is
On Wed, 06 Sep 2006 17:25:50 -0600
[EMAIL PROTECTED] (Eric W. Biederman) wrote:
> "Caitlin Bestler" <[EMAIL PROTECTED]> writes:
>
> > [EMAIL PROTECTED] wrote:
> >
> >>
> >>> Finally, as I understand both network isolation and network
> >>> virtualization (both level2 and level3) can happily co
"Caitlin Bestler" <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] wrote:
>
>>
>>> Finally, as I understand both network isolation and network
>>> virtualization (both level2 and level3) can happily co-exist. We do
>>> have several filesystems in kernel. Let's have several network
>>> virtualiza
[EMAIL PROTECTED] wrote:
>
>> Finally, as I understand both network isolation and network
>> virtualization (both level2 and level3) can happily co-exist. We do
>> have several filesystems in kernel. Let's have several network
>> virtualization approaches, and let a user choose. Does that make
>>
Kir Kolyshkin wrote:
Herbert Poetzl wrote:
my point (until we have an implementation which clearly
shows that performance is equal/better to isolation)
is simply this:
of course, you can 'simulate' or 'construct' all the
isolation scenarios with kernel bridging and routing
and tricky inject
Kir Kolyshkin wrote:
> I am not sure about "network isolation" (used by Linux-VServer), but as
> it comes for level2 vs. level3 virtualization, I see a need for both.
> Here is the easy-to-understand comparison which can shed some light:
> http://wiki.openvz.org/Differences_between_venet_and_
Cedric Le Goater <[EMAIL PROTECTED]> writes:
> Eric W. Biederman wrote:
>
> hmm ? What about an MPI application ?
>
> I would expect each MPI task to be run in its container on different nodes
> or on the same node. These individual tasks _communicate_ with each
> other through the MPI layer (n
Eric W. Biederman wrote:
>> This family of containers is also used for HPC (high performance computing)
>> and
>> for distributed checkpoint/restart. The cluster runs hundreds of jobs,
>> spawning
>> them on different hosts inside an application container. Usually the jobs
>> communicate with bro
Eric W. Biederman wrote:
Kir Kolyshkin <[EMAIL PROTECTED]> writes:
Herbert Poetzl wrote:
my point (until we have an implementation which clearly
shows that performance is equal/better to isolation)
is simply this:
of course, you can 'simulate' or 'construct' all the
isolation scenar
Kir Kolyshkin <[EMAIL PROTECTED]> writes:
> Herbert Poetzl wrote:
>> my point (until we have an implementation which clearly
>> shows that performance is equal/better to isolation)
>> is simply this:
>>
>> of course, you can 'simulate' or 'construct' all the
>> isolation scenarios with kernel br
Herbert Poetzl <[EMAIL PROTECTED]> writes:
> On Wed, Sep 06, 2006 at 11:10:23AM +0200, Daniel Lezcano wrote:
>>
>> As far as I see, vserver use a layer 3 solution but, when needed, the
>> veth "component", made by Nestor Pena, is used to provide a layer 2
>> virtualization. Right?
>
> well, no,
Herbert Poetzl wrote:
my point (until we have an implementation which clearly
shows that performance is equal/better to isolation)
is simply this:
of course, you can 'simulate' or 'construct' all the
isolation scenarios with kernel bridging and routing
and tricky injection/marking of packets,
On Wed, Sep 06, 2006 at 11:10:23AM +0200, Daniel Lezcano wrote:
> Hi Herbert,
>
> >well, the 'ip subset' approach Linux-VServer and
> >other Jail solutions use is very clean, it just does
> >not match your expectations of a virtual interface
> >(as there is none) and it does not cope well with
> >
Kirill Korotaev wrote:
I think classifying network virtualization by Layer X is not good enough.
OpenVZ has Layer 3 (venet) and Layer 2 (veth) implementations, but
in both cases networking stack inside VE remains fully virtualized.
Let's describe all those (three?) approaches at
http://wiki.o
On Tue, Sep 05, 2006 at 08:45:39AM -0600, Eric W. Biederman wrote:
Daniel Lezcano <[EMAIL PROTECTED]> writes:
For HPC if you are interested in migration you need a separate IP
per container. If you can take your IP address with you, migration of
networking state is simple. If you can't take your
Hi Herbert,
well, the 'ip subset' approach Linux-VServer and
other Jail solutions use is very clean, it just does
not match your expectations of a virtual interface
(as there is none) and it does not cope well with
all kinds of per context 'requirements', which IMHO
do not really exist on the ap
Herbert Poetzl <[EMAIL PROTECTED]> writes:
> On Tue, Sep 05, 2006 at 08:45:39AM -0600, Eric W. Biederman wrote:
>> Daniel Lezcano <[EMAIL PROTECTED]> writes:
>>
>> For HPC if you are interested in migration you need a separate IP
>> per container. If you can take your IP address with you, migration
> This family of containers is also used for HPC (high performance computing)
> and
> for distributed checkpoint/restart. The cluster runs hundreds of jobs, spawning
> them on different hosts inside an application container. Usually the jobs
> communicate via broadcast and multicast.
> Applicati
On Tue, Sep 05, 2006 at 08:45:39AM -0600, Eric W. Biederman wrote:
> Daniel Lezcano <[EMAIL PROTECTED]> writes:
>
> >>>2. People expressed concerns that complete separation of namespaces
> >>> may introduce an undesired overhead in certain usage scenarios.
> >>> The overhead comes from packets
Yes, performance is probably one issue.
My concern was about layer 2 vs. layer 3 virtualization. I agree a layer 2
isolation/virtualization is the best for the "system container".
But there is another family of containers called "application containers";
it is not a system which is run inside a cont
For HPC if you are interested in migration you need a separate IP per
container. If you can take your IP address with you, migration of
networking state is simple. If you can't take your IP address with
you, a network container is nearly pointless from a migration
perspective.
Eric, please, I kno
Daniel Lezcano <[EMAIL PROTECTED]> writes:
>>>2. People expressed concerns that complete separation of namespaces
>>> may introduce an undesired overhead in certain usage scenarios.
>>> The overhead comes from packets traversing input path, then output path,
>>> then input path again in the
Hi all,
This complete separation of namespaces is very useful for at least two
purposes:
- allowing users to create and manage their own tunnels and
VPNs, and
- enabling easier and more straightforward live migration of groups of
processes with their environment.
Basically there are currently 3 approaches that have been proposed.
The trivial bsdjail style, as implemented by Serge and, in a slightly
more sophisticated version, in vserver. As this approach does not
touch the packets, it has little to no packet-level overhead. Basically
this is what I have cal
Alexey Kuznetsov <[EMAIL PROTECTED]> writes:
> Hello!
>
>> (application) containers. Performance aside, are there any reasons why
>> this approach would be problematic for c/r?
>
> This approach is just perfect for c/r.
Yes. For c/r you need to take your state with you.
> Probably, this is the
Hello!
> (application) containers. Performance aside, are there any reasons why
> this approach would be problematic for c/r?
This approach is just perfect for c/r.
Probably, this is the only approach when migration can be done
in a clean and self-consistent way.
Alexey
-
To unsubscribe from t
Quoting Andrey Savochkin ([EMAIL PROTECTED]):
> Hi All,
>
> I'd like to resurrect our discussion about network namespaces.
> In our previous discussions it appeared that we have rather polar concepts
> which seemed hard to reconcile.
> Now I have an idea how to look at all discussed concepts to en
Hi All,
I'd like to resurrect our discussion about network namespaces.
In our previous discussions it appeared that we have rather polar concepts
which seemed hard to reconcile.
Now I have an idea of how to look at all the discussed concepts to enable
everyone's usage scenario.
1. The most straightforwa
Cedric Le Goater <[EMAIL PROTECTED]> writes:
> How does that proposal differ from Daniel's initial patchset? How far was
> that patchset from reaching a similar agreement?
My impression is as follows. The OpenVz implementation and mine work
on the same basic principles of handling the network stack
Hello,
Eric W. Biederman wrote:
> Thinking about this I am going to suggest a slightly different direction
> for getting a patchset we can merge.
>
> First we concentrate on the fundamentals.
> - How we mark a device as belonging to a specific network namespace.
> - How we mark a socket as belonging
Thinking about this I am going to suggest a slightly different direction
for getting a patchset we can merge.
First we concentrate on the fundamentals.
- How we mark a device as belonging to a specific network namespace.
- How we mark a socket as belonging to a specific network namespace.
As part of