FYI: I heard from Zen yesterday that public IPv6 support isn't yet on their 
roadmap, so anything we do would have to be purely about internal testing of 
IPv6 support.

Thanks

Dave

On 11 Dec 2012, at 14:51, Alexander Sack <a...@linaro.org> wrote:

> OK,
> 
> In general it feels like this is a bit lower priority than many other topics 
> ...
> 
> I propose: let's create a "LAVA lab IPv6 support" roadmap card where
> we collect the elements of IPv6 support.
> 
> Milestone forecast for delivery would be April or May 2013 for now, I
> guess. Any volunteers to create the roadmap card stub?
> 
> 
> On Tue, Dec 11, 2012 at 4:09 AM, Michael Hudson-Doyle
> <michael.hud...@linaro.org> wrote:
>> Alexander Sack <a...@linaro.org> writes:
>> 
>>> On Mon, Dec 10, 2012 at 9:46 AM, Dave Pigott <dave.pig...@linaro.org> wrote:
>>>> Hi all,
>>>> 
>>>> I was just discussing IPv6 with Philip Colmer, our new IT Services Manager 
>>>> (cc'd on this mail), and it strikes me that we should at least be 
>>>> considering dual running at some point in the future, i.e. providing both 
>>>> v4 and v6. I'm not clear what the ramifications are, or as yet whether Zen 
>>>> will support it. Philip has experience with this, and seems to remember 
>>>> that Zen do support it, but I'll bang an e-mail out to them to check.
>>>> 
>>>> The reason for this e-mail is to start a discussion as to whether we think 
>>>> it's worth raising a BP, or if we can ignore this issue.
>>>> 
>>>> Thoughts, comments and brickbats welcome.
>>> 
>>> I am quite sure that supporting IPv6 inside the LAVA lab is a
>>> worthwhile thing to do...
>> 
>> What does this mean?  FWIW, the ethernet interfaces on machines in the
>> lab appear to have IPv6 addresses:
>> 
>> eth0      Link encap:Ethernet  HWaddr 68:b5:99:be:54:8c
>>          inet addr:192.168.1.10  Bcast:192.168.255.255  Mask:255.255.0.0
>>          inet6 addr: fe80::6ab5:99ff:febe:548c/64 Scope:Link  <------------- HERE
>>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>>          RX packets:20176938 errors:0 dropped:0 overruns:0 frame:0
>>          TX packets:37330059 errors:0 dropped:0 overruns:0 carrier:0
>>          collisions:0 txqueuelen:1000
>>          RX bytes:11878897411 (11.8 GB)  TX bytes:50409227661 (50.4 GB)
>>          Interrupt:31 Memory:f8000000-f8012800
>> 
>> but I don't know if that means very much (I can't even get ping6 to talk
>> to the address of eth0 on the host I'm running it on -- but I know very
>> little about IPv6 in general).
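
The fe80:: address in the ifconfig output above is a link-local address, which
may explain the ping6 failure: link-local addresses are only meaningful on one
segment, so ping6 needs an interface scope, e.g.
`ping6 fe80::6ab5:99ff:febe:548c%eth0` (or `ping6 -I eth0 <addr>`). A minimal
check with Python's stdlib ipaddress module, using the address from the output
above:

```python
import ipaddress

# The address ifconfig reported on eth0 -- fe80::/10 is the link-local range.
addr = ipaddress.IPv6Address("fe80::6ab5:99ff:febe:548c")

print(addr.is_link_local)  # True  -- only reachable with an interface scope
print(addr.is_global)      # False -- not a routable public address
```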
>> 
>> One thing that we may have to watch for is that, until we have an
>> IPv6 internet address, we don't end up preferring AAAA records over A
>> records when trying to connect to hosts that have both.
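
On glibc systems one knob for this is /etc/gai.conf, which controls how
getaddrinfo() sorts results (per RFC 3484/6724). A sketch, not a tested lab
config: uncommenting the stock precedence line below makes IPv4-mapped (i.e.
A-record) results win over AAAA.

```
# /etc/gai.conf -- uncomment to prefer IPv4 (A) results over IPv6 (AAAA)
precedence ::ffff:0:0/96  100
```

This would want reverting once the lab has real public IPv6 connectivity.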
>> 
>>> Whether we need public IPv6 or not, I don't have any strong feelings.
>>> I see that IPv6 is probably modern; so if it comes more or less for
>>> free I would say: let's think through this, make a plan and decide.
>> 
>> It seems Zen don't really support this yet.  We can do 6in4/6to4 or
>> whatever it's called if we want -- I guess the advantage of this would
>> be being able to route to devices in the lab without having to bounce
>> through linaro-gateway[0] but I don't know if that would be useful
>> really[1].
>> 
>> [0] This is also a risk if we don't configure things correctly!  We
>>    currently assume that various admin interfaces with weak passwords
>>    are not directly routable.  I presume that configuring this sort of
>>    thing is part of setting up 6in4 though.
>> 
>> [1] The person doing the routing would need to have access to the IPv6
>>    internet too presumably, which I certainly don't have currently.
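
For reference, a 6in4 tunnel of the sort mentioned above is typically just a
sit interface pointed at a tunnel broker. A rough sketch only: the broker
endpoint (198.51.100.1), the delegated prefix (2001:db8::/64), and the local
address here are all placeholder/documentation values, not real lab details.

```shell
# Hypothetical 6in4 (sit) tunnel setup -- all addresses are placeholders.
ip tunnel add sit1 mode sit remote 198.51.100.1 local 192.168.1.10 ttl 255
ip link set sit1 up
ip -6 addr add 2001:db8::2/64 dev sit1
ip -6 route add default dev sit1
```

Note this is exactly where the firewalling concern in [0] bites: once such a
tunnel is up, previously unroutable admin interfaces could become reachable
unless filtered.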
>> 
>> Cheers,
>> mwh
> 
> 
> 
> -- 
> Alexander Sack
> Director, Linaro Platform Engineering
> http://www.linaro.org | Open source software for ARM SoCs
> http://twitter.com/#!/linaroorg - http://www.linaro.org/linaro-blog


_______________________________________________
linaro-dev mailing list
linaro-dev@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/linaro-dev
