ce property. The
user-data is only passed to the instance that runs Docker, not the
containers. Configuring the CMD and/or environment variables for the
container is the correct approach.
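For illustration only, here is a minimal sketch of that approach using
the docker-py client. The image name, command, and environment values
are hypothetical; this is not the nova-docker code path itself.

import docker

client = docker.Client(base_url='unix://var/run/docker.sock')
container = client.create_container(
    image='myapp:latest',               # hypothetical image
    command='/usr/bin/myapp --serve',   # overrides the image CMD
    environment={'APP_MODE': 'production'},
)
client.start(container)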
--
Regards,
Eric Windisch
a
instances not only with Docker, but with a connector to a libswarm server
(swarmd). That swarmd process would need to be running and listening
somewhere. You'd need to load (and possibly write) plugins for
libswarm/swarmd to provide your scheduling. You
to implement it in our daily
> use. I have had to change at least one wikipage so far, it is far easier
> if folks simply employ the correct usage from the beginning.
>
On wiki pages and other published media -- absolutely.
For our daily use in IRC and other casual discussion
team
to remain both focused and to conserve precious compute resources. If this
is an issue, then I'd like to plot a timeline, however rough, with the
infrastructure team.
--
Regards,
Eric Windisch
>
> Given resource concerns, maybe just adding it to the experimental
> pipeline would be sufficient?
For clarity, the discussed patch is to promote an existing experimental job
to silent.
Regards -Eric
des an environment similar to openstack-infra that can consistently
and reliably run on one's laptop, while bringing a devstack-managed
OpenStack installation online in 5-8 minutes.
Like other devstack-based installs, this is not for running production
OpenStack deployments.
--
many of the "Linux containers" features are disabled for
one reason or another.
--
Regards,
Eric Windisch
very useful. That said, all of
the really "interesting" things done by Nova that require privileges are
done by rootwrap... a rootwrap which leveraged Docker would make
containerization of Nova more meaningful and would be a boon for Nova
security overall.
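To make the idea concrete, here is a purely hypothetical sketch of how a
Docker-leveraging rootwrap might confine a privileged command to a
minimal, capability-limited container instead of executing it directly
on the host. Nothing like this exists in rootwrap today; the helper
image and function name are invented.

import subprocess

def run_privileged(cmd, caps=('NET_ADMIN',)):
    # Run a whitelisted command inside a throwaway container that holds
    # only the capabilities the command actually needs.
    docker_cmd = ['docker', 'run', '--rm', '--net=host', '--cap-drop=ALL']
    docker_cmd += ['--cap-add=%s' % cap for cap in caps]
    docker_cmd += ['privileged-helpers']   # hypothetical helper image
    docker_cmd += list(cmd)
    return subprocess.check_call(docker_cmd)

run_privileged(['ip', 'link', 'set', 'eth0', 'up'])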
--
Regards,
Eric Windisch
On Tue, Aug 26, 2014 at 3:35 PM, Martinx - ジェームズ
wrote:
> Hey Stackers! Wait! =)
>
> Let me ask something...
>
> Why are you guys using Docker within a VM?!?! What is the point of doing
> such thing?!
>
> I thought Docker was here to entirely replace the virtualization layer,
> bringing a "bare
feel that being able to stay out of
tree and having our own core team would be beneficial, but I wouldn't want
to do this unless it applied equally to all drivers.
--
Regards,
Eric Windisch
ng it may still be "valid" and "included" (if not core), is a big
step for OpenStack in reducing that cost. Obviously, all I've just said
could be applied to the ZeroMQ driver as well as it applies to Docker.
The OpenStack CI system is now advanced and mature enough that breaking
34f93fca236807ed/nova/tests/virt/test_ironic_api_contracts.py
--
Regards,
Eric Windisch
a separate service for managing containers draws a thick line
in the sand that will somewhat stiffen innovation around
hypervisor-based virtualization. That isn't necessarily a bad thing,
it will help maintain stability in the project. However, the choice
and the implications shouldn't be ig
On Fri, Nov 22, 2013 at 11:49 AM, Krishna Raman wrote:
> Reminder: We are meeting in about 15 minutes on #openstack-meeting channel.
I wasn't able to make it. Was meeting-bot triggered? Is there a log of
today's discussion?
Thank you,
en responding to
patches and blueprints offering improvements and feature requests for
oslo.messaging.
--
Regards,
Eric Windisch
sing on are in-line with moving
It sounds like the current model and process, while not perfect, isn't
too dysfunctional. Attempting to move the EC2 or GCE code into a
Stackforge repository might kill them before they can reach that bar
you're looking to set.
What more is needed from the
pdate the GCE code prior to
submission.
Does anyone else care to comment or discuss using Pecan/WSME? Alex
Levine? Doug Hellman? Russell?
--
Regards,
Eric Windisch
ress on the use-cases? Is there a wiki page?
Regards,
Eric Windisch
ack/
You might also try manually launching docker-registry, or stopping it if it
is already running.
Regards,
Eric Windisch
sponse through other means, but
we should take this off the developer list which is not for usage questions
(those usually go to the general list, operator list, or Ask OpenStack)
Regards,
Eric Windisch
nce of
this driver from Sam Alba.
I'll try and reproduce this myself.
Regards,
Eric Windisch
en able to reproduce an error myself, but wish
to confirm that this matches the error you're seeing.
Regards,
Eric Windisch
would we need to do to
> test it?
>
Looking at the code, I don't expect it to be an issue. The monkey-patching
will cause eventlet.spawn to be called for threading.Thread. The code looks
eventlet-friendly enough on the surface. Error handling around file
read/write could be affected.
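A small demonstration of that behavior, assuming eventlet is installed:
after monkey-patching, threading.Thread is backed by green threads, so
thread creation effectively goes through eventlet.spawn.

import eventlet
eventlet.monkey_patch()

import threading

def worker():
    # Under the patch, this runs in a green thread, not an OS thread.
    print('current thread: %r' % threading.current_thread())

t = threading.Thread(target=worker)
t.start()
t.join()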
the traffic inside
> of the container would be able to route correctly to the host and reach
> the services.
>
>
Swapnil, try adding a value for HOST_IP into your localrc, matching your
machine's IP address.
Regards,
Eric Windisch
er driver can simply warn or exit if HOST_IP is
set to 127.0.0.1, as the error that is currently received is certainly not
obvious enough.
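A sketch of the guard being suggested (hypothetical, not the actual
driver code): fail fast with an explicit message rather than letting
the confusing downstream error surface.

import os
import sys

host_ip = os.environ.get('HOST_IP', '127.0.0.1')
if host_ip.startswith('127.'):
    sys.exit('HOST_IP is %s; containers cannot reach the host on a '
             'loopback address. Set HOST_IP in your localrc to the '
             "machine's real IP." % host_ip)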
--
Regards,
Eric Windisch
limited testing of the v3 API, however, I've seen relatively few failures
and most or all overlapped with the existing v2 failures. I'm not sure how
Russell or the community feels about skipping Tempest tests for v3, and I
would like to try making these pass, but I
own branch which
includes these patches (using the NOVA_REPO / NOVA_BRANCH variables in
devstack).
--
Regards,
Eric Windisch
>
> I think for this cycle we really do need to focus on consolidating and
> testing the existing driver design and fixing up the biggest
> deficiency (1) before we consider moving forward with lots of new
+1
> 1) Outbound messaging connection re-use - right now every outbound
> messaging creat
way to know that our tests are breaking, such as an email notification,
without manually checking Kibana.
Regards,
Eric Windisch
e this possible!
Regards,
Eric Windisch
m not as familiar with Heat's
'software config' as I should be, although I attended a couple of sessions on
it last week. I'm not sure how this solves the problem? Is the plan to have
the software-config-agent communicate over the network to/from Heat, and to
the instance'
s is reflected in the etherpad. My approach to this question
was already with the presumption there is value in having access to block
devices without filesystems, but that there would be additional utility
should we have a viable story for mounting filesystems.
--
Regards,
Eric Windisch
considered that we could provide
configurations that only allow FUSE. Granted, there might be some
possibility of implementing a solution that would limit containers to
mounting specific types of filesystems, such as only allowing FUSE mounts.
--
Regards,
Eric Windisch
>
>
> What I hope to do is setup a check doing CI on devstack-f20 nodes[3],
> this will setup a devstack based nova with the nova-docker driver and
> can then run whatever tests make sense (currently only a minimal test,
> Eric I believe you were looking at tempest support maybe it could be
> hook
obs in the gate run significantly
faster than devstack on a laptop? Does that have to be the case? Can we not
consolidate these into a single solution that is always fast for everyone,
all the time? Something used in dev and gating? Something that might reduc
tc/hosts so much.
I had no problems, but I haven't tested Dockenstack with the Docker 0.9 or
0.10 releases, I last used it on 0.8.1. I'll be updating the Dockerfile
and testing it thoroughly with the latest Docker release once we merge the
devstack patches.
Regards,
Eric Windisch
locally on their laptops/workstations (or in 3rd-party
CI).
--
Regards,
Eric Windisch
the code itself is bad. What
this driver needs is major cleanup, refactoring, and better testing.
Regards,
Eric Windisch
e tests as either the
driver itself improves, or support in consuming projects and/or
oslo.messaging itself improves. I'd suggest that effort is better spent
there than building new bespoke tests.
Thanks and good luck! :)
Regards,
Eric Windisch
ient because it eliminates the need for a static configuration,
making tempest tests much easier to run and generally easier for anyone to
deploy, but it's intended to be an example of hooking into an inventory
service, not necessarily the de facto solution.
--
Regards,
Eric Windisch
ount calls and handle them in
userspace.
References:
*
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/tree/Documentation/prctl/seccomp_filter.txt?id=HEAD
* http://chdir.org/~nico/seccomp-nurse/
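For the curious, a minimal sketch of that mechanism using the libseccomp
Python bindings (the "seccomp" module); the SIGSYS handler that would do
the userspace handling is omitted for brevity.

import seccomp

# Allow every syscall by default, but trap mount(2) so the kernel
# delivers SIGSYS and the call can be emulated in userspace.
f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
f.add_rule(seccomp.TRAP, "mount")
f.load()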
--
Regards,
Eric Windisch
Where you could avoid the risk is if the image you're getting from
> glance is not in fact a filesystem, but rather a tar.gz of the container
> filesystem. Then Nova would simply be extracting the contents of the
> tar archive and not accessing an untrusted filesystem image
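As a sketch of that approach: even plain tar extraction of untrusted
input needs care, since member paths must be validated so "../" entries
cannot escape the target directory.

import os
import tarfile

def safe_extract(archive_path, dest):
    with tarfile.open(archive_path, 'r:gz') as tar:
        for member in tar.getmembers():
            target = os.path.realpath(os.path.join(dest, member.name))
            if not target.startswith(os.path.realpath(dest) + os.sep):
                raise RuntimeError('blocked traversal: %s' % member.name)
        tar.extractall(dest)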
t could prove to be a useful feature in its own right.
It is the ability to write to the block device which is a risk should it be
mounted.
Having that read-only view also provides a certain awareness to the
container of the existence of that volume. It allows the container to
ATTEMPT to perf
s were originally planning to come prior to last
week's addition of the containers breakout room.
I suspect other containers-oriented folks might yet want to register. If
so, I think now would be the time to speak up.
--
Regards,
Eric Windisch
ttending for
containers-specific matters, but have already registered for the Nova
mid-cycle, should we recommend they release their registrations to help
clear the wait-list?
Regards,
Eric Windisch
est-stable,
version-in-ubuntu, version-in-rhel", or any number of back-versions
included in the gate. The version-in-rhel and version-in-ubuntu might be
good candidates for 3rd-party CI.
Regards,
Eric Windisch
On Wed, Jul 16, 2014 at 12:55 PM, Roman Bogorodskiy
<rbogorods...@mirantis.com> wrote:
> Eric Windisch wrote:
>
> > This thread highlights more deeply the problems for the FreeBSD folks.
> > First, I still disagree with the recommendation that they contribute to
>
always be the 128-bit output of MD5.
This much might be obvious, but I felt it was worth clarifying and
etching into the blueprint or other design documentation.
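A quick illustration of the point being etched in: the digest length is
fixed regardless of input length.

import hashlib

for name in ('a', 'a much longer input string'):
    digest = hashlib.md5(name.encode('utf-8')).hexdigest()
    print(len(digest) * 4, digest)   # always 128 bits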
--
Regards,
Eric Windisch
rather than a webpage.
--
Regards,
Eric Windisch
e...
There are other components of oslo that are terse and questionable as
standalone libraries. For these, it might make sense to aggressively
consider rolling some modules together?
One clear example would be log.py and log_handler.py, another would be
peri
Ceilometer. Once you move or recreate the blueprint, please email the
list with the updated URL.
--
Regards,
Eric Windisch
offering a peer-to-peer solution.
If someone so strongly desires and prefers AMQP 1.0 over ZeroMQ for
peer-to-peer messaging that they'll write and maintain an implementation
for oslo.rpc / oslo.messaging, I'd be happy to see it introduced. I suspect
there is much code that could be share
ver. Because
a pure peer-to-peer system has no centralized broker, there needs to be
some peer tracker to provide an analogue to a queue. It would be possible
for an AMQP 1.0 based driver to leverage this module.
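A toy sketch of that peer-tracker idea (names are illustrative only):
something must map a logical topic to the set of subscribed peers, the
way a queue does on a brokered system.

class MatchMaker(object):
    def __init__(self):
        self._topics = {}   # topic -> set of 'host:port' peers

    def register(self, topic, peer):
        self._topics.setdefault(topic, set()).add(peer)

    def queues(self, topic):
        # The peers playing the role of a queue for this topic.
        return sorted(self._topics.get(topic, ()))

mm = MatchMaker()
mm.register('scheduler', 'node-1:9501')
mm.register('scheduler', 'node-2:9501')
print(mm.queues('scheduler'))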
--
Regards,
Eric Windisch
explicit server + fanout would be useful.
One example might be an in-memory state update between local processes
using something akin to the following target:
Target(exchange='nova', topic='scheduler', server=CONF.host,
fanout=True)
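Fleshed out slightly, and assuming such a server + fanout combination
were actually supported by the drivers, usage might look like the
following (imports are from the oslo namespace-package era, and the
method name is a placeholder):

from oslo.config import cfg
from oslo import messaging

transport = messaging.get_transport(cfg.CONF)
target = messaging.Target(exchange='nova', topic='scheduler',
                          server=cfg.CONF.host, fanout=True)
client = messaging.RPCClient(transport, target)
client.cast({}, 'update_state', state='free')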
Regards,
Eric Windisch
ds.
They can be broken for external consumers of these methods, because there
shouldn't be any. It will be a good lesson to anyone who tries to abuse
private methods.
--
Regards,
Eric Windisch
ack(). This opens up the option, besides creating a new
method, of simply updating all the existing method calls in
amqp.py, impl_kombu.py, and impl_qpid.py.
--
Regards,
Eric Windisch
the primary projects, but will ultimately become stalled if the library
work is not first completed.
--
Regards,
Eric Windisch
ving it now? My belief here is we should be following the
principle of "ask forgiveness, not permission". Try Python 3 and then
fall back to Python 2 whenever possible.
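The canonical form of that pattern, applied to imports:

try:
    import configparser                      # Python 3
except ImportError:
    import ConfigParser as configparser      # Python 2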
--
Regards,
Eric Windisch
Ceilometer), or is this something you intend to put into
service.py?
Also, fyi, I'm not actually terribly opposed to this patch. It makes
some sense. I just want to make sure we don't foul up the abstraction
in some way or unintentionally give developers rope they'll inevitably
strangle t
out of api.py and into its own module of session.py. This
session management code is probably where you'll most have to decide
whether it is worthwhile bringing in, and whether Glance really has such
unique requirements that it needs to maintain this code on its own.
--
Regards,
Eric Windisch
determine their bay ID,
while also guaranteeing uniqueness (or as unique as UUID gets, anyway).
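Spelled out in Python, the derivation the shell fragment below performs:
a name-based UUIDv5 computed from the tenant's UUID, so any client can
recompute the bay ID without a lookup (the tenant ID here is a dummy):

import uuid

tenant_id = '12345678-1234-5678-1234-567812345678'
bay_uuid = uuid.uuid5(uuid.UUID('urn:uuid:' + tenant_id), 'swarmbay')
print(bay_uuid)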
Regards,
Eric Windisch
:uuid:${TENANT_ID}'), 'swarmbay')")
$ cat > ~/container.json << END
{
"bay_uuid": "$BAY_UUID",
"name": "test-container",
"image_id": "cirros",
"command": "ping -c 4 8.8.8.8
t 75% compatible at the moment.
Ideally, the Docker backend would work with both single Docker hosts and
clusters of Docker machines powered by Swarm. It would be nice, however, if
scheduler hints could be passed from Magnum to
needs
access to the network. The capabilities and namespaces mechanisms resolve
these security conundrums and make it easier to follow the principle of
least privilege.
Regards,
Eric Windisch
like [6].
> I'm willing to prepare a current driver architecture overview with some
> graphical UML charts, and to continue discussing the driver architecture.
>
Documentation has always been a sore point. +2
--
Regards,
Eric Windisch
From my experience, making fast-moving changes is far easier when code is
split out. Changes occur too slowly when integrated.
I'd be +1 on splitting the code out. I expect you will get more done this
way.
Regards,
Eric
o
stay alive. As for myself, for the record, I am seldom involved at this
point, but do contribute occasional time to reviews or the odd patch
in my free time.
I'll finish by saying that I do think it's finally time to consider pulling
it back in. While doing so m