Thanks everyone, the patch was merged.
On Tue, Mar 1, 2016 at 6:22 PM, Dmitriy Shulyak
wrote:
> Hello folks,
>
> I am not sure that I will need an FFE, but in case I won't be able to land
> this patch [0] tomorrow - I would like to ask for one in advance. I will
> need the FFE for 2-3 days,
Hello folks,
I am not sure that I will need an FFE, but in case I won't be able to land this
patch [0] tomorrow - I would like to ask for one in advance. I will need
the FFE for 2-3 days, depending mainly on fuel-web cores' availability.
Merging this patch has zero user impact, and I am also using it already
Hello folks,
This topic is about the configuration storage which will connect data sources
(nailgun/bareon/others) and orchestration. Right now we are developing
two projects that will overlap a bit.
I understand there is not enough context to dive into this thread right
away, but I will appreciate
Hi Oleg,
I want to mention that we are using a similar approach for the deployment engine;
the difference is that we are working not with components but with
deployment objects (these could be resources or tasks).
Right now all the data has to be provided by the user, but we are going to add
the concept of manage
On Wed, Oct 21, 2015 at 1:21 PM, Igor Kalnitsky
wrote:
> We can make bidirectional dependencies, just like our deployment tasks do.
I'm not sure that we are on the same page regarding the problem definition.
Imagine the case when we have an environment with the following set of roles:
1. standalone-rabbitmq
2
about when I wrote:
>
> > Or should we improve it somehow so it would work for some nodes,
> > and will be ignored for others?
>
> So I'm +1 for extending our meta information with such dependencies.
>
> Sincerely,
> Igor
>
> On Wed, Oct 21, 2015 at 12:51 PM,
Hi,
> Can we ignore the problem above and remove this limitation? Or should
> we improve it somehow so it would work for some nodes, and will be
> ignored for others?
>
I think that this validation needs to be accomplished in a different way,
we don't need 1 controller for the sake of 1 controller,
1
+1
On Tue, Sep 8, 2015 at 9:02 AM, Anastasia Urlapova
wrote:
> +1
>
> On Mon, Sep 7, 2015 at 6:30 PM, Tatyana Leontovich <
> tleontov...@mirantis.com> wrote:
>
>> Fuelers,
>>
>> I'd like to nominate Andrey Sledzinskiy for the fuel-ostf core team.
>> He’s been doing a great job in writing patches
Hi team,
I wasn't able to participate in the Fuel weekly meeting, so for those of you who
are curious
how to create roles with the fuel client - here is the documentation on this topic
[1].
And here is an example of how it can be used, together with granular deployment,
to
create new roles and add deployment logic
Hello,
I would vote for the 2nd, but I also think that we can generate the same
information, on merge for example, that will be printed during the first run,
and place it directly in the repository (maybe even the README?). I guess this is
what your 3rd approach is about?
So, can we go with both?
On Tue, Mar 3,
Hello,
On Tue, Mar 3, 2015 at 6:12 PM, Evgeniy L wrote:
> Solution [3] is to implement plugin manager as a separate service
> and move all of the complexity there, fuelclient will be able to use
> REST API to install/delete/update/downgrade plugins.
> In the same way as it's done for OSTF.
>
I
+1 for separate tasks/graph validation library
In my opinion we could even migrate the graph visualizer to this library, because
it is most useful during development, and requiring an installed Fuel with
nailgun feels a bit suboptimal
On Tue, Feb 17, 2015 at 12:58 PM, Kamil Sambor wrote:
> Hi all,
>
> I
On Mon, Feb 9, 2015 at 1:35 PM, Przemyslaw Kaminski
wrote:
> > Well, I think there should be a finished_at field anyway, why not
> > add it for this purpose?
>
> So you're suggesting to add another column and modify all tasks for
> this one feature?
Such things as timestamps should be on all tasks
On Mon, Feb 9, 2015 at 12:51 PM, Przemyslaw Kaminski wrote:
> Well, there are some problems with this solution:
> 1. No 'pick latest one with filtering to network_verify' handler is
> available currently.
>
Well, I think there should be a finished_at field anyway, so why not add it
for this purpose
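As a rough illustration of what I mean (a minimal sketch only - the model and
column names here are assumptions, not the actual nailgun schema):

# Hypothetical sketch: a finished_at timestamp on the Task model, which a
# "pick the latest network_verify result" query could then use.
import sqlalchemy as sa
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Task(Base):
    __tablename__ = 'tasks'

    id = sa.Column(sa.Integer, primary_key=True)
    name = sa.Column(sa.String(64), nullable=False)    # e.g. 'verify_networks'
    status = sa.Column(sa.String(16), nullable=False)  # 'ready', 'error', ...
    finished_at = sa.Column(sa.DateTime, nullable=True)

# Picking the latest finished network verification for a cluster would then be
# a simple query (session setup omitted):
#   session.query(Task).filter_by(name='verify_networks') \
#          .order_by(Task.finished_at.desc()).first()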
On Thu, Jan 15, 2015 at 6:20 PM, Vitaly Kramskikh
wrote:
> I want to discuss the possibility of adding a network verification status field
> for environments. There are 2 reasons for this:
>
> 1) One of the most frequent reasons for deployment failure is a wrong network
> configuration. In the current UI net
On Sat, Feb 7, 2015 at 9:42 AM, Andrew Woodward wrote:
> Dmitry,
>> thanks for sharing CLI options. I'd like to clarify a few things.
>>
>> > Also it is very important to understand that if a task is mapped to the
>> controller role, but the node where you want to apply that task doesn't have
>> this role - it w
> > Also it is very important to understand that if a task is mapped to the
> controller role, but the node where you want to apply that task doesn't have
> this role - it won't be executed.
> Is there a particular reason why we want to restrict a user from running an
> arbitrary task on a server, even if the server doesn't
> Thank you for the excellent run-down of the CLI commands. I assume this
> will make its way into the developer documentation? I would like to know if
> you could point me to more information about the inner workings of granular
> deployment. Currently it's challenging to debug issues related to g
Hello folks,
Not long ago we added the necessary commands to the fuel client to work with
the granular deployment configuration and API.
As you may know, the configuration is stored in fuel-library and uploaded
into the database during
bootstrap of the Fuel master. If you want to change/add some tasks right on the
master
> >> But why add another interface when there is one already (the REST API)?
>
> I'm OK if we decide to use the REST API, but of course there is a problem which
> we should solve, like versioning, which is much harder to support than
> versioning in core-serializers. Also do you have any ideas how it
Guys, is it a crazy idea to write tests for the deployment state on a node in
Python?
It could even be done in a unit-test fashion.
I mean there is no strict dependency on a tool from the Puppet world; what is
needed is access to the OS and shell, maybe some utils.
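Something along these lines (a minimal sketch, with purely illustrative service
names and paths - not an agreed test suite):

# Hypothetical deployment-state checks, runnable on the node itself with only
# the standard library (no dependency on serverspec or other Puppet-world tools).
import os
import socket
import subprocess
import unittest


class TestControllerState(unittest.TestCase):

    def test_config_file_present(self):
        # Example path - adjust to whatever the deployment task should put there.
        self.assertTrue(os.path.isfile('/etc/nova/nova.conf'))

    def test_service_is_running(self):
        # 'service <name> status' returns 0 when the service is up.
        rc = subprocess.call(['service', 'nova-api', 'status'])
        self.assertEqual(rc, 0)

    def test_api_port_is_listening(self):
        sock = socket.create_connection(('127.0.0.1', 8774), timeout=2)
        sock.close()


if __name__ == '__main__':
    unittest.main()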
>> What plans have Fuel Nailgun team for testing th
Andrew,
What should be sorted out? It is unavoidable that people will comment and
ask questions during the development cycle.
I am not sure that merging the spec as early as possible and then adding comments
and different fixes is a good strategy.
On the other hand we need to eliminate risks... but how merging
> Also I would like to mention that in plugins user currently can write
> 'roles': ['controller'],
> which means that the task will be applied on 'controller' and
> 'primary-controller' nodes.
> Plugin developer can get this information from astute.yaml file. But I'm
> curious if we
> should change
>>> unified hierarchy (like Fuel as CA for keys on controllers for different
>>> env's) then it will fit better than other options. If we implement 3rd
>>> option then we will reinvent the wheel with SSL in future. Bare rsync as
>>> storage for private k
> 1. as I mentioned above, we should have an interface, and if interface
> doesn't
> provide required information, you will have to fix it in two places,
> in Nailgun and in external-serializers, instead of a single place i.e.
> in Nailgun,
> another thing if astute.yaml is a bad interf
Hi folks,
I want to discuss the way we are working with generated keys for
nova/ceph/mongo and a few other components.
Right now we are generating keys on the master itself and then distributing
them to all nodes over the mcollective
transport. As you may know, we are in the process of making this process
describe
>
>
> It's not clear what problem you are going to solve with putting serializers
> alongside with deployment scripts/tasks.
>
I see two possible uses for a task-specific serializer:
1. Compute additional information for deployment based not only on what is
present in astute.yaml.
2. Request info
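To make it a bit more concrete, a task-specific serializer could look roughly
like this (the class and method names are hypothetical, not an existing nailgun
interface):

# Hypothetical per-task serializer: takes the generic data that would normally
# go to astute.yaml for a node and augments it with values this task needs.
import uuid


class CephKeySerializer(object):

    def serialize(self, node_data):
        # 1. Compute additional information not present in astute.yaml.
        serialized = dict(node_data)
        serialized['ceph'] = {
            'fsid': self._fsid_for(node_data['cluster_id']),
        }
        return serialized

    def _fsid_for(self, cluster_id):
        # Deterministic per-cluster id, purely illustrative.
        return str(uuid.uuid5(uuid.NAMESPACE_DNS, 'ceph-%s' % cluster_id))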
> I wouldn't differentiate tasks for primary and other controllers.
> "Primary-controller" logic should be controlled by task itself. That will
> allow to have elegant and tiny task framework ...
>
> --
> Best regards,
> Sergii Golovatiuk,
> Skype #golserge
>
On Tue, Jan 27, 2015 at 10:47 AM, Vladimir Kuklin
wrote:
> This is an interesting topic. As per our discussions earlier, I suggest
> that in the future we move to different serializers for each granule of our
> deployment, so that we do not need to drag a lot of senseless data into
> particular t
On Thu, Jan 22, 2015 at 7:59 PM, Evgeniy L wrote:
> The problem with merging is that it's usually not clear how the system
> performs the merge.
> For example you have the hash {'list': [{'k': 1}, {'k': 2}, {'k':
> 3}]}, and I want
> {'list': [{'k': 4}]} to be merged - what should the system do? Replace the
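To make the ambiguity concrete (plain Python, no Fuel code assumed), both of the
following behaviours are plausible readings of "merge":

# Two plausible ways to "merge" {'list': [{'k': 4}]} into the original hash.
base = {'list': [{'k': 1}, {'k': 2}, {'k': 3}]}
override = {'list': [{'k': 4}]}

# Behaviour 1: the override replaces the whole list.
replaced = dict(base)
replaced.update(override)
assert replaced == {'list': [{'k': 4}]}

# Behaviour 2: the override is appended to the existing list.
extended = {'list': base['list'] + override['list']}
assert extended == {'list': [{'k': 1}, {'k': 2}, {'k': 3}, {'k': 4}]}

# Without an explicit convention both are "merging", which is exactly why the
# merge semantics for redefined deployment data need to be spelled out.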
Hello all,
You may know that for the deployment configuration we are serializing an
additional prefix for the controller role (primary), with the goal of
controlling the deployment order (the primary controller should always be
deployed before the secondaries) and some conditions in the fuel-library code.
However, we cannot guarantee
> not to prolong single mode, I'd like to see it die. However we will
> need to be able to add, change, remove, or noop portions of the tasks
> graph in the future. Many of the plugins that can't currently be built
> would rely on being able to sub out parts of the graph. How is that
> going to fact
Hi guys,
I want to discuss the way we are working with deployment configuration that
was redefined for a cluster.
In case it was redefined via the API, we use that information instead of the
generated one.
With one exception: we will generate new repo sources and the path to the
manifest if we are using update
> 1) Verify network got failed with message Expected VLAN (not
> received) untagged at the interface Eth1 of controller and compute nodes.
>
> In our set-up Eth1 is connected to the public network, which we disconnect
> from public network while doing deployment operation as FUEL itself works
Swagger is not related to the test improvements, but we started to discuss it
here, so...
@Przemyslaw, how hard would it be to integrate it with the nailgun REST API
(web.py and the handlers hierarchy)?
Also, is there any way to use auth with Swagger?
On Mon, Dec 1, 2014 at 1:14 PM, Przemyslaw Kaminski
wrote:
>
>
>
> - environment_config.yaml should contain the exact config which will be
>   mixed into cluster_attributes. There is no need to implicitly generate any
>   controls like it is done now.
>
Initially I had the same thoughts and wanted to use it the way it is, but
now I completely agree with Evgeniy t
Is it possible to send HTTP requests from monit, e.g. for creating
notifications?
I scanned through the docs and found only alerts for sending mail.
Also, where will the token (username/pass) for monit be stored?
Or maybe there is another plan, without any API interaction?
On Thu, Nov 27, 2014 at 9:39 A
> I'm working on the Zabbix implementation which includes HA support.
>
> The Zabbix server should be deployed on all controllers in HA mode.
But will zabbix-server remain a role that the user is able to assign wherever
he wants?
If so, there will be no limitations on the role allocation strategy that the
user can use
I tried to reproduce this behavior with tasks.yaml:

# Deployment is required for controllers
- role: ['controller']
  stage: post_deployment
  type: puppet
  parameters:
    puppet_manifest: site.pp
    puppet_modules: "puppet/:/etc/puppet/modules"
    timeout: 360

And actually the plugin was built successfully
> I have nothing against using some 3rd party service. But I thought this
> was to be small -- disk monitoring only & notifying the user, not stats
> collecting. That's why I added the code to Fuel codebase. If you want
> external service you need to remember about such details as, say, duplicate
>
>
> When the interfaces are updated with data from the agent we attempt to
> match the MAC to an existing interface (
> https://github.com/stackforge/fuel-web/blob/master/nailgun/nailgun/network/manager.py#L682-L690).
> If that doesn't work we attempt to match by name. Looking at the data that
>
Guys, maybe we can use existing software, for example Sensu [1]?
Maybe I am wrong, but I don't like the idea of starting to write our own "small"
monitoring applications...
Also, something well designed and extendable could be reused for the statistics
collector
1. https://github.com/sensu
On Wed, Nov 12, 2014 at
Hi folks,
There was interesting research today on random NIC ordering for nodes in
the bootstrap stage, and in my opinion it deserves a separate thread...
I will try to describe what the problem is and several ways to solve it.
Maybe I am missing a simple way; if you see it - please participate.
Link
Hi guys,
A long time ago I made a patch [1] which added test distribution between
processes and databases. It was a simple py.test configuration which allows
us to reduce test execution time almost linearly; on my local machine
one test run (distributed over 4 cores) takes 250 seconds.
At that time
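For reference, the same idea in a minimal form with pytest-xdist (the database
naming scheme below is just an assumption, not the actual patch):

# conftest.py - give each pytest-xdist worker its own database, so tests
# distributed with "py.test -n 4" don't step on each other's data.
import os

import pytest


@pytest.fixture(scope='session')
def database_name():
    # pytest-xdist exports PYTEST_XDIST_WORKER (gw0, gw1, ...) for every worker
    # process; fall back to a single database for non-distributed runs.
    worker = os.environ.get('PYTEST_XDIST_WORKER', 'master')
    return 'nailgun_test_%s' % worker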
Not long ago we discussed the necessity of a power management feature in
Fuel.
What is your opinion on the power management support in Cobbler? I took a look
at the documentation [1] and the templates [2] that we have right now,
and it actually looks like we can make use of it.
The only issue is that power a
Hi guys,
I was trying to get rid of the static pool in Fuel and there are a couple of
concerns that I want to
discuss with you.
1. Do we want to get rid of the static pool completely (e.g. remove any notion
of the static pool
in nailgun, fuelmenu)? Or will it be enough to allow overlapping and leave the
possibil
>
> Let's do the same for Fuel. Frankly, I'd say we could take OpenStack
> standards as is and use them for Fuel. But maybe there are other opinions.
> Let's discuss this and decide what to do. Do we actually need those
> standards at all?
>
> Agree that we can take the OpenStack standards as an example,
Hello everyone,
I want to raise concerns about the progress bar and its usability.
In my opinion the current approach has several downsides:
1. No valuable information.
2. Very fragile - you need to change code in several places not to break it.
3. Will not work with pluggable code.
Log parsing works under o
Hi team,
After several discussions I want to propose a generic format
for describing deployment tasks. This format is expected to cover
all tasks (e.g. pre-deployment and post-deployment), and it should also cover
different actions like upgrade/patching:

action: upload_file
id: upload_astute
roles: '*'
parameters
If there are no checkboxes (read: configuration) and the plugin is installed,
all deployment tasks will be applied
to every environment - but why do you think that there will be no checkboxes
in most cases?
Right now we already have roughly 2 types of plugins (extensions), classified
by their usage of settings t
new ceph nodes, and there are no ready ceph nodes in the cluster
> hence you should install ceph-mon on the controllers.
>
> The same for rabbitmq, if there is new controller, run rabbit
> reconfiguration on
> non-controller nodes.
>
> Thanks,
>
> On Tue, Oct 7, 201
Hi, I would use the 1st option.
Consider plugins which are mutually exclusive, where you need to disable the
default tasks if a plugin is selected.
As an example - neutron backend plugins:

neutron_backend:
  choices:
    - ovs
    - contrail
  value: ovs

Right now we are using ovs as the neutron_backend, but what w
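Purely to illustrate the point (none of these names exist in nailgun - they are
hypothetical), the selected backend could then decide which default tasks get
scheduled:

# Hypothetical task filtering driven by the selected neutron backend.
DEFAULT_TASKS = ['netconfig', 'openvswitch', 'neutron-l3-agent']
CONTRAIL_TASKS = ['netconfig', 'contrail-vrouter']


def tasks_for(settings):
    backend = settings['neutron_backend']['value']
    if backend == 'contrail':
        # The mutually exclusive plugin replaces the default ovs tasks.
        return CONTRAIL_TASKS
    return DEFAULT_TASKS


settings = {'neutron_backend': {'choices': ['ovs', 'contrail'], 'value': 'ovs'}}
assert tasks_for(settings) == DEFAULT_TASKS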
Hi folks,
I want to discuss cluster reconfiguration scenarios; I am aware of 2 such
bugs:
- ceph-mon is not installed on controllers if the cluster was initially
deployed without ceph-osd
- the config with rabbitmq hosts is not updated on non-controller nodes after
additional controllers are added to the cluster [1]
Hi,
As I understood, you want to store some mappings of tags to hosts in the
database, but then you need to sort out an API
for registering hosts and/or a discovery mechanism for such hosts. It is
quite complex.
It may be useful, but in my opinion it would be better to have a simpler/more
flexible variant.
Fo
uppetlabs.com/projects/mcollective-plugins/wiki/FactsFacterYAML
> [1]
> https://docs.puppetlabs.com/mcollective/reference/ui/filters.html#fact-filters
> [2] https://docs.puppetlabs.com/mcollective/reference/plugins/facts.html
>
>
> On Wed, Sep 10, 2014 at 6:04 PM, Dmitriy Shulyak
>
Hi folks,
Some of you may know that there is ongoing work to achieve a kind of
data-driven orchestration
for Fuel. If this is new to you, please get familiar with the spec:
https://review.openstack.org/#/c/113491/
Knowing that running an arbitrary command on nodes will probably be the most
used type of
orchestration
Hi team,
I want to discuss the benefits of using host networking [1] for Docker
containers on the master node.
This feature was added in Docker 0.11 and basically means reusing the host
networking stack, without
creating a separate namespace for each container.
In my opinion it will result in a much more stable
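Just to show what this means in practice (using the modern Docker SDK for
Python purely as an illustration - not the tooling we use on the master node):

# With network_mode='host' the container shares the host's network stack:
# no separate namespace and no port mapping; services bind directly to host ports.
import docker

client = docker.from_env()
client.containers.run('nginx:alpine', detach=True, network_mode='host')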
Hi,
1. There are several incompatibilities with the network checker in 5.0 and 5.1,
mainly caused by the introduction of multicast verification.
An issue with additional release information, which is easy to resolve by
excluding multicast on a 5.0 environment:
[1] https://bugs.launchpad.net/fuel/+bug/1342814
Issue
als
> are needed?
>
>
> 2014-06-25 13:02 GMT+04:00 Dmitriy Shulyak :
>
> Looks like we will stick to #2 option, as most reliable one.
>>
>> - we have no way to know that openrc is changed, even if some scripts
>> relies on it - ostf should not fail with au
Looks like we will stick to the #2 option, as the most reliable one.
- we have no way to know that openrc has changed; even if some scripts
rely on it, OSTF should not fail with an auth error
- we can create an ostf user in the post-deployment stage, but I heard that some
ceilometer tests relied on the admin user, al
> to represent received data
>> * interactive mode: cliff makes possible to provide a shell mode, just
>> like psql do
>>
>> Well, I vote to use cliff inside fuel client. Yeah, I know, we need to
>> rewrite a lot of code, but we
>> can do it step-by-step.
>&g
Hi folks,
I am wondering what our story/vision is for plugins in the fuel client [1]?
We could benefit from using cliff [2] as a framework for the fuel CLI; apart
from common code
for building CLI applications on top of argparse, it provides a nice feature
that allows
dynamically adding actions by means of entry points
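A minimal sketch of how that could look for the fuel client (the entry-point
namespace and command here are made up for illustration):

# In the plugin's setup.py, the command is registered through a setuptools
# entry point that cliff's CommandManager can discover, e.g.:
#
#   entry_points={
#       'fuelclient.actions': [      # namespace is an assumption
#           'node-list = myplugin.commands:NodeList',
#       ],
#   }
#
from cliff.command import Command


class NodeList(Command):
    """List nodes known to this (hypothetical) plugin."""

    def get_parser(self, prog_name):
        parser = super(NodeList, self).get_parser(prog_name)
        parser.add_argument('--env', type=int, help='filter by environment id')
        return parser

    def take_action(self, parsed_args):
        # Real code would call the nailgun REST API; this only demonstrates how
        # a dynamically loaded action plugs into the CLI.
        self.app.stdout.write('nodes for env %s\n' % parsed_args.env)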
turity level. Thanks
>
> ~Sergii
>
>
> On Wed, Jun 11, 2014 at 12:48 PM, Dmitriy Shulyak
> wrote:
>
>> Actually I am proposing Salt as an alternative; the main reason - Salt is
>> a mature, feature-full orchestration solution that is well adopted even by
>> our intern
l [1], do we
> really want to have some intermediate steps (I mean salt) to do it?
>
> [1] https://wiki.openstack.org/wiki/Mistral
>
>
> On Wed, Jun 11, 2014 at 10:38 AM, Dmitriy Shulyak
> wrote:
>
>> Yes, in my opinion salt can completely replace
>> astute/mcoll
gt; Can you please clarify, does the suggested approach implies that we can
> have both puppet & SaltStack? Even if you ever switch to anything
> different, it is important to provide a smooth and step-by-step way for it.
>
>
>
> On Mon, Jun 9, 2014 at 6:05 AM, Dmitriy Shulyak
&
Hi folks,
I know that some time ago SaltStack was evaluated to be used as the orchestrator
in Fuel, so I've prepared an initial specification that addresses the basic
points of integration and the general requirements for an orchestrator.
In my opinion SaltStack perfectly fits our needs, and we can benefit f
Created spec https://review.openstack.org/#/c/94907/
I think it is still WIP, but it would be nice to hear some comments/opinions
On Thu, May 22, 2014 at 1:59 AM, Robert Collins
wrote:
> On 18 May 2014 08:17, Miller, Mark M (EB SW Cloud - R&D - Corvallis)
> wrote:
> > We are considering the follo
>
>
>> HA-Proxy version 1.4.24 2013/06/17 What was the reason this approach
>> was dropped?
>>
>
> IIRC the major reason was that having 2 services on same port (but
> different interface) would be too confusing for anyone who is not aware
> of this fact.
>
>
The major part of the documentation for haproxy
Adding haproxy (or keepalived with LVS for load balancing) will require
binding haproxy and the OpenStack services to different sockets.
AFAIK there are 3 approaches that TripleO could go with.
Consider a configuration with 2 controllers:

haproxy:
  nodes:
    - name: controller0
      ip:
Hi Marios, thanks for raising this.
There is an in-progress blueprint that should address some issues with neutron
HA deployment -
https://blueprints.launchpad.net/neutron/+spec/l3-high-availability.
Right now the neutron-dhcp agent can be configured as active/active,
but the l3-agent and metadata-agent st
Hi,
There are a number of additional network verifications that could improve
the troubleshooting experience or even cluster performance, like:
1. multicast group verification for corosync messaging
2. network connectivity with jumbo packets
3. L3 connectivity verification
4. some fencing v