Thanks, all.
The issue got resolved.
I am not sure whether it was the changes I made that resolved the issue,
or something else.
I made the following changes in nova.conf on both nodes (controller node and compute
node):
scheduler_default_filters=AllHostsFilter
ram_allocation_ratio=3.0
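For context, a small sketch of what ram_allocation_ratio=3.0 actually does: the scheduler treats each host as having three times its physical RAM available. The 16384 MB figure below is just an example, not a value from this thread.

```shell
# Illustration only: how the scheduler scales RAM with ram_allocation_ratio=3.0.
physical_mb=16384            # example host with 16 GiB of physical RAM
ratio=3                      # ram_allocation_ratio (integer here for shell math)
schedulable_mb=$((physical_mb * ratio))
echo "$schedulable_mb"       # total MB the scheduler considers available
```

So a request that previously failed the RAM check may now pass; note this can mask, rather than fix, an underlying capacity problem.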
Regards,
On Thu, May
Hi,
On Thu, May 12, 2016 at 2:17 PM, Chinmaya Dwibedy wrote:
> Upon trying to Install the rdo-release-kilo package to enable the RDO
> repository using
Fedora can install OpenStack, and I even do so for testing... but I stay
at F22 and F23 (the current releases). But since I do development on those
I u
Hi,
Thank you Gerard and Remo for your responses and suggestions. I will try with
CentOS 7/EL7. I believe OpenStack and Fedora distribution release times
are independent of each other. Looking into
https://repos.fedorapeople.org/repos/openstack/openstack-kilo/, it appears
that rdo-release-kil
Security groups should be fine, since the default is to accept all traffic internally. You
need to make sure your router has all the networks attached so it can route between them. If not, you
can do a workaround like creating a static route, etc.
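A hedged sketch of that static-route workaround, using the Mitaka-era neutron CLI; the router name, destination CIDR, and nexthop below are made-up placeholders, not values from this thread.

```shell
# Hypothetical example: send traffic for a second private network via a
# known nexthop on the router. Replace router1 and the addresses with yours.
neutron router-update router1 --routes type=dict list=true \
    destination=10.0.1.0/24,nexthop=10.0.0.1
```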
Remo
Sent from iPhone
> On 11 May 2016, at 22:26, Jagga Soorma
It is possible; I have set it up today for a test without any issue on both
Liberty and Mitaka.
Sent from iPhone
> On 11 May 2016, at 21:44, Jagga Soorma
> wrote:
>
> Hey Guys,
>
> I am trying to set up a network within a project with a single router
> connected to 2 priv ne
I am running Kilo, but that should be OK. Did you have to do anything
special to create this environment, or just the basic stuff? Any changes
needed on the security groups (I doubt it)? Just wondering what might be
causing this setup not to work for me when the single network setup works
just fine.
If this is not a production setup, try recreating or updating your nova DB:
su -s /bin/sh -c "nova-manage db sync" nova
And then restart all relevant services on the controller node.
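To make "restart all relevant services" concrete, a sketch assuming Ubuntu-style service names; adjust for your distro and release.

```shell
# Controller node: sync the DB, then bounce the nova services.
su -s /bin/sh -c "nova-manage db sync" nova
service nova-api restart
service nova-scheduler restart
service nova-conductor restart
```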
I executed the following command on the scheduler log:
grep -E 'WARNING\b' /var/log/nova/nova-scheduler.log -C 5
The output is:
***
root@controller:/var/lib/nova/instances# grep -E 'WARNING
Irfan,
Your compute host might not be able to communicate with the controller
node. In order to identify the root cause, try isolating them into
different regions first, spin up cirros instance on them one by one and
then find out in the relevant logs.
Ideally, logs in your compute hosts shoul
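For the communication check, a minimal sketch run from the compute host; the hostname and ports are assumptions ('controller' resolving to your controller, 5672 for RabbitMQ, 8774 for nova-api), not details from this thread.

```shell
# From the compute host: can we reach the controller's message bus and API?
ping -c 3 controller
nc -zv controller 5672   # AMQP (RabbitMQ)
nc -zv controller 8774   # nova-api
```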
You will find these two links very useful:
1. http://docs.openstack.org/icehouse/training-guides/content/associate-computer-node.html#associate-vm-provisioning-indepth
2. https://ilearnstack.com/2013/04/26/request-flow-for-provisioning-instance-in-openstack/
On Thu, May 12, 2016 a
Hi,
I wanted to know how a VM is created in OpenStack. There are so many
components in each service that I am getting confused. I understand at a
high level that nova-scheduler will find the compute host, nova-compute
will create the VM, and nova interacts with neutron for port allocation. But
I wan
On 05/11/2016 11:08 AM, schmitt wrote:
Hi,
I'm implementing the feature of "Identity Provider Specific WebSSO" on
RHEL7+RHOSP8,
according to the document:
http://docs.openstack.org/developer/keystone/configure_federation.html.
In the part of "Configure Apache to use a federation capable
auth
Thanks.
No, I am not able to create an instance even using the tiny flavor. Before
integrating the Ironic service, I was able to create instances and it was
working fine. The issues started coming when Ironic came into the picture.
As I mentioned earlier, I have two compute nodes .. one for VMs and one fo
Try with a tiny image and see what you get. If you suspect that the
available RAM is the issue, use the tiny flavor to begin with and see what
happens. If the tiny one is OK, you can at least remove some of the factors and
focus on your baremetal.
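A sketch of that tiny-flavor test with the Kilo-era nova CLI; the image name and net-id are placeholders.

```shell
# Boot a minimal instance to rule RAM in or out as the limiting factor.
nova boot --flavor m1.tiny --image cirros --nic net-id=<net-uuid> test-tiny
nova show test-tiny    # check whether status goes ACTIVE or ERROR
```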
Thanks.
Tong Li
IBM Open Technology
Building 501/B20
Thanks.
I observed that the instances_path option was not in the nova.conf file,
so I added it and restarted the service, but the issue still persists.
As per the log, the ExactRamFilter filter has some issue.
I have created one flavor called baremetal. So does that mean that the
RAM which is assign
There is a new repo for RDO; I would check that out.
Sent from iPhone
> On 11 May 2016, at 10:28, Gerard Braad
> wrote:
>
> Hi,
>
>> On Thu, May 12, 2016 at 1:04 AM, Remo Mattei wrote:
>> edit your packstack answer file and say no to mysql if you have already
>> installed
Hi,
I have a 4-node setup of OpenStack Liberty: 1 controller, 2 compute nodes,
and 1 network node. I did an apt-get install of barbican-api, barbican-worker,
and barbican-keystone-listener and installed the components. The deployment is
pointing to the Ubuntu Liberty repository. The database used by all the
services
Hi,
On Thu, May 12, 2016 at 1:04 AM, Remo Mattei wrote:
> edit your packstack answer file and say no to mysql if you have already
> installed it.
the issue is not only with mysql; it is also caused by a dependency of
packstack itself: 'openstack-puppet-modules' (RDO repo) versus what is
provided by F21
Fuel-mirror uses Packetary to download the necessary packages.
In turn, Packetary gets a list of mandatory packages, parses
their metadata, recursively resolves all their dependencies,
and then downloads all necessary packages. So, it is kind of
self-validating. If there were no errors while cloning
Edit your packstack answer file and say no to mysql if you have already
installed it.
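Concretely, the answer-file change being suggested is something like the fragment below; the key name comes from packstack's generated answer file, so verify it against your own copy.

```shell
# Fragment of the packstack answer file: skip installing MariaDB/MySQL
# because it is already present on this host.
CONFIG_MARIADB_INSTALL=n
```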
> On May 11, 2016, at 09:22, Chinmaya Dwibedy wrote:
>
> Hi All,
>
> I am getting the below issue while trying to install openstack-packstack on
> Fedora 21. Can anyone please let me know what might be th
Hi,
On Thu, May 12, 2016 at 12:21 AM, Chinmaya Dwibedy wrote:
> Fedora 21.
F21 is EOL. This means no updates and no security fixes.
Also, issues caused by dependencies in the OS will not likely be solved.
https://fedoraproject.org/wiki/End_of_life
Please consider upgrading or using CentOS7/EL7.
Hi All,
I am getting the below issue while trying to install openstack-packstack on
Fedora 21. Can anyone please let me know what might be the cause and its
solution? Thanks in advance for your support.
I used the below procedure:
a) systemctl disable firewalld
b) systemctl disable Netwo
Hi,
I'm implementing the feature of "Identity Provider Specific WebSSO" on
RHEL7+RHOSP8,
according to the document:
http://docs.openstack.org/developer/keystone/configure_federation.html.
In the part of "Configure Apache to use a federation capable authentication
method",
I choose Mellon pro
Hi everyone,
At the beginning of OpenStack we were using wiki.openstack.org as
our default community lightweight information publication platform.
There were/are lots of things in there: reference information that a
couple of people struggled to keep up to date (and non-vandalized), old
pag
I had similar issues. After a lot of searching, I finally figured out that the
compute node uses a location which has very little space left. By default,
instances_path points to /var/lib/nova/instances; on some of the
machines you may have very little space there if you use an SSD just to
hold you
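A quick way to check free space at that location, sketched below; the path is nova's default, and the fallback to / just keeps the command from failing on a box without nova installed.

```shell
# Check free space where nova stores instance disks by default.
path=/var/lib/nova/instances
[ -d "$path" ] || path=/        # fall back so the check still runs anywhere
df -h "$path"
```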
Can you please check the q-svc and q-agt logs and ensure no issues are observed?
On Wed, May 11, 2016 at 6:17 PM, Irfan Sayed wrote:
> in the nova.conf file i have commented out this and tried creating
> instances but still the issue.
>
> #scheduler_use_baremetal_filters=True
> #scheduler_use_baremet
In the nova.conf file I have commented this out and tried creating
instances, but the issue is still there.
#scheduler_use_baremetal_filters=True
#scheduler_use_baremetal_filters=True
Is there anything else I have to comment out as well?
Regards,
On Wed, May 11, 2016 at 5:49 PM, Chris Buccella wrote:
> > now i d
> Now I don't understand the meaning of "ExactRamFilter returned 0
hosts"
http://docs.openstack.org/developer/nova/api/nova.scheduler.filters.exact_ram_filter.html
You could try dropping the filter from scheduler_default_filters in
nova.conf
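If you do drop it, the line in nova.conf would look something like the fragment below; the remaining filters are illustrative, not the defaults for your release, so keep whatever your deployment already lists, minus ExactRamFilter.

```shell
# nova.conf [DEFAULT] section: ExactRamFilter removed from the chain.
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter
```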
-Chris
On Wed, May 11, 2016 at 12:11 AM, Irf
Hi,
we are running OpenStack Mitaka on CentOS + RDO and use Ceph Infernalis as our
main (and only) storage engine (glance, cinder, cinder-backup).
As far as I can see, Change 205282 [1] made it into Mitaka (stable), so my
current task is making snapshotting faster. Our hypervisors boot from 1