On 11/30/2018 05:52 PM, Mike Carden wrote:
Have you set the placement_randomize_allocation_candidates CONF option
and are still seeing the packing behaviour?
No, I haven't. Where would be the place to do that? In a nova.conf
somewhere that the nova-scheduler containers on the controlle
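If it helps, the option Jay refers to goes in the nova.conf read by nova-scheduler, under the [placement] section; a minimal sketch:

```ini
[placement]
# Shuffle the allocation candidates returned by placement so the
# scheduler does not always pack onto the same hosts.
randomize_allocation_candidates = true
```

After changing it, the nova-scheduler service (or container) needs a restart.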
On 11/30/2018 02:53 AM, Mike Carden wrote:
I'm seeing a similar issue in Queens deployed via tripleo.
Two x86 compute nodes and one ppc64le node and host aggregates for
virtual instances and baremetal (x86) instances. Baremetal on x86 is
working fine.
All VMs get deployed to compute-0. I can
On 11/28/2018 02:50 AM, Zufar Dhiyaulhaq wrote:
Hi,
Thank you. I was able to fix this issue by adding this configuration to the
nova configuration file on the controller node.
driver=filter_scheduler
That's the default:
https://docs.openstack.org/ocata/config-reference/compute/config-options.html
https://bugs.launchpad.net/neutron/+bug/1777640
Best,
-jay
On 11/06/2018 08:21 AM, Terry Lundin wrote:
Hi all,
I've been struggling with instances suddenly not being able to fetch
metadata from Openstack Queens (this has worked fine earlier).
Newly created VMs fail to connect to the magic i
On 09/07/2018 03:46 PM, Hang Yang wrote:
Hi there,
I'm new to the DIB tool and ran into an issue when using the 2.16.0 DIB tool
to build a CentOS-based image with the pip-and-virtualenv element. It failed
at
https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/pip-and-
On 09/17/2018 09:39 AM, Peter Penchev wrote:
Hi,
So here's a possibly stupid question - or rather, a series of such :)
Let's say a company has two (or five, or a hundred) datacenters in
geographically different locations and wants to deploy OpenStack in both.
What would be a deployment scenario
" or possibly "shelve and then offload an instance", then that
is a different thing, and in both of *those* cases, resources are
released on the compute host.
Best,
-jay
Quoting Jay Pipes:
On 08/30/2018 10:54 AM, Eugen Block wrote:
Hi Jay,
You need to set your ram_allocation_
On 08/23/2018 11:01 PM, 余婷婷 wrote:
Hi:
Sorry for bothering everyone. Now I updated my OpenStack to Queens, and
use the nova-placement-api to provide resources.
When I use "/resource_providers/{uuid}/inventories/MEMORY_MB" to
update the memory_mb allocation_ratio, it succeeds. But after some
m
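For background on what that allocation_ratio does: placement derives the usable amount of an inventory from its total, reserved, and allocation_ratio fields. A minimal sketch of that arithmetic (the function name is illustrative, not actual Nova code):

```python
def effective_capacity(total, reserved, allocation_ratio):
    """Usable capacity the way placement computes it:
    (total - reserved) * allocation_ratio."""
    return int((total - reserved) * allocation_ratio)

# A 64 GiB (65536 MB) host reserving 512 MB, with no overcommit:
print(effective_capacity(65536, 512, 1.0))  # 65024
# The same host with the default 1.5 memory overcommit ratio:
print(effective_capacity(65536, 512, 1.5))  # 97536
```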
On 08/30/2018 10:54 AM, Eugen Block wrote:
Hi Jay,
You need to set your ram_allocation_ratio nova.CONF option to 1.0 if
you're running into OOM issues. This will prevent overcommit of memory
on your compute nodes.
I understand that, the overcommitment works quite well most of the time.
It j
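A sketch of the setting Jay describes, in nova.conf on each compute node (in releases of that era the option sits in [DEFAULT]):

```ini
[DEFAULT]
# 1.0 disables memory overcommit: the scheduler will never place more
# instance RAM on a host than it physically has.
ram_allocation_ratio = 1.0
```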
On 08/30/2018 10:19 AM, Eugen Block wrote:
When does Nova apply its filters (Ram, CPU, etc.)?
Of course at instance creation and (live-)migration of existing
instances. But what about existing instances that have been shut down and
in the meantime more instances on the same hypervisor have been
On 08/27/2018 09:40 AM, Risto Vaaraniemi wrote:
Hi,
I tried to migrate a guest to another host but it failed with a
message saying there's not enough capacity on the target host even
though the server should be nearly empty. The guest I'm trying to
move needs 4 cores, 4 GB of memory and 50 GB of
alidHost(reason="")
NoValidHost: No valid host was found.
: NoValidHost_Remote: No valid host was found.
2018-08-08 09:28:36.331 1648 WARNING nova.scheduler.utils
[req-ef0d8ea1-e801-483e-b913-9148a6ac5d90
2499343cbc7a4ca5a7f14c43f9d9c229 3850596606b7459d8802a72516991a19 -
default default] [ins
On 08/08/2018 09:37 AM, Cody wrote:
On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
mos
On 08/08/2018 07:19 AM, Bernd Bausch wrote:
I would think you don't even reach the scheduling stage. Why bother
looking for a suitable compute node if you exceeded your quota anyway?
The message is in the conductor log because it's the conductor that does
most of the work. The others are just sl
On 08/07/2018 10:57 AM, Cody wrote:
Hi everyone,
I intentionally triggered an error by launching more instances than
is allowed by the 'cpu_allocation_ratio' set on a compute node. When it
comes to logs, the only place that contained a clue to explain the launch
failure was in the nova-conducto
On 07/16/2018 10:30 AM, Toni Mueller wrote:
Hi Jay,
On Fri, Jul 06, 2018 at 12:46:04PM -0400, Jay Pipes wrote:
There is no current way to say "On this dual-Xeon compute node, put all
workloads that don't care about dedicated CPUs on this socket and all
workloads that DO care about
Hi Tony,
The short answer is that you cannot do that today. Today, each Nova
compute node is either "all in" for NUMA and CPU pinning or it's not.
This means that for resource-constrained environments like "The Edge!",
there are not very good ways to finely divide up a compute node and make
On 07/02/2018 09:45 AM, Houssam ElBouanani wrote:
Hi,
I have recently finished installing a minimal OpenStack Queens
environment for a school project, and was asked whether it is possible
to deploy an additional compute node on bare metal, aka without an
underlying operating system, in order
On 05/02/2018 04:39 PM, Torin Woltjer wrote:
> There is no HA behaviour for compute nodes.
>
> You are referring to HA of workloads running on compute nodes, not HA of
> compute nodes themselves.
It was a mistake for me to say HA when referring to compute and
instances. Really I want to avoi
On 05/02/2018 02:43 PM, Torin Woltjer wrote:
I am working on setting up Openstack for HA and one of the last orders of
business is getting HA behavior out of the compute nodes.
There is no HA behaviour for compute nodes.
Is there a project that will automatically evacuate instances from a
dow
On 01/17/2018 12:46 PM, Jorge Luiz Correa wrote:
Hi, I would like some help understanding what each field means in the
output of the command 'openstack hypervisor stats show':
It's an amalgamation of legacy information that IMHO should be
deprecated from the Compute API.
FWIW, the "implemen
On 01/15/2018 12:58 PM, Satish Patel wrote:
But Fuel is an active project, isn't it?
https://docs.openstack.org/fuel-docs/latest/
No, it is no longer developed or supported.
-jay
___
Mailing list: http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack
ironic, a nova, a neutron in an OpenStack environment
Does everything go well?
Sure, that should work just fine.
Best,
-jay
Thanks
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, January 02, 2018 8:59 PM
To: openstack@lists.openstack.org
Subject: Re
On 01/02/2018 06:09 AM, Guo James wrote:
Hi guys
I know that Ironic supports multiple nova-computes.
But I am not sure whether OpenStack supports the situation where every
nova-compute has its own unshared Ironic,
and these Ironics share one Nova and one Neutron.
I'm not quite following you... what do you mean
On 12/01/2017 08:57 AM, si...@turka.nl wrote:
Hi,
I have created a flavor with the following metadata:
quota:disk_write_bytes_sec='10240'
This should limit writing to disk to 10240 bytes per second (10 KB/s). I also tried it
with a higher number (100MB/s).
Using the flavor I have launched an instance and
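For comparison, a hedged example of setting that flavor property with the openstack CLI (the flavor name is made up; 10485760 bytes/s = 10 MB/s):

```console
$ openstack flavor set m1.small \
    --property quota:disk_write_bytes_sec=10485760
```

Note that 10240 bytes/s is only 10 KB/s, low enough that ordinary guest filesystem activity makes the limit hard to observe cleanly.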
eview.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:validate_provider_summaries
You will need to wait for the Queens release for the complete
traits-based scheduling functionality to be operational.
Best,
-jay
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@
On 11/16/2017 12:06 AM, Ramu, MohanX wrote:
Hi All,
I have a use case that I need to apply some filter (Custom traits)
while Placement API fetch the resource providers for launching instance.
So that I can have a list of resource providers which meet my
condition/filter/validation. The valida
On 10/06/2017 10:18 AM, Ramu, MohanX wrote:
Hi Jay,
I am able to create custom traits without any issue. I want to associate
some value with those traits.
Like I mentioned in the previous email, that's not how traits work :)
A trait *is* the value that is associated with a resource provider.
Bes
tion": 1,
"traits": [
"CUSTOM_XYZ"
]
}
BTW, a great place to see examples of both good and bad API usage is to
check out the Gabbit functional API tests for the placement API. Here is
the set of tests for the traits functionality:
https://github.com/ope
Rock on :)
On 10/04/2017 09:33 AM, Ramu, MohanX wrote:
Thank you so much Jay. After adding this header, it's working fine.
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 11:36 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits
Against the Pike placement API endpoint, make sure you send the
following HTTP header:
OpenStack-API-Version: placement 1.10
Best,
-jay
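A hedged sketch of what such a request could look like with curl (the endpoint URL and port are assumptions; placement commonly listens on 8778):

```console
$ curl -s \
    -H "X-Auth-Token: $TOKEN" \
    -H "OpenStack-API-Version: placement 1.10" \
    http://controller:8778/traits
```

Without that header the API defaults to microversion 1.0, which predates the traits endpoints.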
On 10/03/2017 02:01 PM, Ramu, MohanX wrote:
Please refer attached original one.
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com
Best,
-jay
Thanks & Regards,
Mohan Ramu
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, October 3, 2017 9:26 PM
To: openstack@lists.openstack.org
Subject: Re: [Openstack] Traits is not working
On 10/03/2017 11:34 AM, Ramu, MohanX wrote:
Hi,
We have im
On 10/03/2017 11:34 AM, Ramu, MohanX wrote:
Hi,
We have deployed the OpenStack Ocata and Pike releases. We are able to consume
the Placement resource providers API, but not the resource class APIs.
I tried to run the Traits API in the Pike setup too. I am not able to run any
Traits API.
As per the O
On 09/26/2017 10:20 AM, Volodymyr Litovka wrote:
Hi Jay,
I know about this way :-) but Pike introduced the ability to resize attached
volumes:
"It is now possible to signal and perform an online volume size change
as of the 2.51 microversion using the|volume-extended|external event.
Nova will p
Detach the volume, then resize it, then re-attach.
Best,
-jay
On 09/26/2017 09:22 AM, Volodymyr Litovka wrote:
Colleagues,
I can't find a way to resize an attached volume. I'm on Pike.
As far as I understand, it needs to be supported in Nova, because
Cinder needs to check with Nova whether it's
at do you think Jay?
On Wed, Sep 6, 2017 at 9:22 PM, Jay Pipes wrote:
On 09/06/2017 01:21 AM, Weichih Lu wrote:
Thanks for your response.
Is this mean if I want to create an instance with flavor: 16G
memory (hw:mem_page_size=lar
than 50 if you want to launch a
16GB instance on a host with 64GB of RAM. Try reserving 32 1GB huge pages.
Best,
-jay
2017-09-06 1:47 GMT+08:00 Jay Pipes:
Please remember to add a topic [nova] marker to your subject line.
Answer below.
On 09/0
Please remember to add a topic [nova] marker to your subject line.
Answer below.
On 09/05/2017 04:45 AM, Weichih Lu wrote:
Dear all,
I have a compute node with 64GB RAM. And I set 50 hugepages with 1GB
hugepage size. I used the command "free"; it shows free memory is about
12GB. And free hugepag
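As a sketch of the suggestion above (reserving 32 rather than 50 1GB pages), the pages are typically reserved on the kernel command line; the exact file and syntax vary by distro:

```console
# e.g. in /etc/default/grub, then update-grub and reboot
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=32"
```

That leaves 64 - 32 = 32 GB of normal 4K-page memory for the host and for non-hugepage guests.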
On 08/18/2017 08:50 AM, Divneet Singh wrote:
Hello, I have been trying to install Ocata on Ubuntu 16.04; for the time
being I have 2 nodes. I just can't figure this out.
I have set up the Placement API, but get an error after restarting the nova
service or rebooting
" 017-08-18 08:27:41.496 1422 WARNING nova.sched
On 07/23/2017 07:51 PM, Manuel Sopena Ballesteros wrote:
Dear Openstack community,
We are a medical research institute and we have been running HPC for
many years, we started playing with Openstack a few months ago and we
like its flexibility to deploy multiple environments. However we are
q
ows available if PCI device
still exist in the same address?
Because when I remove the GPU card, I replace it with an SFP+
network card in the same slot.
So when I type lspci the SFP+ card stays at the same address.
But it still doesn't
y much appreciate it.
All the best,
-jay
Thanks Jay!
-Peter
On Fri, Jul 7, 2017 at 12:50 PM, Jay Pipes wrote:
On 07/07/2017 12:30 PM, Peter Doherty wrote:
Hi,
If I'm interpreting this correctly, nova compute is calling
get
On 07/07/2017 12:30 PM, Peter Doherty wrote:
Hi,
If I'm interpreting this correctly, nova compute is calling
get_diagnostics on all instances, including ones currently in a shutdown
state. And then it throws an exception, and adds an entry into the
instance_faults table in the database.
no
Jay,
The status of the "removed" GPU still shows as "Available" in
pci_devices table.
2017-07-07 8:34 GMT+08:00 Jay Pipes:
Hi again, Eddie :) Answer inline...
On 07/06/2017 08:14 PM, Eddie Yen wrote:
Hi everyone,
Hi again, Eddie :) Answer inline...
On 07/06/2017 08:14 PM, Eddie Yen wrote:
Hi everyone,
I'm using OpenStack Mitaka version (deployed from Fuel 9.2)
In present, I installed two different model of GPU card.
And wrote these information into pci_alias and pci_passthrough_whitelist
in nova.conf
On 07/06/2017 02:17 AM, Eddie Yen wrote:
Hi, now I got another problem.
I have two models of GPU devices and I set both in pci_alias and
pci_passthrough_whitelist on the controller and compute node (with these two
GPUs).
Now I removed one of the GPUs and deleted its data in nova.conf, then restarted
nova-a
On 07/05/2017 04:18 AM, Eddie Yen wrote:
Hi everyone,
I'm using OpenStack Mitaka (which deployed from Fuel 9.2) and doing GPU
things.
The problem I have is that I need to delete the current instance which contains
the GPU to release the device if I want to assign the GPU to another new
instance temporarily.
I'll got
Awesome, thanks Jose!
On 06/26/2017 11:12 PM, Jose Renato Santos wrote:
Jay
I created a bug report as you suggested:
https://bugs.launchpad.net/nova/+bug/1700684
Thanks for your help
Best
Renato
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Monday, June 26
On 06/26/2017 02:27 PM, Jose Renato Santos wrote:
Jay,
Thanks for your response
Let me clarify my point.
I am not expecting to see a change in the updated_at column of a server when
the rules of its security group change.
I agree that would be a change to be handled by the Neutron Api, and wou
On 06/26/2017 12:58 PM, Jose Renato Santos wrote:
Hi
I am accessing the nova api using the gophercloud SDK
https://github.com/rackspace/gophercloud
I am running Openstack Newton installed with Openstack Ansible
I am accessing the “List Servers” call of the nova Api with the
Changes-Since pa
You have installed a really old version of Nova on that server. What are
you using to install OpenStack?
Best,
-jay
On 06/14/2017 12:13 PM, SGopinath s.gopinath wrote:
Hi,
I'm trying to install Openstack Ocata in
Ubuntu 16.04.2 LTS.
During installation of nova at this step
su -s /bin/sh -c
On 04/21/2017 07:29 AM, TanXin wrote:
I want to know if I subscribed successfully.
Yes.
On 04/10/2017 04:25 PM, Georgios Dimitrakakis wrote:
Hello again,
after some time...
The "nova list" command is very fast indeed and the only problem I
observer is through the EC2 API.
I have found out the following bug and I believe is related to it, what
do you people think?
https://bugs.la
Thanks for the update, Oisin, this is very useful! :)
Best,
-jay
On 03/28/2017 08:39 PM, Oisin O'Malley wrote:
There were 2 separate issues to resolve:
Firstly Nova was appending the default domain name .novalocal to the hostname
it presents via the meta-data service. This can be resolved by
+kevin benton
On 03/28/2017 07:20 AM, Bartłomiej Solarz-Niesłuchowski wrote:
Dear List,
I am a beginner OpenStack user.
Welcome to the OpenStack community! :)
I setup openstack with RDO on Centos 7.
I have 6 machines:
they have two interfaces enp2s0f0 (10.51.0.x) and enp2s0f1 (213.135.4
On 03/26/2017 11:49 PM, Matthew Taylor wrote:
Responded off-list.
For the benefit of the community, would one of you care to repeat the
answer on-list please?
Thanks!
-jay
On 27/3/17 14:22, Oisin O'Malley wrote:
Hi All,
What is the correct way to set an instance's DNS search suffix via
DH
On 03/26/2017 04:06 PM, Georgios Dimitrakakis wrote:
Hello,
can someone let me know if it's expected behavior for the EC2 API to be
very slow in Ocata?
I have an old installation of OpenStack (Icehouse) with NOVA-EC2 and
when requesting an instance's info I am getting them back in 9sec.
In a new
The error is:
On 03/16/2017 07:01 AM, Vikash Kumar wrote:
Placement service credentials do not work
Check that the user "placement" in the project "service" having the
password "testetst" can access the Keystone authentication endpoint at
"http://10.1.110.98:5000";.
Best,
-jay
On 03/08/2017 01:15 PM, Antonio Cappuccio wrote:
Hi all, we are configuring openbaton on top of openstack.
We have installed devstack and openbaton on the same VM, with ubuntu 14.04.
Both openbaton and openstack dashboards look fine, so we think we have
installed both products in the right way.
On 03/06/2017 06:05 AM, Raja T Nair wrote:
Hi,
Can I ask queries about Mirantis community version on this list?
If not, can somebody point to an appropriate link?
Hi Raja,
There's no such thing as Mirantis Community version. Are you referring
to OpenStack Fuel? Perhaps the Mirantis OpenStack
On 01/25/2017 03:19 AM, Eugen Block wrote:
All these instances are in our ceph cluster.
The instance path is defined in nova.conf:
# Where instances are stored on disk (string value)
instances_path = $state_path/instances
If one compute node fails but it's able to initiate a migration, the
sam
On 01/12/2017 05:31 AM, Balazs Gibizer wrote:
Hi,
The flavor field of the Instance object is a lazy-loaded field and the
projects field of the Flavor object is also lazy-loaded. Now it seems to
me that when the Instance object lazy loads instance.flavor then the
created Flavor object is orphaned
On 08/23/2016 10:13 AM, Alexandr Porunov wrote:
Hello,
My server accepts files in chunks (4 KB each; file size can be up to
8 GB). Is it possible to somehow store those chunks in Swift as a
single file? Does somebody know any solution to solve this problem?
Yes, you can do this.
The
On 07/20/2016 10:13 AM, Frank Ritchie wrote:
Hi all
Does anyone know if it is safe to change the name of a Nova Scheduler
server group directly in the database?
Yeah, should be safe to do this. instance_groups.name is a non-unique
column that isn't used for indexes, lookups or really anything
On 07/08/2016 09:59 AM, Brent Troge wrote:
context - high performance private cloud with cpu pinning
Is it possible to map vCPUs to specific pCPUs ?
Currently I see you can only direct which vCPUs are mapped to a specific
NUMA node
hw:numa_cpus.0=1,2,3,4
However, to get even more granular, is
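For reference, the flavor-level NUMA controls are per-NUMA-node vCPU and memory placement, not per-pCPU pinning; a hedged sketch (the flavor name and values are made up):

```console
$ openstack flavor set big.numa \
    --property hw:numa_nodes=2 \
    --property hw:numa_cpus.0=0,1,2,3 \
    --property hw:numa_cpus.1=4,5,6,7 \
    --property hw:numa_mem.0=8192 \
    --property hw:numa_mem.1=8192
```

With hw:cpu_policy=dedicated each vCPU gets a dedicated pCPU, but which specific pCPU it gets is chosen by Nova, not by the flavor.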
On 06/20/2016 02:26 AM, Jean-Pierre Ribeauville wrote:
Hi,
Is there any way for an instance to send data to the compute node
(for my purpose a status byte is enough)?
Generally, no, we don't want guests to be able to communicate with the
host via open channels. If you're looking fo
On 06/15/2016 03:06 AM, Jean-Pierre Ribeauville wrote:
Hi,
Is it possible to add the same aggregate to several AZs via the Horizon GUI?
No, this is not possible. An aggregate may only belong to a single AZ.
Best,
-jay
On 02/22/2016 05:38 AM, Yngvi Páll Þorfinnsson wrote:
Hi
I want to stop instances from being created on one of our compute
nodes, i.e. "compute1".
But I want all current instances on compute1 to be active and available.
I thus disable the nova service for this node:
# nova service-disable
On 12/03/2015 02:24 AM, Richard Raseley wrote:
I am tracking down an issue I am having in Horizon ( HTTPD output
http://paste.openstack.org/show/480704/ ) which lead me to looking at
the Nova extensions. When I try to execute a `nova list-extensions`
command with the debug flag, I get the followi
On 10/16/2015 11:08 AM, Chris Friesen wrote:
Someone recently asked me a question about keypairs and multiple users
and I thought I'd clarify a few things:
1) Each keypair is associated with a specific user.
2) A user cannot see a keypair belonging to another user.
3) If a user is part of mul
On 10/01/2015 04:38 PM, Rahul Cheyanda wrote:
Hello,
I had a question regarding utilization-aware scheduling:
is network utilization considered for scheduling (in stable/kilo
or stable/liberty)?
No, it is not.
Best,
-jay
On 08/13/2015 11:37 PM, Ivan Derbenev wrote:
From: Ivan Derbenev [mailto:ivan.derbe...@tech-corps.com]
Sent: Wednesday, August 5, 2015 1:21 PM
To: openstack@lists.openstack.org
Subject: [Openstack] New instances booting time
Hello guys, I have a question.
We now have OS Kilo + KVM + Ubunt
On 07/26/2015 01:15 PM, Lars Kellogg-Stedman wrote:
So, the Kilo release notes say:
nova-manage migrate-flavor-data
But nova-manage says:
nova-manage db migrate_flavor_data
But that says:
Missing arguments: max_number
And the help says:
usage: nova-manage db migrate_fla
ase see my comment about too many variables changing between
environments to make it possible to determine causal relationships
between the environment and the performance degradation.
Best,
-jay
Regards,
Krishnaprasad
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent:
On 06/29/2015 07:19 AM, Narayanan, Krishnaprasad wrote:
Hello all,
I ran tests under the following settings to measure the IO performance
of MySQL database. I used Sysbench as the client workload generator. I
found that the performance of MySQL (both resource utilization and
application) has deg
On 06/09/2015 06:49 PM, Georgios Dimitrakakis wrote:
Hi all!
I would like to know if it's possible to pack as many VMs as possible
(based on the available resources) on one host
before populating another.
What I have seen so far is that by default it tries to balance the
available VMs on differ
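For what it's worth, the usual way to get stacking (packing) rather than spreading is a negative RAM weigher multiplier in nova.conf on the scheduler host; a hedged sketch (in Kilo-era releases the option lives in [DEFAULT], in later releases under [filter_scheduler]):

```ini
[DEFAULT]
# A negative value inverts the RAM weigher: hosts with the least free RAM
# win, so new VMs pack onto one host before spilling to the next.
ram_weight_multiplier = -1.0
```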
Hi Romain,
I think you may be experiencing this bug:
https://bugs.launchpad.net/nova/+bug/1447884
with a fix currently up:
https://review.openstack.org/#/c/177084/
Best,
-jay
On 06/05/2015 04:02 AM, Aviolat Romain wrote:
Dear Openstack community
I have a strange intermittent problem with c
re to the
people who have very kindly contributed images so far, for suggestions
on what to read.
Thanks again.
-----Original Message-----
From: Muhammed Salehi
To: Jay Pipes
Cc: "openstack@lists.openstack.org"
Sent: Fri, 05 Jun 2015 6:44 AM
Subject: Re: [Openstack] Glance Images
I
Unrelated to the Glance Images building thing, but Jose, I wanted to
point you to the Bifrost project, which does standalone Ironic with
Ansible :)
https://github.com/juliakreger/bifrost/blob/master/README.rst
Might be something you and Julia Kreger could collaborate on?
Best,
-jay
On 06/04/
On 03/31/2015 02:23 AM, somshekar kadam wrote:
Any pointers or links to guidelines for making OpenStack production grade?
This question is too broad to answer. What do you consider "production
grade"? Are you referring to resiliency? Scale? What workloads run on
the cloud? Storage size? Throughput?
On 03/04/2015 09:00 AM, jankihchhat...@gmail.com wrote:
Hi
My colleague and I got into a discussion today about nova-network. From
my understanding, a setup is said to be Neutron if it has Neutron agents
installed and running, not that it is a three-node architecture
meaning it has 3 physi
On 02/08/2015 11:39 PM, Vijaya Bhaskar wrote:
Please guys, Any ideas. I have not been able to fix the issue till now.
On Fri, Feb 6, 2015 at 1:33 PM, Vijaya Bhaskar
mailto:vijayabhas...@sparksupport.com>>
wrote:
Hi all,
I have an openstack setup with ceph as the storage backend and
On 01/02/2015 03:10 PM, Dean Troyer wrote:
Glance has the concept of 'image members' as the mechanism for sharing
images between projects. We need to add this to OSC and I'd like to
work out the commands/options to do so A while back, markwash and I sat
down and sorted a simple set of additions
On 01/06/2015 03:56 AM, ppnaik wrote:
Hi All,
I have a multi node setup of openstack juno on centos 7. After I
instantiate multiple VMs and check the nova-compute.log on compute nodes
it shows a negative value for free disk even though the physical
system has a lot of free memory
I think y
On 12/28/2014 04:36 PM, Hui Kang wrote:
Hi,
I have a MySQL database using the MyISAM engine instead of InnoDB. When I
install the OpenStack services, I can successfully create the databases
such as keystone and glance.
However, when I run
su -s /bin/sh -c "keystone-manage db_sync" keystone
it reports t
On 12/22/2014 11:20 AM, Eriane Leobrera wrote:
Hi OpenStack,
I would really appreciate it if anyone can assist me with my dilemma. Below
are the capabilities I am looking for. We are in the process of deciding
between OpenStack and CloudStack. Capabilities are much more important
for us, integrated an
Hi Jordi, thank you SO much for this email. It is excellent feedback for
our community and our developers. I've provided some comments inline,
but overall just wanted to thank you for bringing some of these product
needs to our attention.
On 12/03/2014 01:42 PM, Jordi Moles Blanco wrote:
Hi e
On 11/28/2014 11:22 AM, Georgios Dimitrakakis wrote:
Jay,
you were right!
If I remove the "availability zone" parameter then filters are applied!!!
Do you know if this is an expected behavior?
Honestly, the way our filter scheduler works with regards to aggregates
is so wonky that I wouldn'
On 11/27/2014 02:29 PM, Georgios Dimitrakakis wrote:
Does it have anything to do with the fact that I am specifically
requesting that node through the availability zone parameter?
If you run the boot command without the availability zone hint, does it
change the behaviour?
-jay
On 11/28/2014 06:40 AM, Chris wrote:
Hello Robert,
Thanks for your answer! Do we need to create new admin/service tenants
for the new services in the new region or should we use the old ones?
It's much easier to use the same ones, in my experience.
Best,
-jay
On 11/26/2014 03:36 PM, Amit Anand wrote:
I also took a look at a command you ran and I tried it; would this be
the correct output (notice I didn't specify a DB in the command)?
Yep, that all looks correct to me. I'm a little unsure what else to
investigate, frankly, Amit :( There's got to be *somethi
| *3DA97D7423D54524806BFF6A19D94F78EEF97338 | localhost |
| nova | *3DA97D7423D54524806BFF6A19D94F78EEF97338 | % |
| root | *7088873CEA983CB57491834389F9BB9369B9D756 | % |
+--+---+---+
10 rows in set (0.00 sec)
On Wed, Nov
On 11/26/2014 02:21 PM, Amit Anand wrote:
Hi Jay - I believe so below is the part that is in the nova.conf
# The SQLAlchemy connection string used to connect to the
# bare-metal database (string value)
connection=mysql://nova:PASSWORD@controller/nova
The PASSWORD is exactly the same what I have
On 11/25/2014 05:10 PM, Amit Anand wrote:
Hi all,
Setup: 3 node, one controller, one compute and one network all separate
machines. Juno install.
Recreating my Juno install, and this is the second time I've gotten this
error when running "su -s /bin/sh -c "nova-manage db sync" nova" (I got
in in
On 11/04/2014 04:34 AM, Chris wrote:
Hello,
We use OpenStack in one of our DC locations (location A). Now we want to
have compute nodes in other locations (location B).
In location B we want to have just compute nodes and an additional
glance server to prevent image transfers from location A t
On 10/30/2014 08:41 AM, Fabrizio Soppelsa wrote:
Do you have suggestions on how to further troubleshoot such an issue?
[1]
Oct 27 13:08:46 lc-20 kernel: ctx4008000f: no IPv6 routers present
Oct 27 13:10:03 lc-20 kernel: tipc: Resetting link
<1.1.20:ethSw0-1.1.10:ethSw0>, peer not responding
Oct
On 09/29/2014 06:51 AM, Nikesh Kumar Mahalka wrote:
How to get nova-compute logs in juno devstack?
If you set the following configuration option in your devstack localrc,
all the log files from the different screen session'd services will end
up in the $LOGDIR directory:
SCREEN_LOGDIR=$LOGD
On 09/15/2014 05:02 AM, Abbass MAROUNI wrote:
Thanks Jay,
Does it require any admin rights to do a custom query on the Nova
database ? And if so do you know where to look for such a query ? Which
part of the nova code to include in the filter ?
I'm not talking about doing something via the pub
into different host aggregates. The
problem I still see with that is that there is no way for the scheduler
to understand which physnet an instance "belongs on"...
-jay
-----Original Message-----
From: Jay Pipes [mailto:jaypi...@gmail.com]
Sent: Tuesday, September 16, 2014 11:5
On 09/15/2014 11:26 PM, Chris wrote:
Hello,
We have an OpenStack setup with a large number of compute nodes spread
across different subnets which are represented as different physnets in OpenStack.
When we start an instance we see that the scheduler chooses a compute
node and tries to spawn the inst