It seems that radvd was not spawned successfully. In the l3-agent log:
Dec 18 11:23:34 ci-overcloud-controller0-oxzkjphwfyw3 neutron-l3-agent:
Stderr: '/usr/bin/neutron-rootwrap: Unauthorized command: ip netns exec
qrouter-6066faaa-0e35-4e7b-8988-7337c493bad7 radvd -C
/var/run/neutron/ra/6066faaa-0
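For reference, "Unauthorized command" from neutron-rootwrap usually means the rootwrap filter that lets the L3 agent launch radvd is missing or not being picked up. A rough sketch of the kind of entry that needs to be present in the l3 filters file (the exact path depends on the packaging, so treat it as an assumption):

    # e.g. /etc/neutron/rootwrap.d/l3.filters
    [Filters]
    radvd: CommandFilter, radvd, root

It is also worth checking that filters_path in rootwrap.conf actually includes that directory.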
On 23.12.14 03:08, Mark Kirkwood wrote:
> I've been taking a look at this
> (https://github.com/stackforge/swift-ceph-backend and forks etc). Looks
> good.
> 1/ Async updates
>
> There's a comment in rados_server.py about not handling these. What
> exactly is the issue? (I note in Juno we need to
Hi All,
I am working on the CPU pinning feature on the master branch of Nova. The
setup is DevStack.
I want to pin vCPUs to pCPUs using flavors, and I have only one compute
node. No host aggregates are created.
I have created a CPU pinning flavor and set the cpuset and vcpupin
parameters using
1) nova flavor-create p
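A minimal sketch of the flavor-based way to request dedicated pCPUs on current master; the flavor name, ID and sizes below are made up:

    nova flavor-create pinned.medium 100 4096 20 4
    nova flavor-key pinned.medium set hw:cpu_policy=dedicated
    # optionally limit which host CPUs nova may use, in nova.conf on the compute node:
    #   [DEFAULT]
    #   vcpu_pin_set = 4-11

With that extra spec in place (and the NUMA topology scheduler filter enabled), instances booted from the flavor should get their vCPUs pinned without per-instance cpuset/vcpupin tweaks.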
On 22.12.14 21:00, Amit Anand wrote:
> Thanks Christian, I think that's exactly what I needed - I will try and
> make 2 new rings and see how that works. You wouldn't perchance know
> how I would limit the replicas, i.e. 2 in the Paris region and 1 at HQ?
> I only want one replica to go to the HQ region
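As far as I know Swift has no per-region replica cap; with three replicas and devices in two regions, the as-unique-as-possible placement ends up with two replicas in one region and one in the other, and relative device weights influence which region gets the second copy. A rough sketch of assigning devices to regions while building a ring; IPs, ports and device names are made up:

    swift-ring-builder object.builder create 10 3 1
    # Paris devices -> region 1
    swift-ring-builder object.builder add r1z1-192.168.10.11:6000/sdb1 100
    swift-ring-builder object.builder add r1z2-192.168.10.12:6000/sdb1 100
    # HQ devices -> region 2
    swift-ring-builder object.builder add r2z1-192.168.20.11:6000/sdb1 100
    swift-ring-builder object.builder rebalance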
> Using RDO Icehouse packages I've set up an infrastructure atop RHEL 6.6
> and am seeing very unpleasant performance for the storage.
>
> cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200
>
> Results are just miserable. Going from 1.2G/s down to 20M/s seems to be a big
> degradation.
On Tue, Dec 23, 2014 at 6:14 AM, Erik McCormick
wrote:
> I have a slightly different take on some of this:
>
> On Mon, Dec 22, 2014 at 11:52 AM, Jay Pipes wrote:
>>
>> On 12/22/2014 11:20 AM, Eriane Leobrera wrote:
>>
>>> Hi OpenStack,
>>>
>>> I would really appreciate if anyone can assist me on
Hi Team,
Is it possible to expand the partition in the above setup?
Scenario: Disk sda1 is 100% utilized.
Regards,
Dhanesh M.
On Tue, Dec 23, 2014 at 1:43 PM, Christian Schwede <
christian.schw...@enovance.com> wrote:
> On 22.12.14 21:00, Amit Anand wrote:
> > Thanks Christian, I think that's
On 23.12.14 11:05, dhanesh1212121212 wrote:
> Is it possible to expand the partition in the above setup?
>
> Scenario : Disk sda1 is 100% utilized.
Do you mean to extend a partition on a disk or the Swift region/zone?
You could extend the partition on the disk if you're using something
like LVM
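If it is an LVM-backed filesystem, a rough sketch of growing it by adding another disk or partition (the volume group and logical volume names below are assumptions about the layout):

    pvcreate /dev/sdb1                       # prepare the new partition
    vgextend vg_data /dev/sdb1               # add it to the existing volume group
    lvextend -L +100G /dev/vg_data/lv_data   # grow the logical volume
    resize2fs /dev/vg_data/lv_data           # grow an ext3/ext4 filesystem online
    # for XFS: xfs_growfs /mount/point

If sda1 is a plain (non-LVM) partition, growing it in place generally means repartitioning, which is much more disruptive.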
Hi everyone,
I am trying to deploy an OpenStack cluster using Juju Charms + MAAS. I just
tried to deploy mysql with bind_address = 10.0.11.5, but the options are
not applied. When I run juju get mysql, it reports the options correctly;
however, they are not actually applied.
vijay@openstack-test:~$ juju status
e
Dear list,
We've recently faced an issue regarding live migration on a Juno OpenStack
installation using Ceph Giant shared storage.
The error that appears is
DestinationDiskExists: The supplied disk path
(/var/lib/nova/instances/instance-id-hidden) already exists, it is expected
not to exist.
OpenStack Security Advisory: 2014-041
CVE: Requested
Date: December 23, 2014
Title: Glance v2 API unrestricted path traversal
Reporter: Masahito Muroi (NTT)
Products: Glance
Versions: up to 2014.1.3, and 2014.2 versions up to 2014.2.1
Description:
Masahito Muroi from NTT reported a vulnerability in
On 12/23/2014 01:38 AM, Robert van Leeuwen wrote:
>> Using RDO Icehouse packages I've set up an infrastructure atop
>> RHEL 6.6 and am seeing very unpleasant performance for the
>> storage.
>>
>> cinder-volume # dd if=/dev/zero of=baloon bs=1048576 count=200
>>
>> Results are just miserable.
There is a need for an end-user service for scheduled backups/snapshots.
Raksha (https://wiki.openstack.org/wiki/Raksha) was close but does not seem to
be actively developed at the moment. Extensions for scheduled remote
replication would be really welcome.
It is not clear to me how the st
> From the above numbers it doesn't sound like the storage node is the culprit -
> the 50% drop happens on the compute node going from bare metal to virtual. So
> I'm inclined to think it's a matter of tuning virtio (if that is even possible).
Some tuning tips for kvm: http://www.linux-kvm.org/page/Tuning_KVM
We saw a m
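One knob that often matters here is the libvirt disk cache mode, along with making sure guests actually use virtio for their disks; a sketch of the relevant bits (the values are only an example, and whether they help depends on the backing storage):

    # nova.conf on the compute node
    [libvirt]
    disk_cachemodes = "file=none,block=none"

    # present the image disk via virtio rather than IDE
    glance image-update --property hw_disk_bus=virtio <image-id>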
On 12/23/2014 01:36 PM, Robert van Leeuwen wrote:
>> From the above numbers it doesn't sound like the storage node is the
>> culprit - the 50% drop happens on the compute node going from bare
>> metal to virtual. So I'm inclined to think it's a matter of tuning
>> virtio (if that is even possible).
>
> Some tuning tips f
Hi Samuel,
Sometimes a live migration can't be rolled back when an error occurs, so the
destination instance disk path remains there. You have to resolve this by
deleting that directory manually, then please try again.
On Tue, Dec 23, 2014 at 11:24 PM, samuel wrote:
> Dear list,
>
> We've rece
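For anyone hitting the same DestinationDiskExists error, a cautious sketch of the manual cleanup on the destination host; the directory name is a placeholder since the real path was hidden above, and you should first confirm the instance is not actually defined or running there:

    # on the destination compute node
    virsh list --all                                # make sure the domain is not present here
    rm -rf /var/lib/nova/instances/<instance-dir>   # remove the stale directory, then retry the migration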
I have been trying to fix a similar issue in
https://review.openstack.org/#/c/134693/
not sure whether it also fixes your issue.
2014-12-24 10:09 GMT+08:00 Yaguang Tang :
> Hi samuel,
>
> Sometimes a live migration can't be rolled back when an error occurs, so the
> destination instance disk path remains ther
Hi, all. Trying to get a feel for OpenStack, so I installed DevStack.
And it works great... until I have the audacity to restart it. Apache
doesn't go fully live unless I do a restart -- and if I do that, I don't
get past the login screen (just tells me that admin had trouble
authenticating)
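For what it's worth, DevStack isn't really designed to survive a host reboot; rather than restarting Apache by hand, the usual recovery is to tear the environment down and rebuild it, roughly:

    cd ~/devstack
    ./unstack.sh   # stop and clean up whatever is still half-running
    ./stack.sh     # rebuild the environment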