I am testing qemu + GlusterFS using libgfapi, but I find that GlusterFS
support is not enabled by default. How and where can I get the packages
easily?
[1] https://bugs.launchpad.net/cloud-archive/+bug/1246924
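A quick way to check whether an installed qemu binary was built with libgfapi
support (a sketch; the binary name may differ on your system):
  ldd $(which qemu-system-x86_64) | grep -i gfapi
If nothing is printed, the build has no GlusterFS support and you need
packages built with the --enable-glusterfs configure option.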
--
Lei Zhang
Blog: http://jeffrey4l.github.io
twitter/weibo: @jeffrey4l
Guys,
Can I file a bug about this issue? If yes, where? The Neutron Launchpad page?
Thanks,
Thiago
On 12 November 2013 04:24, Martinx - ジェームズ wrote:
> At least one guy from Rackspace is aware of this problem, thanks Anne and
> James Denton! ^_^
>
> Hope to talk with James Page on IRC tomorrow, to
Please try CONF.docker.registry_default_ip to get the value in driver.py at
line 280, per the traceback.
On Monday, November 18, 2013, Judd Maltin wrote:
>
> My code looks like this, and it works just fine:
>
> -
> docker_opts = [
> cfg.IntOpt('registry_default_port',
>            default=5042
Hello Zhang, I followed the steps mentioned in the link you gave, but the
error is still there. Please help.
On Monday, 18 November 2013 10:04 AM, Hua ZZ Zhang wrote:
Try the work-around on ask.openstack.org here:
https://ask.openstack.org/en/question/4996/importerror-running-swift/
pragya
My code looks like this, and it works just fine:
-
docker_opts = [
    cfg.IntOpt('registry_default_port',
               default=5042,
               help=_('Default TCP port to find the '
                      'docker-registry container'),
               deprecated_group='DEFAULT',
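For reference, a minimal nova.conf stanza setting this option would look like
the following, assuming the option is registered under the [docker] group
(with DEFAULT as the deprecated fallback):
  [docker]
  registry_default_port = 5042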
Hello Chenrui, thank you for answering. I've made some changes, as you
suggested:
cinder quota-update --gigabytes <#number> $tenant-id
cinder quota-show $tenant-id
I'll try to create the volumes again tomorrow and I'll post the results
here.
Regards.
Guilherme.
2013/11/19 Chenrui (A
Please check your tenant quota and calculate how much free space you have. In Grizzly:
cinder quota-show ${tenant-id}
cinder list
If you are using Havana, there is a convenient command “cinder quota-usage”
:)
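For example, on Grizzly you can compare the quota limit with the sum of the
sizes of existing volumes, and on Havana query usage directly (the tenant
name "demo" is just an example):
  TENANT_ID=$(keystone tenant-list | awk '/ demo / {print $2}')
  cinder quota-show $TENANT_ID     # shows the gigabytes limit
  cinder list                      # sum the Size column to see what is in use
  cinder quota-usage $TENANT_ID    # Havana only: limits and usage together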
From: Guilherme Russi [mailto:luisguilherme...@gmail.com]
Sent: 18 November 2013 21:12
To: Razique Mahro
Yah, well it's always better to have a production-deployment schedule
based on a specific OpenStack release and stick to it.
Consider pushing the last "stable" version to production after extended
tests. Truth is, every deployment should be made after a benchmark, a
specific testing protocol to mak
I've heard of a few ways:
1) use existing puppet or chef or fabric or salt or dsh scripts to treat them
like normal config files
2) use rsync to sync the ring files every so often (e.g. via cron; see the sketch below)
3) host a web server on the admin box where you made the scripts and wget or
curl them in the swift
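For option 2, a minimal sketch of a cron-driven push from the admin box
(hostnames and paths are illustrative):
  # /usr/local/bin/push-rings.sh on the admin box
  for node in storage01 storage02 storage03 proxy01; do
      rsync -az /etc/swift/*.ring.gz "${node}:/etc/swift/"
  done
  # crontab entry, e.g. every 15 minutes:
  # */15 * * * * /usr/local/bin/push-rings.sh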
John, thanks for the good tips. I have a few followup questions.
During propagation do all of the ring.gz files need to be updated and
synchronized simultaneously? Is there a fuzzy-time period allowed where
some swift storage boxes can be using the old ring and others can be using
the new ring? Wh
Hello.
I am wondering how others manage the {account|container|object}.ring.gz
files for large deployments. Specifically, how do you propagate these files
once a change is made?
I am manually copying them out to every node after a change, but that does
not seem very practical to me. I could store
I did nothing besides a Compute Node reboot... Everything is back to normal
now, a few hours after restarting it...
It is easy to reproduce this: every time I reboot a Compute Node, some
instances don't get their IPs... I need to wait hours for things to get back
to normal, without any intervention...
Unfor
Hi,
I'm an OpenStack newbie and I'm trying to set up the following on Havana:
the "Provider router with private networks" setup (as described in
http://docs.openstack.org/trunk/install-guide/install/apt-debian/content/section_networking-provider-router_with-provate-networks.html)
on the following 3 pi
Now, that's interesting, you didn't even restart a service?
Did you find anything in the dhcp-agent logs?
On 18 Nov 2013, at 10:46, Martinx - ジェームズ wrote:
> Thank you Razique!
>
> Out of nowhere, all instances got their IPs automatically again, without even
> restarting them... Have no idea abo
Thank you Razique!
Out of nowhere, all instances got their IPs automatically again, without even
restarting them... I have no idea what happened.
But this is very weird: every time I restart a compute node, those network
problems appear... No idea about the source of this problem...
Tks aga
Check the dhcp-agent logs especially when you force a dhcp renew on these
instances.
Meanwhile, use tcpdump with:
`tcpdump -i ROUTER-INTERFACE -vvv -s 1500 '((port 67 or port 68) and (udp[8:1]
= 0x1))'`
if you want to check the DHCP packets for a particular instance, get its MAC
and:
`tcpdump -
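A per-MAC filter along those lines might look like this (the interface name
and MAC address below are placeholders):
  tcpdump -i ROUTER-INTERFACE -vvv -s 1500 'ether host fa:16:3e:00:00:01 and (port 67 or port 68)'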
Hi All.
I recently added an EMC V-Max storage system and realized that multipath is
not working. The device is called /dev/mapper/ but when I look at the
multipath -ll output, I see just one path. It is working fine with a NetApp
3250.
Looking into differences, I see the output of the iscsiadm dis
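For comparison, the usual way to re-run discovery and inspect the sessions
and paths is something like this (the portal IP is a placeholder):
  iscsiadm -m discovery -t sendtargets -p 192.0.2.10
  iscsiadm -m session
  multipath -ll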
Just for the record...
A brand new instance, from the same Tenant, that ended up running within
the very same "problematic" Compute Node, gets its IP normally, look:
---
cloud-init start-local running: Mon, 18 Nov 2013 18:15:59 +. up 3.50
seconds
no instance data found in start-local
ci-info:
Okay... I'm calm... :-P
This is the second time I'm seeing this with Havana.
A Compute Node reboots, and lots of instances don't get their IPs anymore, look:
---
cloud-init start-local running: Mon, 18 Nov 2013 16:34:06 +. up 18.19
seconds
no instance data found in start-local
cloud-init-nonet wa
Hey Martin :)
On 18 Nov 2013, at 8:40, Martinx - ジェームズ wrote:
> Guys,
>
> My Havana (Ubuntu-based) Compute Node was restarted and lots of instances
> do not get an IP anymore.
>
> Tips?!
Stay calm
>
> It is random, I mean, some instances on this same compute node are normal,
> while others have
Hello list,
I'm trying to set up an Oracle Linux 6.4 instance to take its networking
information from the config drive instead of the metadata server (no DHCP
server allowed).
The ISO is attached to the instance and I can access it after boot.
But cloud-init (0.7.2) doesn't do anything with the information from
ther
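A quick way to confirm what cloud-init should be seeing, assuming the
instance was booted with a config drive (the device label and paths are the
usual defaults, not specific to this setup):
  nova boot --config-drive true --image <image> --flavor <flavor> <name>
  mount /dev/disk/by-label/config-2 /mnt
  cat /mnt/openstack/latest/meta_data.json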
On 11/18/2013 4:15 AM, sylecn wrote:
Thanks for all the hints. Finally I have a working network.
The biggest problem was that neutron.agent.metadata.agent had bad auth
parameters set in the metadata_agent.ini config file.
The metadata agent log file did not catch my attention earlier.
Another problem is net
Guys,
My Havana (Ubuntu-based) Compute Node was restarted and lots of instances
do not get an IP anymore.
Tips?!
It is random, I mean, some instances on this same compute node are normal,
while others have no IP.
I really need help here because my client's web site is completely offline
now.
You don't have the keystone tables, so keystone-manage didn't create them.
Maybe the credentials in the configuration file are not correct. You should
use exactly the same credentials to connect using the `mysql` command and
test if you are actually able to create an empty table. Remember to use
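For example, a quick manual check with the same credentials keystone uses
(the user, password, host, and database below are placeholders taken from a
typical connection string):
  mysql -u keystone -pKEYSTONE_DBPASS -h controller keystone \
        -e "CREATE TABLE perm_test (id INT); DROP TABLE perm_test;"
  keystone-manage db_sync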
I'm Maximiliano Venesio from MercadoLibre and I have been working with
Alejandro on this issue.
To answer your last questions:
We are injecting the iptables PREROUTING rule directly into the compute node
(host machine), as it is configured and working in Essex.
Regarding the question about i
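A typical metadata PREROUTING rule of that kind looks like the following (the
DNAT target address and port are placeholders; the exact rule in use here may
differ):
  iptables -t nat -A PREROUTING -d 169.254.169.254/32 -p tcp --dport 80 \
           -j DNAT --to-destination 192.0.2.1:8775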
Hello again guys, I'm trying to overcome another issue with Cinder. I'm
trying to create a 500 GB volume; this is my disk:
pvscan
PV /dev/sdb1 VG cinder-volumes-2 lvm2 [931,51 GiB / 531,51 GiB free]
Total: 1 [931,51 GiB] / in use: 1 [931,51 GiB] / in no VG: 0 [0 ]
But when I try:
c
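For reference, checking the volume group and the tenant quota before creating
a volume of that size would look roughly like this (the volume name and
tenant id are placeholders):
  vgs cinder-volumes-2
  cinder quota-show $TENANT_ID | grep gigabytes
  cinder create --display-name test-volume 500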
On Thu, 2013-11-14 at 23:06 -0600, Jonathan Bryce wrote:
> The current difference in implementation is that to be part of the
> Core OpenStack Project, a module must receive Board approval to be in
> that set. Another intended difference is that the Core OpenStack
> Project definition would be used
On Fri, 2013-11-15 at 09:53 +0100, Thierry Carrez wrote:
> Stefano Maffulli wrote:
> > On 11/14/2013 09:56 AM, Boris Renski wrote:
> >> If per bylaws any integrated project can call itself "OpenStack Blah"
> >> then we return to the question of current difference between integrated
> >> and core.
Thanks for all the hints. Finally I have a working network.
The biggest problem was that neutron.agent.metadata.agent had bad auth
parameters set in the metadata_agent.ini config file.
The metadata agent log file did not catch my attention earlier.
Another problem is that the net, router, and VM must all be in the same tena
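For anyone hitting the same thing, the auth-related options in
metadata_agent.ini look roughly like this on Havana (hostnames, tenant, and
secret are placeholders):
  [DEFAULT]
  auth_url = http://controller:5000/v2.0
  auth_region = RegionOne
  admin_tenant_name = service
  admin_user = neutron
  admin_password = NEUTRON_PASS
  nova_metadata_ip = controller
  metadata_proxy_shared_secret = METADATA_SECRET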
Any pointers to what could be wrong?
On Mon, Nov 18, 2013 at 2:58 AM, Krishanu Dhar wrote:
> Antonio,
>
> Yes, it was weird. So, as requested, below is the dump. Note: ignore the
> table named "tenant"; I created it manually to check whether the keystone
> commands go through. (They did.)
>
>
> te
On Mon, 2013-11-18 at 07:40 +, Radcliffe, Mark wrote:
> We need to distinguish between (1) adding the modules to the "Core
> OpenStack Project" which requires a recommendation by the TC and
> approval by the Board and (2) adding the modules to an integrated
> release (including Core OpenStack P
Try the work-around on ask.openstack.org here:
https://ask.openstack.org/en/question/4996/importerror-running-swift/
pragya jain
Hi Lorin,
I had a look at an Ubuntu installation and the package install scripts do not
configure neutron-ovs-cleanup to run at boot time.
OVS has its own database, and at boot it recreates any internal interfaces
(qr-, qg- and dhcp taps) in the root namespace. The l3 and dhcp agents don't
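If you want to clean up manually after a reboot, running the cleanup tool
before restarting the agents usually does it (a sketch; the config path and
service names are the Ubuntu defaults):
  neutron-ovs-cleanup --config-file /etc/neutron/neutron.conf
  service neutron-plugin-openvswitch-agent restart
  service neutron-l3-agent restart
  service neutron-dhcp-agent restart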