Hi everyone,
According to this article from the Gartner group:
http://blogs.gartner.com/lydia_leong/2012/04/03/citrix-cloudstack-openstack-and-the-war-for-open-source-clouds/
OpenStack is a highly immature platform.
But why? What makes OpenStack so immature?
Any comments on that?
Thanks!
Hi!
I'm sorry but I can't help you; however, I'm very interested in your setup.
I'm also using Juju combined with MAAS. I have some issues at the moment
(juju status, ssh keys and so on...)
Are you also working on Bare Metal or on EC2 instances?
Cheers!
On Thu, May 3, 2012 at 3:04 PM, Jorge Luiz C
Hi everyone!
I was wondering which kind of backend storage are you using for your
nova-volume?
I found a lot of solutions like:
- LVM local
- Sheepdog
- Nexenta for NFS or NFS itself
- SAN
- GlusterFS
- NetApp
Any ideas? Feedback?
I like the GlusterFS ability to use both NFS
> >> ceph (known as rbd) :-)
> >>
> >>
> >> On Sat, May 5, 2012 at 6:06 PM, Sébastien Han
> wrote:
> >>> Hi everyone!
> >>>
> >>> I was wondering which kind of backend storage are you using for your
> >>> nova-volume?
Hi,
Do you also have an error when retrieving from the command line?
~Cheers!
On Wed, May 16, 2012 at 5:38 PM, Leander Bessa Beernaert <
leande...@gmail.com> wrote:
> Hello,
>
> I keep running into this error when I try to list the images/snapshots in
> dashboard: http://paste.openstack.org/sh
It's not an open-ssh issue.
Your virtual machine simply can't fetch the metadata; cloud-init can't, to
be more accurate. Without this, your SSH key is not imported. This is why
the machine is running fine, you can ping it, but you can't access it:
the authorized_keys file on the VM is never populated.
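A quick check from inside the VM (this is the same metadata endpoint used
later in this thread; a sketch, not a full diagnosis):

$ curl http://169.254.169.254/latest/meta-data/

If this hangs or errors out, cloud-init can't fetch your key either.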
- --routing_source_ip=IP_CURRENT_NODE
>>- --my_ip=IP_CURRENT_NODE
>>
>> Regards,
>>
>> Leander
>>
>> On Thu, May 24, 2012 at 4:24 PM, Sébastien Han
>> wrote:
>>
>>> It's not an open-ssh issue.
>>> Your virtual machine si
Why did you reinstall everything?
There is no "just in case"; I mean, you solved your issue, it came from your
configuration, not from OpenStack :)
It's a routing issue, same as earlier.
Check those parameters again, especially the first one:
- --routing_source_ip=IP_CURRENT_NODE
- --my_ip=IP_CURRENT_NODE
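For illustration only, assuming a compute node whose IP is 10.0.0.2 (the
address is hypothetical), the nova.conf lines would read:

--routing_source_ip=10.0.0.2
--my_ip=10.0.0.2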
list. And yes, i'm still using
> a all-in-one setup for now.
>
> Thanks for the tip.
>
>
> On Thu, May 24, 2012 at 9:03 PM, Sébastien Han wrote:
>
>> Why did you reinstall everything?
>> There is no "just in case", I mean you solved your issue, i
Hi everyone,
I setup a ceph cluster and I use the RBD driver for nova-volume.
I can create volumes and snapshots but currently I can't attach them to an
instance.
Apparently the volume is detected as busy, but it isn't, no matter which
name I choose.
I tried from Horizon and the command line, same result.
>> debug1: identity file testkey.pem-cert type -1
>> debug1: Remote protocol version 2.0, remote software version OpenSSH_5.8p1
>> Debian-7ubuntu1
>> debug1: match: OpenSSH_5.8p1 Debian-7ubuntu1 pat OpenSSH*
Why don't you use the RabbitMQ builtin cluster solution?
I set up an active/active cluster with the built-in mechanism and put an
HAProxy on top with a priority on a specific node (weight and backup
options).
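As a rough sketch of that HAProxy part (the names, IPs and check options
are assumptions, not taken from this thread):

listen rabbitmq_cluster 0.0.0.0:5672
    mode tcp
    balance roundrobin
    # priority on rabbit1; rabbit2 is only used if rabbit1 fails
    server rabbit1 10.0.0.1:5672 check weight 10
    server rabbit2 10.0.0.2:5672 check backup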
For the mirrored queues, don't we need to edit the OpenStack code?
Cheers.
~Seb
On Fri,
There are tons of answers to be found by simply googling your issue... and
this problem is more related to the Ubuntu Server mailing list, but anyway,
you should try the 32-bit Ubuntu Server:
http://www.ubuntu.com/start-download?distro=server&bits=32&release=lts
Or try to enable Intel VT-x or AMD-V in your BIOS.
a die-hard RabbitMQ admin -- is there a reason to use clustering over a
> decoupled solution for a greenfield application?
>
> --
> Eric Windisch
>
> On Friday, May 25, 2012 at 17:54 PM, Sébastien Han wrote:
>
> Why don't you use the RabbitMQ builtin cluster solution?
> I s
Hi,
You forgot to add the option:
auth_tcp = "none"
after the 'listen_tls = 0' line in the /etc/libvirt/libvirtd.conf file.
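So the relevant part of the file would look like this (listen_tcp = 1 is
an assumption, for a plain TCP setup):

# /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"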
Cheers!
On Tue, May 8, 2012 at 8:37 PM, Vishvananda Ishaya wrote:
> I haven't tried sasl so hopefully someone else has an idea. I have
> successfully used qemu+ssh with ssh k
Hi,
Which tests did you perform in order to recover your internet connectivity?
It might seem stupid, but did you check your /etc/resolv.conf, try to
uninstall/reinstall nova-network, or flush the iptables rules?
Tell us more about the recovery tests you've already done; after that we
will be able to go further.
Hello,
I'm not affected by this issue. I was, but it was related to the VNC issue:
after disabling the console, the live migration is performed without any
problem.
Regards.
On Sat, Jun 9, 2012 at 8:59 PM, Anne Gentle
wrote:
> Absolutely should be mentioned in the docs - thanks for uncovering!
of them.
Here is the link to the article:
http://www.sebastien-han.fr/blog/2012/06/10/introducing-ceph-to-openstack/
Regards.
Sébastien Han.
Hi,
If you plan to use Swift to store the virtual machine disks and run
instances on top of Swift: it's not possible.
If you plan to use Swift for nova-volume (Cinder) and attaching disks:
it's also not possible.
Swift is *NOT*:
- a filesystem
- a block device
Swift can, however, be used as a backend for Glance to store images.
Hi Florian,
For my own setup, I wanted to achieve a highly available network and avoid
losing the gateway of every running instance if nova-network goes down. I
couldn't afford 2 dedicated nodes to put nova-network itself in a highly
available state. Now if I lose nova-network on a compute
Hi,
The official doc needs to be updated in some places if you want to make
this work with Ubuntu 12.04.
You can check my article here:
http://www.sebastien-han.fr/blog/2012/06/20/setup-cloud-pipe-vpn-in-openstack/
and the fork of the mirantis repo:
https://github.com/leseb/cloudpipe-image-aut
Hi,
I'm not sure I understand everything, but let me give it a try.
By default, the compute nodes store virtual instances in
/var/lib/nova/instances/. Of course, it's part of the compute node's local FS.
If you want to store this directory somewhere else, use a DFS like
GlusterFS or even Ceph or a SAN.
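A minimal sketch for the GlusterFS case, assuming a volume named nova-vol
on a server gluster1 (both names hypothetical):

# on each compute node
mount -t glusterfs gluster1:/nova-vol /var/lib/nova/instances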
Al
Hi everyone,
For those of you who want to achieve HA in Nova, I wrote some resource
agents following the OCF specification. The available RAs are:
- nova-scheduler
- nova-api
- novnc
- nova-consoleauth
- nova-cert
The how-to is available here:
http://www.sebastien-han.fr/blog/2
support of
> Pacemaker, however,
> OCFs are much nicer, and still, I'd be interested in how you solved
> the RabbitMQ issue.
>
> Best regards,
> Christian Parpart.
>
> On Mon, Jul 2, 2012 at 7:38 PM, Sébastien Han wrote:
>
>> Hi everyone,
>>
>>
Which permissions did you set on /var/lib/nova/instances?
On Tue, Jul 3, 2012 at 3:48 PM, Leander Bessa Beernaert wrote:
> Hello all,
>
> I've been trying to get the live migration to work according to the guide
> http://docs.openstack.org/trunk/openstack-compute/admin/content/configuring-live-
Ok, thanks! I will have a look :D
We'll keep in touch ;)
On Tue, Jul 3, 2012 at 4:09 PM, Christian Parpart wrote:
> On Tue, Jul 3, 2012 at 1:35 PM, Sébastien Han wrote:
>
>> Hi,
>>
>> Managing a resource via LSB only checks the PID. If the PID exists the
>> service
Hi!
Usually you get:
2012-07-09 13:58:27.179+: 10227: warning : qemuDomainObjTaint:1134 :
> Domain id=2 name='instance-0002'
> uuid=57aca8a6-d062-4a08-8d87-e4d11d259ac7 is tainted: high-privileges
when you change permissions in libvirt (to root, I presume), which is not
necessary.
2012-07-1
I forgot to ask, did you enable the vnc console?
If so, with which parameters?
On Tue, Jul 10, 2012 at 11:48 AM, Sébastien Han wrote:
> Hi!
>
> Usually you get:
>
> 2012-07-09 13:58:27.179+: 10227: warning : qemuDomainObjTaint:1134 :
>> Domain id=2 name='instance-
0.0.1.1:6081/console
>> vncserver_proxyclient_address=10.0.1.2
>> vncserver_listen=10.0.1.2
>
>
> On Tue, Jul 10, 2012 at 10:49 AM, Sébastien Han
> wrote:
>
>> I forgot to ask, did you enable the vnc console?
>>
>> If so, with which parameters?
>
ue, Jul 10, 2012 at 12:07 PM, Leander Bessa Beernaert <
leande...@gmail.com> wrote:
> That did! Thanks :)
>
> Do you by change have any pointer on getting the live-migration to work
> without running libvirt under root?
>
>
> On Tue, Jul 10, 2012 at 10:55 AM, Sé
n Tue, Jul 10, 2012 at 11:17 AM, Sébastien Han
> wrote:
>
>> Great!
>>
>> The last time I ran the live-migration, it was with GlusterFS and CephFS
>> and I didn't change any permissions in libvirt. I did the live-migration
>> with NFS once but it was in Dia
Np ;)
On Tue, Jul 10, 2012 at 12:33 PM, Leander Bessa Beernaert <
leande...@gmail.com> wrote:
> Ok. Thx for the help :)
>
>
> On Tue, Jul 10, 2012 at 11:30 AM, Sébastien Han
> wrote:
>
>> It's production ready; Red Hat offers commercial support for it.
>
$ sudo nova-manage service disable --host=ESSEX-1 --service nova-compute
It's also good to read the documentation before asking questions.
http://docs.openstack.org/essex/openstack-compute/admin/content/managing-the-cloud.html#d6e6254
Cheers.
On Thu, Jul 12, 2012 at 9:14 AM, Christian Wittwer w
http://www.sebastien-han.fr/blog/2012/07/10/delete-a-vm-in-an-error-state/
On Thu, Jul 12, 2012 at 8:34 PM, Tong Li wrote:
> Hi, Hien,
> I had the same problem. The only way that I can get rid of it is to remove
> the record for that instance from the following 3 mysql db tables in the
> followin
Are you using multi_host option?
What is your nova network manager?
More info about your setup could be useful...
On Wed, Aug 1, 2012 at 9:25 AM, Alessandro Tagliapietra <
tagliapietra.alessan...@gmail.com> wrote:
> Hello,
>
> please help, this thing is getting me crazy. The vm starts fine but
You can always rename them in the dashboard, but this doesn't mean that
the hostname will change... It will remain the same for every VM.
On Thu, Aug 2, 2012 at 9:31 AM, Shake Chen wrote:
> Hi
>
> Now I try to create more instances at the same time in the Dashboard, but
> the instance names are the same.
Hello,
Looks nice, but I look forward to reading the one about the VLAN manager :D
Thanks!
Cheers!
On Fri, Aug 3, 2012 at 8:50 PM, Eugene Kirpichov wrote:
> Hello community,
>
> I'd like to advertise that my colleague Piotr Siwczak and I at
> Mirantis have started a series of blog posts explaining the
Hi,
The interval can be managed via the periodic_interval flag in nova.conf,
which defaults to 60 seconds.
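For example, to double it (a sketch):

# nova.conf
periodic_interval=120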
Cheers!
On Mon, Aug 6, 2012 at 9:40 AM, Trinath Somanchi wrote:
> thanks a lot for the guidance...
>
>
>
> On Mon, Aug 6, 2012 at 12:57 PM, Michael Still <
> michael.st...@canonical.com> wr
Hi,
I think the only way is to edit the code, like so:
- go to line 66
  of /usr/lib/python2.7/dist-packages/nova/virt/disk/api.py
- change it to 'default=mkfs.ext4 -L %(fs_label)s -F %(target)s',
Make sure to purge your /var/lib/nova/instances/_base
It worked for me :)
Let me know
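As an alternative to patching (it's quoted later in this thread), the same
change can be done with a nova.conf flag:

virt_mkfs=default=mkfs.ext4 -L %(fs_label)s -F %(target)s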
/dist-packages/nova/utils.py:219
On Mon, Aug 6, 2012 at 1:51 PM, Jerico Revote wrote:
> Hi,
>
> Are you using Essex or Folsom when it worked for you?
>
> Regards,
>
> Jerico
>
> On 06/08/2012, at 8:48 PM, Sébastien Han wrote:
>
> Cool ;)
>
> On Mo
:
>
> virt_mkfs=default=mkfs.ext4 -L %(fs_label)s -F %(target)s
>
>
> On Aug 6, 2012, at 3:02 AM, Sébastien Han wrote:
>
> Hi,
>
> I think this only way is to edit the code like so:
>
>- go to the line 66
>of /usr/lib/python2.7/dist-packages/nova/virt/disk/ap
Hi!
I think it's a pretty useful feature, a good compromise. As you said, using
a shared FS implies a lot of things and can dramatically decrease
performance compared to the local FS. I tested it and I will use it
for my deployment. I'll be happy to discuss more deeply with you about thi
Hi,
If eth0 is connected to the public switch and eth1 is connected to
the private switch, you can enable IPv4 forwarding on the compute
node. Thanks to this, the VMs will have access to the outside world and
packets will be routed from eth1 to eth0 :).
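Concretely, on the compute node:

# enable forwarding right away
echo 1 > /proc/sys/net/ipv4/ip_forward
# and make it persistent across reboots, in /etc/sysctl.conf:
net.ipv4.ip_forward = 1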
Cheers!
On Tue, Aug 7, 2012 at 5:1
Beernaert
wrote:
> Is there a flag in the nova.conf file or is this something that needs to be
> done on the operating system?
>
>
> On Tue, Aug 7, 2012 at 8:26 PM, Sébastien Han
> wrote:
>>
>> Hi,
>>
>> If eth0 is connected to the public switch and if eth
Hi guys,
Any ideas on this?
https://bugs.launchpad.net/nova/+bug/1033675
https://answers.launchpad.net/nova/+question/205136
Any advice/tip will be truly appreciated :)
Cheers!
Hi everyone,
I tried it a little today.
$ nova meta my_instance set hostname=new_hostname
I didn't get any errors.
Nothing in the instance (curl
http://169.254.169.254/latest/meta-data/hostname) even after reboot and
nothing in the instance db record.
Here is the nova-api trace; it seems to be OK:
proc/sys/net/ipv4/ip_forward set
> to 1. However, I still can't make an instance connect to the outside.
>
> Any thoughts?
> On Tue, Aug 7, 2012 at 11:32 PM, Sébastien Han wrote:
>
>> It's part of the operating system
>>
>> # echo 1 > /proc/sys/ne
Thank you very much for those clarifications :D
On Fri, Aug 10, 2012 at 12:31 AM, Vishvananda Ishaya
wrote:
>
> On Aug 9, 2012, at 1:56 PM, Sébastien Han wrote:
>
>
> Did I miss something?
>
>
> Unfortunately this is confusing because the term metadata is used fo
Hi,
There is a line in /etc/openstack-dashboard/local_settings.py called
TIME_ZONE
# The timezone of the server. This should correspond with the timezone
# of your entire OpenStack installation, and hopefully be in UTC.
TIME_ZONE = "UTC"
Change it, restart Apache and memcached, and that should do the trick.
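On Ubuntu that would be (service names assumed):

sudo service apache2 restart
sudo service memcached restart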
Hi,
glance add is a deprecated command, use glance image-create instead.
When you want to reproduce the API request, you can always use the -d
argument to enable debug mode and see the API request translation.
For glance, you have something like (this is what I got from the -d option):
curl -i -X POST
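For completeness, a hypothetical image-create invocation (the name and
formats are placeholders):

glance -d image-create --name=my-image --disk-format=qcow2 \
  --container-format=bare < my-image.img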
Hi Stackers!
I tried to set up the cloudpipe VPN with Folsom. I followed the
official doc. Did I do something wrong?
I just opened a bug on Launchpad about it:
https://bugs.launchpad.net/nova/+bug/1069573
Any idea?
Cheers!
I'll be glad to offer my help as well.
You can include me into this discussion.
Cheers!
On Mon, Oct 22, 2012 at 5:13 PM, Thierry Carrez wrote:
> Thierry Carrez wrote:
>> So this year around, to simplify organization I thought we would ask for
>> a one-day "openstack" devroom, which could be mer
Hi,
If you use 0 for the root FS, it means that your VM uses the virtual
size of the base image. You can check the virtual size with qemu:
# qemu-img info
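For example (the image path is an assumption, for a file-based Glance
store):

# qemu-img info /var/lib/glance/images/<image-id>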
Cheers!
On Sat, Oct 27, 2012 at 11:08 PM, Jonathan Proulx wrote:
> Hi All,
>
> I know that specifying a zero size root volume in a flavor
Hi Stackers,
I know OpenStack is not designed that way and I don't think it's
possible (or maybe I missed something :)), but I was wondering if there
is any simple workaround to choose a specific floating IP to allocate.
In other words: don't give me a random or the next (N+1) available
floating IP, but let me pick the one I want.
_config.html
>
>
> Cheers ;-)
>
>
> Emilien Macchi
>
> // eNovance Inc. http://enovance.com
> // ✉ emil...@enovance.com ☎ +33 (0)1 49 70 99 80
> // 10 rue de la Victoire 75009 Paris
>
>
tween fixed and floating ips.
>
> When you boot an instance, does it get a private or public IP? And - when
> you ran the network-create command, what IP range did you use?
>
> Kiall
>
> On Oct 30, 2012 9:26 AM, "Sébastien Han" wrote:
>>
>> @Kiall, if it d
anly..
>
> Thanks,
> Kiall
>
>
>
> On Tue, Oct 30, 2012 at 11:46 AM, Kiall Mac Innes
> wrote:
>>
>> Response inline.
>>
>> Thanks,
>> Kiall
>>
>>
>> On Tue, Oct 30, 2012 at 11:04 AM, Sébastien Han
>> wrote:
>>>
>
couple of searches on Google, I found these 3 links:
- https://lists.launchpad.net/openstack/pdfGiNwMEtUBJ.pdf
- http://wiki.openstack.org/HAforNovaDB
- http://www.pixelbeat.org/docs/pacemaker-cloud/
Hope this will help you.
--
Yours sincerely.
Sébastien HAN.
On Tue, Feb 14, 2012 at 12:05
AFAIR it was also the case with Essex.
Cheers!
On Wed, Nov 21, 2012 at 9:46 AM, Razique Mahroua
wrote:
> I had the same issue at first, but Vish is right, once you start spawning
> an instance, everything should be brought up
>
> Regards,
> Razique
>
> Nuage & Co - Razique Mahroua
> raziqu
Hi,
I don't think this is the best place to ask your question, since it's not
directly related to OpenStack but more about Ceph; I've just CC'ed the
Ceph ML. Anyway, CephFS is not ready for production yet, but I heard
that some people use it. People from Inktank (the company behind Ceph)
don't recommend it for production
Hi,
For the cloud controller, use 2 machines in a Pacemaker setup with these
resource agents. Simple as that.
We have 2 branches, one for Essex and one for Folsom.
https://github.com/madkiss/openstack-resource-agents
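A minimal sketch with the crm shell, assuming the RAs are installed under
ocf:openstack (check the repo's README for the exact names):

crm configure primitive p_nova-api ocf:openstack:nova-api \
  op monitor interval="30s" timeout="30s"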
Cheers!
On Wed, Nov 21, 2012 at 9:59 AM, Razique Mahroua
wrote:
> Hey Edwa
> razique.mahr...@gmail.com
>
>
> Le 21 nov. 2012 à 09:56, Sébastien Han a écrit :
>
> AFAIR it was also the case with Essex.
>
> Cheers!
>
>
> On Wed, Nov 21, 2012 at 9:46 AM, Razique Mahroua <
> razique.mahr...@gmail.com> wrote:
>
>> I had the same issu
ity than my small database
> does. Instead, I prefer to perform block migrations rather than live ones
> until cephfs becomes more stable.
>
> Dave Spano
> Optogenics
> Systems Administrator
>
>
> --
> *From: *"Sébastien Han"
>
Hi,
Just tried this; it works, but I'd also like to rename
/var/lib/nova/instances/ according to the hostname. At the moment this only
renames (output from nova show):
| OS-EXT-SRV-ATTR:instance_name | mon-nom
Is it possible?
Cheers!
On Wed, Nov 28, 2012 at 7:31 PM, John Garbutt wrote:
>
t 2:16 PM, Vishvananda Ishaya
> wrote:
>
> >
> > On Nov 28, 2012, at 2:08 PM, Sébastien Han
> wrote:
> >
> >> Hi,
> >>
> >> Just tried this, it works but I'd also like to rename
> /var/lib/nova/instances/ according to the hostname. At th
Hi,
Here is what I would do to achieve what you want:
- take a snapshot of your instance
- export the snapshot from wherever it's stored (the filesystem, for instance)
- import it into Glance and make the image public, or assign it to the
tenant (not 100% sure the latter is possible though...)
- run a new VM with it.
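A rough command-line sketch, assuming a file-based Glance store and the
Folsom-era clients (names and paths are placeholders):

nova image-create my_vm my_snapshot
# export it from the store
cp /var/lib/glance/images/<snapshot-id> /tmp/my_snapshot.img
# re-import it, visible to everyone
glance image-create --name my_snapshot --is-public True \
  --disk-format qcow2 --container-format bare < /tmp/my_snapshot.img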
dummy in project B
- delete the volume from project A
If you use Ceph RBD, for example, it's really easy.
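With RBD it boils down to a straight copy (pool and volume names are
placeholders):

rbd cp volumes/volume-<uuid> volumes/volume-<new-uuid>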
For the rest I don't know.
--
Bien cordialement.
Sébastien HAN.
On Thu, Nov 29, 2012 at 9:55 AM, Lei Zhang wrote:
> Hi Sébastien,
>
> Good ideas. There is a very tri
rity_rules
2012-12-12 23:46:29 TRACE nova.openstack.common.rpc.amqp
self.firewall_driver.refresh_instance_security_rules(instance)
This error seems harmless; as far as I can tell, everything works perfectly.
Even so I'd like to have some input about it (ideally a fix bec
Hi Vish,
The logs don't show more, even after enabling DEBUG...
See the debug output below, right before and after the message:
http://pastebin.com/1LCXuaVi
I forgot to mention but it _only_ appears while rolling out a new instance.
Thanks.
--
Regards,
Sébastien Han.
On Sat, D
-components-ha/
For the latest article *please use* this repo; this is our new location,
with several branches (Essex/Folsom):
https://github.com/madkiss/openstack-resource-agents
--
Regards,
Sébastien Han.
On Mon, Dec 17, 2012 at 9:56 PM, Eugene Kirpichov wrote:
> Right, you only need HA for sw
Thanks Razique,
I still need to edit the official HA doc to give details about this
setup; I don't really have the time this week.
I hope I can free up some time before the end of the year.
Cheers!
--
Regards,
Sébastien Han.
On Tue, Dec 18, 2012 at 12:13 AM, Razique Mahroua
wrote:
> Gre
Hi,
Stupid question: did you restart the compute and API services?
I don't have any problems with those flags.
--
Regards,
Sébastien Han.
On Mon, Jan 7, 2013 at 9:58 AM, Robert van Leeuwen <
robert.vanleeu...@spilgames.com> wrote:
> Hi,
>
> I'm trying to get all logg
!
Cheers!
--
Regards,
Sébastien Han.
On Wed, Jan 9, 2013 at 8:14 PM, Alex Vitola wrote:
> I have 2 projects in my environment:
>
> ProjectQA1: ID -> 0001
> ProjectQA2: ID -> 0002
>
> root@Controller:# keystone tenant-list
> +-++-+
> |
Cool!
--
Regards,
Sébastien Han.
On Thu, Jan 10, 2013 at 11:15 AM, Alex Vitola wrote:
> Changed directly by the database.
>
> Not the best way but I did because it was an environment.
>
> So far I have not found any problems
>
>
> mysql> use nova;
> mysql&
If an admin user makes it public, that's also possible.
--
Regards,
Sébastien Han.
On Fri, Jan 11, 2013 at 3:40 AM, Lei Zhang wrote:
> why not try boot from snapshot. That's will save some time.
>
>
> On Thu, Jan 10, 2013 at 5:18 AM, Sébastien Han
> wrote:
>>
>>
So you prefer to be asked for a password instead of logging in passwordless?
As suggested, edit the base image and create a password for the user :)
--
Regards,
Sébastien Han.
On Tue, Jan 22, 2013 at 6:08 PM, Balamurugan V G
wrote:
> My ssh debug logs are below:
>
> $ ssh -vvv roo
+ RBD (Ceph)
+1 for the matrix, this will be really nice :-)
--
Regards,
Sébastien Han.
On Wed, Jan 30, 2013 at 5:04 PM, Tim Bell wrote:
>
>
> Is there a list of devices which are currently compatible with cinder and
> their relative functionality ?
>
>
>
> Looking
Just added some stuff about RBD where E refers to Essex.
--
Regards,
Sébastien Han.
On Thu, Jan 31, 2013 at 11:20 AM, Avishay Traeger wrote:
> openstack-bounces+avishay=il.ibm@lists.launchpad.net wrote on
> 01/31/2013 12:37:07 AM:
>> From: Tom Fifield
>> To: openstack@l
gards,
Sébastien Han.
On Thu, Jan 31, 2013 at 7:40 AM, Wolfgang Hennerbichler
wrote:
> Hi,
>
> I'm sorry if this has been asked before. My question is: can I integrate ceph
> into openstack's nova & cinder in a way, that I don't need
> /var/lib/nova/instances anymo
compute. With boot-from-volume it's one RBD per instance, which brings
way more IOPS to your instance. With boot-from-volume you can also enjoy
the RBD cache on the client side, a cache that also helps with buffered
IO.
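To enable it, something like this in ceph.conf on the client side (a
sketch; check that your Ceph version supports it):

[client]
    rbd cache = true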
Cheers!
--
Regards,
Sébastien Han.
On Thu, Jan 31, 2013 at 7:
Hum, ok, now I wonder whether you created a network at all. What does
# nova-manage network list
show?
--
Regards,
Sébastien Han.
On Mon, Feb 4, 2013 at 7:09 PM, JR wrote:
> Hi Sébastien
>
> Problem is, I can't run nova network-list either!
>
> stack@gpfs6-int:~$ nova network-list
> ERR
nova network-list, then look for the ID and add the following to your
boot command:
nova boot bla bla bla --nic net-id=<network-id>
Let me know if it's better.
Cheers.
--
Regards,
Sébastien Han.
On Mon, Feb 4, 2013 at 6:24 PM, JR wrote:
> Greetings,
>
> I'm running a devstack test
What's the problem with having one IP per service pool?
--
Regards,
Sébastien Han.
On Wed, Feb 13, 2013 at 8:45 PM, Samuel Winchenbach wrote:
> What if the VIP is created on a different host than keystone is started
> on? It seems like you either need to set net.ipv4.ip_nonloc
y create a resource group with all the OpenStack services inside it
(it's ugly, but if that's what you want :)). Give me more info about your
setup and we can go further in the discussion :).
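Something along these lines with the crm shell (resource names are
assumptions):

crm configure group g_openstack p_keystone p_glance-api p_nova-api \
  p_nova-scheduler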
--
Regards,
Sébastien Han.
On Thu, Feb 14, 2013 at 3:15 PM, Samuel Winchenbach wrote:
> T
> he on
Hum, I don't see the problem; it's possible to load-balance VIPs with LVS,
they are just IPs... Can I see your conf?
--
Regards,
Sébastien Han.
On Thu, Feb 14, 2013 at 8:34 PM, Samuel Winchenbach wrote:
> Well, I think I will have to go with one ip per service and for
Ok, but why direct routing instead of NAT? If the public IPs are _only_
on the LVS, there is no point in using LVS-DR.
LVS has the public IPs and redirects to the private IPs; this _must_ work.
Did you try NAT? Or at least can you give it a shot?
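A sketch with plain ipvsadm (the VIP, real IPs and the Keystone port are
assumptions):

# virtual service on the public VIP, round-robin
ipvsadm -A -t 1.2.3.4:5000 -s rr
# real servers on private IPs, -m = masquerading (NAT)
ipvsadm -a -t 1.2.3.4:5000 -r 192.168.0.10:5000 -m
ipvsadm -a -t 1.2.3.4:5000 -r 192.168.0.11:5000 -m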
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 3:55
But if you are in a hurry and looking for a DFS, then
GlusterFS seems to be a good candidate. NFS works pretty well too.
Cheers.
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 4:49 PM, JuanFra Rodriguez Cardoso <
juanfra.rodriguez.card...@gmail.com> wrote:
> Another one:
&
Well if you follow my article, you will get LVS-NAT running. It's fairly
easy, no funky stuff. Yes you will probably need the postrouting rule, as
usual :). Let me know how it goes ;)
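The postrouting rule typically looks like this (the subnet and interface
are assumptions):

iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE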
--
Regards,
Sébastien Han.
On Fri, Feb 15, 2013 at 8:51 PM, Samuel Winchenbach wrote:
> I
> didn
w.drbd.org/users-guide-8.3/s-resolve-split-brain.html
Cheers
--
Regards,
Sébastien Han.
On Tue, Feb 19, 2013 at 2:38 AM, Samuel Winchenbach wrote:
> Hi All,
>
> I recently switched from CentOS 6.3 to Ubuntu LTS server and have started
> encountering some really odd problems w