On 08/10/2012 12:17 PM, Vishvananda Ishaya wrote:
$> curl -v http://169.254.169.254:8775/
* About to connect() to 169.254.169.254 port 8775 (#0)
* Trying 169.254.169.254... Connection timed out
* couldn't connect to host
* Closing connection #0
curl: (7) couldn't connect to host
Any idea wher
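For what it's worth, here is a minimal sketch of the first checks I would run (assuming the metadata API is the one bundled with nova-api on the controller; adjust hosts to your setup):
$ # on the controller: is anything listening on the metadata port?
$ sudo netstat -tlnp | grep 8775
$ # on the compute node: is the 169.254.169.254 DNAT rule in place?
$ sudo iptables -t nat -S | grep 169.254.169.254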
On Aug 9, 2012, at 7:31 PM, Xin Zhao wrote:
> Hello,
>
> In my essex install on RHEL6, there is a problem with the metadata service.
> The metadata service works for instances running on the controller node, where
> the nova-api(metadata service) is running. But for the other worker nodes,
> th
On Aug 9, 2012, at 3:22 AM, Alessandro Tagliapietra
wrote:
> Hello guys,
>
> I've just installed kernel 3.4 from the Ubuntu kernel PPA archive, and after this
> upgrade the VMs aren't able to get a DHCP address, but with tcpdump I see the
> request and offer on the network.
> Has someone else experienced
On Aug 9, 2012, at 6:28 PM, Daryl Walleck wrote:
> As part of my work on Tempest, I've created an alternate backend
> configuration to use XML requests/responses. This right now mostly covers
> Nova, but could easily be extended to test other projects as well. I hadn't
> pushed it yet because
On Aug 9, 2012, at 8:14 PM, Doug Davis wrote:
>
> Situations like this are always interesting to watch. :-)
>
> On the one hand its open-source, so if you care about something then put up
> the resources to make it happen.
This attitude always bothers me. This isn't some Open Source labor
I would start by checking the iptables rules and routes that are being set up (in
the VMs and outside).
If you are running a flat (no DHCP) network, that usually makes it a lot
harder as well.
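A rough sketch of what those checks could look like (the chain name below assumes nova-network and is not from the original mail):
$ # on the compute host
$ route -n
$ sudo iptables -t nat -L nova-network-PREROUTING -n -v
$ # inside the VM
$ ip route
$ curl -m 5 http://169.254.169.254/latest/meta-data/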
On 8/9/12 7:31 PM, "Xin Zhao" wrote:
>Hello,
>
>In my essex install on RHEL6, there is a problem with the metadata
>serv
Hello,
In my Essex install on RHEL6, there is a problem with the metadata service.
The metadata service works for instances running on the controller node, where
the nova-api (metadata service) is running. But for the other worker nodes,
the metadata service is intermittent, i.e. the instances so
As part of my work on Tempest, I've created an alternate backend configuration
to use XML requests/responses. This right now mostly covers Nova, but could
easily be extended to test other projects as well. I hadn't pushed it yet
because it seemed to be low priority, but I'd be more than glad to
Situations like this are always interesting to watch. :-)
On the one hand it's open-source, so if you care about something then put
up the resources to make it happen.
On the other hand, that doesn't mean that as a developer you get to ignore
the bigger picture and only do 1/2 of the work becaus
Has anyone surveyed those subscribed to openstack-operators@lists.openstack.org
list for usage? While imperfect, at least it would be asking the
constituency that might be most affected. You might also consider asking
whether they would prefer JSON or XML, regardless of what they use today. I agre
Thank you very much for those clarifications :D
On Fri, Aug 10, 2012 at 12:31 AM, Vishvananda Ishaya
wrote:
>
> On Aug 9, 2012, at 1:56 PM, Sébastien Han wrote:
>
>
> Did I miss something?
>
>
> Unfortunately this is confusing because the term metadata is used for two
> different things.
>
> t
Right, hope the document can help you. It is in Chinese.
The network is FlatDHCP + multi-host.
http://www.chenshake.com/ubuntu-12-04-openstack-essex-multinode-installation/
On Thu, Aug 9, 2012 at 10:51 PM, Kiall Mac Innes wrote:
> Also the metadata host should be set to 127.0.0.1 for multihost=Tr
On 08/09/2012 12:13 AM, Alessandro Tagliapietra wrote:
Hello guys,
I've just installed kernel 3.4 from the Ubuntu kernel PPA archive, and after this
upgrade the VMs aren't able to get a DHCP address, but with tcpdump I see the
request and offer on the network.
Has someone else experienced this? I've tried
On Aug 9, 2012, at 3:32 PM, George Reese wrote:
> Why aren't the integration tests both XML and JSON?
The simple answer is that no one has taken the time to write them. Our devstack
exercises use the python client bindings. Tempest has json clients but no xml
clients[1]. I think this demonstr
On Aug 9, 2012, at 1:56 PM, Sébastien Han wrote:
>
> Did I miss something?
Unfortunately this is confusing because the term metadata is used for two
different things.
The metadata visible to the instance is a replication of the AWS metadata
server. It is constructed from the database (most
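To illustrate the two meanings side by side (a rough sketch; the instance name and key are made up):
$ # API/instance metadata: stored in the nova database, shown by the API
$ nova meta my_instance set mykey=myvalue
$ nova show my_instance | grep metadata
$ # EC2-style metadata: served at 169.254.169.254, queried from inside the guest
$ curl http://169.254.169.254/latest/meta-data/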
Hello Everyone,
We are in the unfortunate position of not knowing how good our OpenStack API
XML support is. All of our integration tests use json. Many of the compute
extensions don't even have XML deserializers. We also assume that there are bugs we
don't even know about due to underuse. We need
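As a rough illustration of the kind of request that is currently under-tested (a sketch only; endpoint, tenant id and token are placeholders):
$ curl -s http://NOVA_HOST:8774/v2/TENANT_ID/servers \
    -H "X-Auth-Token: $TOKEN" \
    -H "Accept: application/xml"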
Hello Everyone,
The nova meeting today was quite eventful. Minutes are included below. A couple
of important updates:
* we are putting nova-core review days on hold.
* nova-core is going to pay extra attention to reviewing so that we can get
everything merged by Tuesday
* after F-3 nova-core
If your eth0 (public interface) can access the Internet, then with ip_forward
enabled your instance should be able to as well...
On Wed, Aug 8, 2012 at 12:05 PM, Leander Bessa Beernaert <
leande...@gmail.com> wrote:
> So i have set up a small proof of concept, one controller node and two
> compute nodes. Since the
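A minimal sketch of what that usually involves on the network host (nova-network normally adds the NAT rule itself; the interface name and fixed range here are assumptions):
$ sysctl net.ipv4.ip_forward
$ sudo sysctl -w net.ipv4.ip_forward=1
$ # if NAT is missing, something like this masquerades the fixed range out eth0
$ sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o eth0 -j MASQUERADE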
Hi everyone,
I did a little test today.
$ nova meta my_instance set hostname=new_hostname
I didn't get any errors.
Nothing in the instance (curl
http://169.254.169.254/latest/meta-data/hostname) even after reboot and
nothing in the instance db record.
Here is the nova-api trace; it seems to be OK:
On Thu, Aug 9, 2012 at 2:33 PM, Thomas Gall wrote:
> FAILED boot_from_volume
> FAILED euca
> FAILED floating_ips
> FAILED volumes
>
These all spawn instanc
On Thu, Aug 9, 2012 at 1:57 PM, Joe Gordon wrote:
> Did you turn off rate limiting in devstack? I have hit that in the past
>
> On Aug 9, 2012 12:36 PM, "Thomas Gall" wrote:
>>
>> Hi!
>>
>> I'm working on some code for scheduler_hints to be used during migration
>> and was running devstack/exerc
Did you turn off rate limiting in devstack? I have hit that in the past
On Aug 9, 2012 12:36 PM, "Thomas Gall" wrote:
> Hi!
>
> I'm working on some code for scheduler_hints to be used during migration
> and was running devstack/exercise.sh on the latest greatest git. Without
> any of my changes
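If it helps, this is roughly how I recall turning it off (a sketch, assuming a localrc-based devstack of that era):
# in devstack's localrc
API_RATE_LIMIT=False
# then re-run ./stack.sh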
Hi all,
I am having a terrible time getting my instances to work after a hard
reboot. I am using the most up-to-date version of all OpenStack
packages provided by Ubuntu. I have included a list of packages, with
version, at the end of this email.
After a hard reboot "nova list" reports that th
>
> I'm not talking about all configuration options. I'm talking about this
> single configuration option. Existing installations that did not
> specify rpc_backend because they did not need to will break if the
> default is changed.
>
They would only break in Grizzly, following a one-release d
On 08/09/2012 02:39 PM, Eric Windisch wrote:
>
>>
>> I also don't understand why having a default that doesn't work for
>> anyone makes any sense.
>>
> I would hope that a localhost only installation with a username and password
> of 'guest' include a very small number of anyones. Who is re
This time I was the sole participant and here's what I had to say :)
Our current progress is as follows:
The team has almost finished the core code and is about to start
working on the F5 driver.
Most of the external API is implemented, and it's planned to polish
the driver/core interaction logic
Hi!
I'm working on some code for scheduler_hints to be used during migration
and was running devstack/exercise.sh on the latest greatest git. Without
any of my changes installed I see on a 12.04 install the following failures:
=
>
> I also don't understand why having a default that doesn't work for
> anyone makes any sense.
>
I would hope that a localhost-only installation with a username and password of
'guest' includes a very small number of anyones. Who is really using a
completely stock, default configuration s
Indeed, uploading large files with the Horizon webserver as an intermediate
relay is a nasty business which we want to discourage. We are looking at ways
to send files directly from the Horizon client-side UI to swift/glance for
large file upload in the future.
All the best,
- Gabriel
> -
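In the meantime, uploading directly with the glance CLI avoids the Horizon relay entirely (a sketch; the exact syntax depends on your glance client version, and the image name/format here are examples):
$ glance add name="precise-server" is_public=true \
    container_format=bare disk_format=qcow2 < precise-server.img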
On 08/09/2012 02:35 AM, Sajith Kariyawasam wrote:
> Hi all,
>
> After playing around with the OpenStack Management Console, Horizon, I
> realize that the image upload functionality is not provided there.
>
> Is there any special reason for that? Is it because there are no Rest
> services available a
On Aug 9, 2012, at 8:13 AM, Robert Kukura wrote:
> We should immediately change devstack to stop running the quantum agents
> as root, so at least the root_helper=sudo functionality is really being
> used.
>
> It looks like devstack does configure nova with the new
> rootwrap/rootwrap_config an
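For reference, the two configuration styles being discussed look roughly like this (a sketch; the paths are the usual defaults and may differ in your packaging):
# older style, e.g. in nova.conf or quantum.conf:
root_helper = sudo nova-rootwrap /etc/nova/rootwrap.conf
# newer style (nova on Folsom):
rootwrap_config = /etc/nova/rootwrap.conf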
CC'ing openstack-dev since that is a more appropriate list for this
discussion.
On 08/08/2012 04:35 PM, Eric Windisch wrote:
> I believe that the RPC backend should no longer have any default.
I disagree, and my reason is fairly straightforward. Changing the
default will break existing configura
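For operators who want to be insulated from any future default change, pinning the value explicitly is cheap (a sketch; the module path shown is the Essex-era one and moved under nova.openstack.common in Folsom):
# nova.conf
rpc_backend = nova.rpc.impl_kombu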
Joseph,
Yes, sorry about the typo; I was retyping these lines in the email.
Anyway, the problem seems to be that the SimpleScheduler I'm using is not
running the filters at all, so I now use the filter_scheduler (I'm on Essex, by
the way) and the filter does its job and filters out the host sin
REMINDER: Another meeting will take place today, in ~2 hours from now
(19:00 UTC), on #openstack-meeting (use http://webchat.freenode.net/
to join).
On Mon, Aug 6, 2012 at 3:09 PM, Eugene Kirpichov wrote:
> Hi,
>
> Below are the meeting notes from the IRC meeting about LBaaS which
> took place on
You're getting a '300 Multiple Choices' response as you haven't indicated a
version in your request. You can parse the body as json (indicated in the
headers) to see what API versions are available to you at any given time. If
you don't care about taking that extra step, just use a URI with 'v1
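Concretely, something like this should skip the 300 (a sketch; host, port and token are placeholders):
$ curl -s http://GLANCE_HOST:9292/v1/images/detail -H "X-Auth-Token: $TOKEN"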
Boris-Michel,
One thing that I noticed was a typo, 'schedulre', which can cause a malfunction. I am
not sure what version you are using, but recently the extra_spec checking was
moved to compute_capabilities_filter.py (ComputeCapabilitiesFilter). As far as
I understand, the current ComputeFilter does
On 08/09/2012 10:32 AM, Thierry Carrez wrote:
> j...@redhat.com wrote:
>>>* Switch to rootwrap_config and deprecate root_helper
>>>This would fully align quantum-rootwrap with nova-rootwrap. However I'm
>>>not sure it's reasonable to deprecate root_helper=sudo in Folsom, given
>>>ho
Hi guys,
I currently have a working cloud with a working GPU passthrough setup
(CentOS/libvirt/Xen 4.1.2); now I need to work on adding this new resource to
OpenStack.
Here is the plan:
1. Create a new instance type (g1.small) with an extra spec like
"xpu_arch = radeon"
2. Modif
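Step 1 could look roughly like this (a sketch, assuming a novaclient with flavor-create and the nova-manage instance_type commands; the id, RAM, disk and vCPU values are placeholders):
$ # create the new instance type (flavor)
$ nova flavor-create g1.small 100 2048 20 1
$ # attach the extra spec the scheduler filter will match on
$ nova-manage instance_type set_key --name=g1.small --key=xpu_arch --value=radeon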
All, sorry for top posting, but this is a fine example of why we
really need bloggers to help with the documentation. These fragmented
instructions are difficult to rely on - we need maintainable,
process-oriented treatment of content.
Mirantis peeps, you have added in your blog entries to the doc
Also the metadata host should be set to 127.0.0.1 for multihost=True..
Thanks,
Kiall
On Aug 9, 2012 2:58 PM, "谢丹铭" wrote:
> Hi, list,
> I'm setting up openstack on Ubuntu 12.04 LTS with FlatDHCP mode network
> configuration. . Everything is OK in control node, but in compute node, I
> always me
With multihost=True, every nova-compute node also needs nova-api-metadata
installed..
That should sort it out...
Thanks,
Kiall
On Aug 9, 2012 2:58 PM, "谢丹铭" wrote:
> Hi, list,
> I'm setting up openstack on Ubuntu 12.04 LTS with FlatDHCP mode network
> configuration. . Everything is OK in contro
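Putting those two points together, each compute node would need roughly this (a sketch; the package name is the Ubuntu 12.04 one and the flag spelling follows the Essex docs):
$ sudo apt-get install nova-api-metadata
# and in /etc/nova/nova.conf on each compute node:
multi_host=True
metadata_host=127.0.0.1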
>From: Thierry Carrez
>Date: Thu, 09 Aug 2012 16:32:23 +0200
>
>j...@redhat.com wrote:
>>>* Switch to rootwrap_config and deprecate root_helper
>>>This would fully align quantum-rootwrap with nova-rootwrap. However I'm
>>>not sure it's reasonable to depre
j...@redhat.com wrote:
>>* Switch to rootwrap_config and deprecate root_helper
>>This would fully align quantum-rootwrap with nova-rootwrap. However I'm
>>not sure it's reasonable to deprecate root_helper=sudo in Folsom, given
>>how little tested quantum-rootwrap seems to be on Fols
On Aug 9, 2012, at 7:13 AM, "Daniel P. Berrange" wrote:
>
> With non-live migration, the migration operation is guaranteed to
> complete. With live migration, you can get into a non-convergence
> scenario where the guest is dirtying data faster than it can be
> migrated. With the way Nova curre
On Thu, Aug 09, 2012 at 07:10:17AM -0700, Vishvananda Ishaya wrote:
>
> On Aug 9, 2012, at 1:03 AM, Blair Bethwaite wrote:
>
> > Hi Daniel,
> >
> > Thanks for following this up!
> >
> > On 8 August 2012 19:53, Daniel P. Berrange wrote:
> >> not tune this downtime setting, I don't see how you'
On Aug 9, 2012, at 1:03 AM, Blair Bethwaite wrote:
> Hi Daniel,
>
> Thanks for following this up!
>
> On 8 August 2012 19:53, Daniel P. Berrange wrote:
>> not tune this downtime setting, I don't see how you'd see 4 mins
>> downtime unless it was not truly live migration, or there was
>
> Ye
Hi, list,
I'm setting up OpenStack on Ubuntu 12.04 LTS with FlatDHCP mode network
configuration. Everything is OK on the control node, but on the compute node I
always run into the following issue.
So I can't SSH to the VM instance.
2012-08-09 13:31:28,290 - util.py[WARNING]:
'http://169.254.169.254
Hi Adam,
The blueprint as revised to address Joe's comments looks good to me - nice
work. I especially like how the middleware is intended to cache the revocation
list for a configurable amount of time - it mirrors how token caching already
works.
Cheers,
Maru
On 2012-08-07, at 10:09 AM, A
>From: Thierry Carrez
>Date: Thu, 09 Aug 2012 10:34:17 +0200
>
>j...@redhat.com wrote:
>>> From: Dan Wendlandt
>>> If someone (Bob?) has the immediate cycles to make rootwrap work in
>>> Folsom with low to medium
>>> risk of disruption, I'd be open to exploring that, ev
Hi
We probably had the same problem before.
Could you check the libvirt log, make sure your host's domain name resolves, and
check the vncserver_listen setting in nova.conf?
(vncserver_listen=0.0.0.0)
Thanks!
Suzuki
On Thu, Aug 9, 2012 at 6:27 PM, Leander Bessa Beernaert
wrote:
> Hello,
>
> I'm no expert on the subject, but i think
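The nova.conf bits being referred to would look something like this on each compute node (a sketch; the proxyclient address is that node's own IP, shown as a placeholder):
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=COMPUTE_NODE_IP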
On 08/09/2012 07:55 AM, 王鹏 wrote:
> Hi everyone,
> when I test live migration using NFS, this is my setup according to
> http://docs.openstack.org/essex/openstack-compute/admin/content/configuring-live-migrations.html
>
> 1. add this line /var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash) in
On 08/09/2012 12:59 PM, Scott Moser wrote:
On Aug 8, 2012, at 8:20 PM, "Simon Walter" wrote:
On 08/09/2012 06:45 AM, Jay Pipes wrote:
I guess I'll have to build a VM from scratch, as I was relying on the ssh key
to be able to ssh into the VM, which apparently is supplied by the meta-data
service.
On 08/09/2012 01:11 PM, tacy lee wrote:
try adding metadata_host to nova.conf
The thing is, the iptables rules have 169.254.169.254 NATed correctly. So
the address is correct. It's just that the VMs cannot access it.
--
simonsmicrophone.com
Hello,
I'm no expert on the subject, but I think you should just use "mount -t nfs
172.18.32.7:/ /var/lib/nova/instances" instead of "mount -t nfs 172.18.32.7:
/var/lib/nova/instances /var/lib/nova/instances". Also, from the stack trace
it seems your libvirtd is not running.
On Thu, Aug 9, 2012 at
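In other words, with fsid=0 in /etc/exports the export becomes the NFSv4 pseudo-root, so you mount the server's "/" (a sketch; the IP is the one from the earlier mail):
$ sudo mount -t nfs4 172.18.32.7:/ /var/lib/nova/instances
$ df -h /var/lib/nova/instances
$ # and confirm libvirtd is actually up
$ sudo service libvirt-bin status    # Ubuntu (use "service libvirtd status" on RHEL)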
Hi
Is the libvirt service running? Please check the status of the libvirt service.
Best Regards,
Qiu Zhigang
From: openstack-bounces+qiuzhigang=fronware@lists.launchpad.net
[mailto:openstack-bounces+qiuzhigang=fronware@lists.launchpad.net] On
Behalf Of 王鹏
Sent: Thursday, August 09, 2012
Hi all,
I'm trying to invoke the OpenStack Glance REST APIs using a Java client, to
get image details, etc. (Ultimately I need to upload an image.)
When I invoke the http://:/images/detail GET request in Java
code, I'm getting *HTTP 300* as the response code.
4 < 300
4 < Date: Thu, 09 Aug 2012 08:56:29
On 9 Aug 2012, at 10:44, Alessandro Tagliapietra wrote:
>
> On 9 Aug 2012, at 10:19, Kiall Mac Innes wrote:
>
>> That sounds like a kernel, kvm or dnsmasq issue, rather than OpenStack
>> itself. I think Quantal is on the 3.5 kernel, and I assume Open
On 9 Aug 2012, at 10:19, Kiall Mac Innes wrote:
> That sounds like a kernel, kvm or dnsmasq issue, rather than OpenStack
> itself. I think Quantal is on the 3.5 kernel, and I assume OpenStack is
> working there..
>
> Maybe give it's dnsmasq package a go first as it's proba
j...@redhat.com wrote:
>> From: Dan Wendlandt
>> If someone (Bob?) has the immediate cycles to make rootwrap work in Folsom
>> with low to medium
>> risk of disruption, I'd be open to exploring that, even if it meant
>> inconsistent usage in quantum
>> vs. nova/cinder.
>
> Hi Dan. I've be
That sounds like a kernel, KVM or dnsmasq issue, rather than OpenStack
itself. I think Quantal is on the 3.5 kernel, and I assume OpenStack is
working there..
Maybe give its dnsmasq package a go first, as it's probably the easiest
thing to check...
Ubuntu also has some 3.5 packages for Precise,
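A quick sketch of how to confirm where the DHCP exchange stops (the bridge name assumes the default nova-network br100):
$ # on the compute/network host: watch the DHCP traffic on the bridge
$ sudo tcpdump -i br100 -n port 67 or port 68
$ # and check which dnsmasq is actually serving the network
$ ps aux | grep dnsmasq
$ dnsmasq --version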
Daniel,
Thanks for providing this insight, most useful. I'm interpreting this
as: block migration can be used in non-critical applications, mileage
will vary, thorough testing in the particular environment is
recommended. An alternative implementation will come, but the higher
level feature (live-
Hi Daniel,
Thanks for following this up!
On 8 August 2012 19:53, Daniel P. Berrange wrote:
> not tune this downtime setting, I don't see how you'd see 4 mins
> downtime unless it was not truly live migration, or there was
Yes, quite right. It turns out Nova is not passing/setting libvirt's
VIR
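Until Nova exposes that knob, it can be poked at directly through libvirt while a migration is in flight (a sketch; the value is in milliseconds and the domain name is a placeholder):
$ virsh migrate-setmaxdowntime instance-0000002a 500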
Hello guys,
I've just installed kernel 3.4 from the Ubuntu kernel PPA archive, and after this
upgrade the VMs aren't able to get a DHCP address, but with tcpdump I see the
request and offer on the network.
Has someone else experienced this? I've also tried with 3.3, same story. Rolling
back to 3.2 and every
Hi everyone,
When I test live migration using NFS, this is my setup according to
http://docs.openstack.org/essex/openstack-compute/admin/content/configuring-live-migrations.html
1. add this line /var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash) in
/etc/exports
2. mount -t nfs 172.18.32.7:/var/
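For completeness, the shared-storage side of that guide boils down to roughly this (a sketch; IPs and paths are the ones above, libvirt settings per the admin guide):
# on the NFS server, /etc/exports:
/var/lib/nova/instances *(rw,sync,fsid=0,no_root_squash)
$ sudo exportfs -ra
# on each compute node:
$ sudo mount -t nfs4 172.18.32.7:/ /var/lib/nova/instances
# and in /etc/libvirt/libvirtd.conf (libvirtd started with --listen):
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"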