Hello everyone,
Due to various release-critical issues detected in Keystone icehouse
RC1, a new release candidate was just generated. You can find a list of
the 8 bugs fixed and a link to the RC2 source tarball at:
https://launchpad.net/keystone/icehouse/icehouse-rc2
Unless new release-critical
Hi all,
Has anyone created a binary repo for Ubuntu (I hear Ubuntu is the default for
OpenStack dev)?
That way I can test it as binaries and in an integrated way.
F
On Apr 8, 2014 4:03 PM, "Thierry Carrez" wrote:
> Hello everyone,
>
> Due to various release-critical issues detected in Keystone icehouse
> RC1, a new re
Dear All,
I have configured Ceilometer and it is running, but in Horizon, under Resource
Usage > Stats, nothing is shown.
How can I display the stats?
The other tabs, like Global Disk Usage, Global Network Traffic Usage and
Global Network Usage, don't show any numbers either.
Thanks
--
Regards,
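A first check worth doing before digging into Horizon itself (a minimal sketch, assuming the ceilometer CLI client is installed and admin credentials are sourced):

ceilometer meter-list
ceilometer statistics -m cpu_util -p 600

If meter-list comes back empty, the agents or the collector are not storing samples and the Horizon panels have nothing to draw; if meters are present, the problem is more likely on the dashboard side.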
Hi All,
I am considering storage nodes for my small production deployment. I have
rejected Ceph as I can't get confidence that performance will be OK without
SSD drives.
I need to be able to boot from block storage, do live migrations, and create
snapshots which could be used to create new instances.
Hi Ian,
Unless you're going to use SSD drives in your cinder-volume nodes, why do
you expect to get any better performance out of this setup, versus a ceph
cluster? If anything, performance would be worse since at least ceph has
the ability to stripe access across many nodes, and therefore many m
On 4/8/2014 7:05 AM, Darren Birkett wrote:
Hi Ian,
Unless you're going to use SSD drives in your cinder-volume nodes, why
do you expect to get any better performance out of this setup, versus
a ceph cluster? If anything, performance would be worse since at
least ceph has the ability to strip
Hi Ignacio,
Yes, we are running GlusterFS for the image store and Cinder.
GlusterFS behind Swift is not really recommended; the Swift
service lets you scale without a clustered or shared file system.
We are using glusterfs, instead of swif
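For reference, this is roughly what such a setup looks like on the Cinder side (a sketch only, assuming the GlusterFS volume driver of this release; the host and volume names are placeholders):

# /etc/cinder/cinder.conf
volume_driver = cinder.volume.drivers.glusterfs.GlusterfsDriver
glusterfs_shares_config = /etc/cinder/glusterfs_shares

# /etc/cinder/glusterfs_shares  (placeholder host/volume)
gluster1.example.com:/cinder-volumes

For Glance the gluster volume is simply mounted under the directory that filesystem_store_datadir points at (by default /var/lib/glance/images/).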
Note: I am following the Swift all-in-one installation on a single node.
I want to change the number of replicas of my objects from three to two. I
tried but got the following error:
http://paste.openstack.org/show/75331/
Please reply.
Thanks
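Without seeing the paste it is hard to say what failed, but for reference, on a reasonably recent Swift the replica count is changed on the ring builders and then rebalanced, roughly like this (a sketch, assuming the default SAIO builder files under /etc/swift):

cd /etc/swift
swift-ring-builder object.builder set_replicas 2
swift-ring-builder container.builder set_replicas 2
swift-ring-builder account.builder set_replicas 2
swift-ring-builder object.builder rebalance
swift-ring-builder container.builder rebalance
swift-ring-builder account.builder rebalance

and then copying the regenerated *.ring.gz files to every node (on an all-in-one they are already in place).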
Hi Qin,
This may be related to missing plugins in dom0 - check that you have the commit
from https://review.openstack.org/#/c/81849/
Otherwise, please upload the relevant portions of the logs to
paste.openstack.org
Thanks,
Bob
From: Qin Jia [jiaqin1...@gmail
Hi Darren,
Thanks for your reply. I was thinking of running the LVM-based Cinder
volumes across at least two nodes. The idea is to at least stay with
unified storage across a couple of servers, rather than a SAN for block
storage and object storage, or another SAN unit.
Regards
Ian
OpenStack Security Advisory: 2014-010
CVE: CVE-2014-0157
Date: April 08, 2014
Title: XSS in Horizon orchestration dashboard
Reporter: Cristian Fiorentino (Intel)
Products: Horizon
Versions: 2013.2 versions up to 2013.2.3
Description:
Cristian Fiorentino from Intel reported a vulnerability in Horizo
Hi Y'all,
I have what I hope is a very simple question. Under OpenStack with
CentOS 6.5, using KVM as my hypervisor, when I launch an instance on a
compute node, what directory does the image get copied into? I suspect
it's /var/lib/libvirt/qemu but I'm not sure; I'm not sure what I'm
lookin
On 8 April 2014 19:13, Erich Weiler wrote:
> Hi Y'all,
>
> I have what I hope is a very simple question. Under Openstack with CentOS
> 6.5 using KVM as my hypervisor, when I launch an instance on a compute node,
> what directory does the image get copied into? I suspect it's
> /var/lib/libvirt/q
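For what it is worth, a quick way to check this on the compute node itself (these are standard nova.conf option names; the instance UUID below is a placeholder):

grep -E 'state_path|instances_path' /etc/nova/nova.conf
ls /var/lib/nova/instances/

Unless instances_path has been overridden, the per-instance disks land under $state_path/instances, i.e. /var/lib/nova/instances/<instance-uuid>/, while /var/lib/libvirt/qemu only holds libvirt's own bookkeeping for the domain.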
Hello everyone,
Due to various release-critical issues detected in Horizon and
Ceilometer icehouse RC1 (including a security fix in Horizon), new
release candidates were just generated. You can find lists of the bugs
fixed and links to the RC2 source tarballs at:
https://launchpad.net/horizon/ice
Hey Y'all,
Thanks a bunch for all the help so far by the way - I'm nearly done with
standing up my POC OpenStack system.
This is Icehouse RDO on Red Hat, BTW.
I'm having this odd thing happening with cinder. In horizon, I can see
the cinder volume storage and even create a volume, no problem.
What parts of openstack (if any) are vulnerable to heartbleed?
--
Aryeh M. Friedman, Lead Developer, http://www.PetiteCloud.org
Arg, I think I see the issue... I created my cinder endpoints to point
to the wrong internal hosts. I fixed the endpoints, but the volume
still sits in "Attaching" state, so I can't touch it.
Should I manually tweak mysql to fix this? I see:
mysql> use cinder;
mysql> select * from volumes
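If it really is just a stuck state left over from the failed attach, the usual minimal tweak looks something like this (a sketch only; back up the database first, and note the volume id below is a placeholder):

mysql> use cinder;
mysql> -- <volume-id> is a placeholder; only do this if nothing is actually attached
mysql> UPDATE volumes SET status='available', attach_status='detached' WHERE id='<volume-id>';

The cinder client's reset-state command can achieve much the same thing without touching MySQL directly.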
Tried that, and Havana was complaining about versions.
On Tue, Apr 8, 2014 at 7:14 PM, Clint Dilks wrote:
> If your systems have a vulnerable OpenSSL implementation then on a running
> instance
>
> lsof | grep ssl is a good place to start.
>
> Or you could try updating OpenSSL and then using
If your systems have a vulnerable OpenSSL implementation then on a running
instance
lsof | grep ssl is a good place to start.
Or you could try updating OpenSSL and then using lsof | grep -i ssl | grep
-i del
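To spell that out a little (the commands are generic; the package names assume a distro shipping OpenSSL 1.0.1, which is the vulnerable series):

openssl version -a
dpkg -l openssl libssl1.0.0     # Debian/Ubuntu
rpm -q openssl                  # RHEL/CentOS
# after upgrading, anything still mapping the old (deleted) library needs a restart:
lsof -n | grep -i libssl | grep -i del

OpenSSL 1.0.1 through 1.0.1f is affected; the 0.9.8 and 1.0.0 branches are not.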
On Wed, Apr 9, 2014 at 10:49 AM, Aryeh Friedman wrote:
> What parts of openstack (if a
I just found it:
cinder force-delete
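For the archives, that would be something like the following (the volume id is a placeholder, and both calls generally need admin credentials):

cinder force-delete <volume-id>
# or, to keep the volume but clear the stuck state:
cinder reset-state --state available <volume-id>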
> Arg, I think I see the issue... I created my cinder endpoints to point to
> the wrong internal hosts. I fixed the endpoints, but the volume still sits
> in "Attaching" state, so I can't touch it.
>
> Should I manually tweak mysql to fix this? I see:
>
You need to check your logs!
> Whoops, I got too eager and tried to change the value to 'error'. Now I
> can't seem to do anything with nova or cinder...
>
> # nova list
> ERROR: (HTTP 500)
>
> # cinder list
> ERROR: (HTTP 500)
>
> I switched the value back to 'attaching' but I'm stuck wit
Whoops, I got too eager and tried to change the value to 'error'. Now I
can't seem to do anything with nova or cinder...
# nova list
ERROR: (HTTP 500)
# cinder list
ERROR: (HTTP 500)
I switched the value back to 'attaching' but I'm stuck with no command
line cinder or nova (I get the e
Hi,
I want to run devstack with just Glance (and Keystone because Glance
requires Keystone I guess). My localrc is pasted below. However, when
stack.sh completes, I don't see glance running. I looked at the
catalog returned by keystone and the only service reported by keystone
is the "identity ser
Yeah, I looked around all the logs and can't see anything out of the ordinary.
I must have messed up some timestamp on the MySQL tables when I modified it,
which is silently bothering some of the other services. I'll probably end up
wiping the cinder and nova databases and re-initializing them
Use ENABLED_SERVICES in your local.conf file, something like:
ENABLED_SERVICES=g-api,g-reg,key
That might work.
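For example, a minimal local.conf for a Glance-plus-Keystone run might look like the following (a sketch only; the passwords are placeholders, and mysql/rabbit are included because Glance still needs a database and message queue backend):

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
SERVICE_TOKEN=servicetoken
ENABLED_SERVICES=key,g-api,g-reg,mysql,rabbit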
On Tue, Apr 8, 2014 at 5:49 PM, Shrinand Javadekar
wrote:
> Hi,
>
> I want to run devstack with just Glance (and Keystone because Glance
> requires Keystone I guess). My localrc is pa
Hi all,
I'm going through the install in the docs with Ubuntu 14.04 and Icehouse. I'm
up to section 7, and am about to create my networks but this happens:
# neutron net-create ext-net --shared --router:external=True
Could not find Service or Region in Service Catalog.
verbose:
http://pastebin.c
My mistake. I created the service with type "neutron", not "network".
root@controller:~# keystone service-list
+----+------+------+-------------+
| id | name | type | description |
+----+------+------+-------------+
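For anyone hitting the same thing, the Icehouse install guide's commands for this step are roughly the following (the controller hostname and port are the guide's defaults; adjust to your environment):

keystone service-create --name neutron --type network --description "OpenStack Networking"
keystone endpoint-create \
  --service-id $(keystone service-list | awk '/ network / {print $2}') \
  --publicurl http://controller:9696 \
  --adminurl http://controller:9696 \
  --internalurl http://controller:9696

The old service with type "neutron" can be removed with keystone service-delete <id> before re-running the neutron commands.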
I get a "500 Internal Error" message and stack.sh fails :(.
On Tue, Apr 8, 2014 at 5:26 PM, John Griffith
wrote:
> Use ENABLED_SERVICES in your local.conf file, something like:
>
> ENABLED_SERVICES=g-api,g-reg,key
>
> Might work
>
>
>
> On Tue, Apr 8, 2014 at 5:49 PM, Shrinand Javadekar
> w
Hey,
I want to know why we need to maintain multiple copies of accounts,
containers, and objects separately. Why don't we simply maintain multiple
replicas of the accounts, so that the containers and objects inside those
accounts would be replicated as well?
Please reply.
Thanks a lot.