[ceph-users] rbd snap rollback does not show progress since cuttlefish

2013-05-30 Thread Stefan Priebe - Profihost AG
Hi,

Under bobtail, rbd snap rollback showed the progress going on. Since
cuttlefish I see no progress anymore.

Listing the rbd help, it only shows me a no-progress option, but it seems
no progress is the default, so I need a progress option...

Greets,
Stefan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.63 released

2013-05-30 Thread 大椿
Hi, Sage.


I didn't find the 0.63 update for Debian/Ubuntu in 
http://ceph.com/docs/master/install/debian. 
The package version is still 0.61.2.


Thanks!


-- Original --
From:  "Sage Weil";
Date:  Wed, May 29, 2013 12:05 PM
To:  "ceph-devel"; 
"ceph-users"; 

Subject:  [ceph-users] v0.63 released



Another sprint, and v0.63 is here.  This release features librbd 
improvements, mon fixes, osd robustness, and packaging fixes.

Notable features in this release include:

 * librbd: parallelize delete, rollback, flatten, copy, resize
 * librbd: ability to read from local replicas
 * osd: resurrect partially deleted PGs
 * osd: prioritize recovery for degraded PGs
 * osd: fix internal heartbeat timeouts when scrubbing very large objects
 * osd: close narrow journal race
 * rgw: fix usage log scanning for large, untrimmed logs
 * rgw: fix locking issue, user operation mask,
 * initscript: fix osd crush weight calculation when using -a
 * initscript: fix enumeration of local daemons
 * mon: several fixes to paxos, sync
 * mon: new --extract-monmap to aid disaster recovery
 * mon: fix leveldb compression, trimming
 * add 'config get' admin socket command
 * rados: clonedata command for cli
 * debian: stop daemons on uninstall; fix dependencies
 * debian wheezy: fix udev rules
 * many many small fixes from coverity scan

You can get v0.63 from the usual places:

 * Git at git://github.com/ceph/ceph.git
 * Tarball at http://ceph.com/download/ceph-0.63.tar.gz
 * For Debian/Ubuntu packages, see http://ceph.com/docs/master/install/debian
 * For RPMs, see http://ceph.com/docs/master/install/rpm

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] v0.63 released

2013-05-30 Thread Wido den Hollander

On 05/30/2013 03:26 PM, 大椿 wrote:


Hi, Sage.

I didn't find the 0.63 update for Debian/Ubuntu in
http://ceph.com/docs/master/install/debian.
The package version is still 0.61.2 .



Hi,

The packages are there already:

http://ceph.com/debian-testing/pool/main/c/ceph/
http://eu.ceph.com/debian-testing/pool/main/c/ceph/

You should use the debian-testing repository for this.
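
For example, a minimal apt setup pointing at that repository could look like
this (the release codename "precise" is only an illustration, use your own):

  echo "deb http://ceph.com/debian-testing/ precise main" | sudo tee /etc/apt/sources.list.d/ceph.list
  sudo apt-get update && sudo apt-get install ceph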

Wido


Thanks!

-- Original --
*From: * "Sage Weil";
*Date: * Wed, May 29, 2013 12:05 PM
*To: * "ceph-devel";
"ceph-users";
*Subject: * [ceph-users] v0.63 released

Another sprint, and v0.63 is here.  This release features librbd
improvements, mon fixes, osd robustness, and packaging fixes.

Notable features in this release include:

  * librbd: parallelize delete, rollback, flatten, copy, resize
  * librbd: ability to read from local replicas
  * osd: resurrect partially deleted PGs
  * osd: prioritize recovery for degraded PGs
  * osd: fix internal heartbeat timeouts when scrubbing very large objects
  * osd: close narrow journal race
  * rgw: fix usage log scanning for large, untrimmed logs
  * rgw: fix locking issue, user operation mask,
  * initscript: fix osd crush weight calculation when using -a
  * initscript: fix enumeration of local daemons
  * mon: several fixes to paxos, sync
  * mon: new --extract-monmap to aid disaster recovery
  * mon: fix leveldb compression, trimming
  * add 'config get' admin socket command
  * rados: clonedata command for cli
  * debian: stop daemons on uninstall; fix dependencies
  * debian wheezy: fix udev rules
  * many many small fixes from coverity scan

You can get v0.63 from the usual places:

  * Git at git://github.com/ceph/ceph.git
  * Tarball at http://ceph.com/download/ceph-0.63.tar.gz
  * For Debian/Ubuntu packages, see
http://ceph.com/docs/master/install/debian
  * For RPMs, see http://ceph.com/docs/master/install/rpm

___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com




--
Wido den Hollander
42on B.V.

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] ceph-deploy

2013-05-30 Thread John Wilkins
Dewan,

I encountered this too. I just did umount and reran the command and it
worked for me. I probably need to add a troubleshooting section for
ceph-deploy.
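
For reference, the steps were roughly the following (host and device names
are taken from the example below, adjust to your setup):

  umount /dev/sda3
  ceph-deploy osd create ceph0:/dev/sda3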

On Fri, May 24, 2013 at 4:00 PM, John Wilkins  wrote:
> ceph-deploy does have an ability to push the client keyrings. I
> haven't encountered this as a problem. However, I have created a
> monitor and not seen it return a keyring. In other words, it failed
> but didn't give me a warning message. So I just re-executed creating
> the monitor. The directory from where you execute "ceph-deploy mon
> create" should have a ceph.client.admin.keyring too. If it doesn't,
> you might have had a problem creating the monitor. I don't believe you
> have to push the ceph.client.admin.keyring to all the nodes. So it
> shouldn't be barking back unless you failed to create the monitor, or
> if gatherkeys failed.
>
> On Thu, May 23, 2013 at 9:09 PM, Dewan Shamsul Alam
>  wrote:
>> I just found that
>>
>> #ceph-deploy gatherkeys ceph0 ceph1 ceph2
>>
>> works only if I have bobtail. cuttlefish can't find ceph.client.admin.keyring
>>
>> and then when I try this on bobtail, it says,
>>
>> root@cephdeploy:~/12.04# ceph-deploy osd create ceph0:/dev/sda3
>> ceph1:/dev/sda3 ceph2:/dev/sda3
>> ceph-disk: Error: Device is mounted: /dev/sda3
>> Traceback (most recent call last):
>>   File "/usr/bin/ceph-deploy", line 22, in 
>> main()
>>   File "/usr/lib/pymodules/python2.7/ceph_deploy/cli.py", line 112, in main
>> return args.func(args)
>>   File "/usr/lib/pymodules/python2.7/ceph_deploy/osd.py", line 293, in osd
>> prepare(args, cfg, activate_prepared_disk=True)
>>   File "/usr/lib/pymodules/python2.7/ceph_deploy/osd.py", line 177, in
>> prepare
>> dmcrypt_dir=args.dmcrypt_key_dir,
>>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/proxy.py", line 255,
>> in 
>> (conn.operator(type_, self, args, kwargs))
>>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/connection.py", line
>> 66, in operator
>> return self.send_request(type_, (object, args, kwargs))
>>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/baseconnection.py",
>> line 323, in send_request
>> return self.__handle(m)
>>   File "/usr/lib/python2.7/dist-packages/pushy/protocol/baseconnection.py",
>> line 639, in __handle
>> raise e
>> pushy.protocol.proxy.ExceptionProxy: Command '['ceph-disk-prepare', '--',
>> '/dev/sda3']' returned non-zero exit status 1
>> root@cephdeploy:~/12.04#
>>
>>
>>
>>
>> On Thu, May 23, 2013 at 10:49 PM, Dewan Shamsul Alam
>>  wrote:
>>>
>>> Hi,
>>>
>>> I tried ceph-deploy all day. Found that it has a python-setuptools as
>>> dependency. I knew about python-pushy. But is there any other dependency
>>> that I'm missing?
>>>
>>> The problem I'm getting are as follows:
>>>
>>> #ceph-deploy gatherkeys ceph0 ceph1 ceph2
>>> returns the following error,
>>> Unable to find /etc/ceph/ceph.client.admin.keyring on ['ceph0', 'ceph1',
>>> 'ceph2']
>>>
>>> Once I got passed this, I don't know why it works sometimes. I have been
>>> following the exact steps as mentioned in the blog.
>>>
>>> Then when I try to do
>>>
>>> ceph-deploy osd create ceph0:/dev/sda3 ceph1:/dev/sda3 ceph2:/dev/sda3
>>>
>>> It gets stuck.
>>>
>>> I'm using Ubuntu 13.04 for ceph-deploy and 12.04 for ceph nodes. I just
>>> need to get the cuttlefish working and willing to change the OS if it is
>>> required. Please help. :)
>>>
>>> Best Regards,
>>> Dewan Shamsul Alam
>>
>>
>>
>> ___
>> ceph-users mailing list
>> ceph-users@lists.ceph.com
>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>
>
>
>
> --
> John Wilkins
> Senior Technical Writer
> Inktank
> john.wilk...@inktank.com
> (415) 425-9599
> http://inktank.com



-- 
John Wilkins
Senior Technical Writer
Inktank
john.wilk...@inktank.com
(415) 425-9599
http://inktank.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

I am trying to use ceph with openstack (grizzly), I have a multi host setup.
I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.

But I cannot boot from volumes.
It doesn't matter if I use horizon or the cli, the vm goes to the error state.

From the nova-compute.log I get this.

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
.
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
ENETUNREACH

What is nova trying to reach? How could I debug that further?

Full Log included.

-martin

Log:

ceph --version
ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)

root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
ii  ceph-common  0.61-1precise
   common utilities to mount and interact with a ceph storage
cluster
ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
   python bindings to the OpenStack Volume API


nova-compute.log

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
in _prep_block_device
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return
self._setup_block_device_mapping(context, instance, bdms)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
_setup_block_device_mapping
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
self.volume_api.get(context, bdm['volume_id'])
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]
self._reraise_translated_volume_exception(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] item =
cinderclient(context).volumes.get(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180,
in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._get("/volumes/%s"
% volume_id, "volume")
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141, in _get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] resp, body =
self.api.client.get(url)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 185, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._cs_request(url,
'GET', **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 153, in
_cs_request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 123, in
request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/requests/api.py", line 44, in request
20

Re: [ceph-users] RADOS Gateway Configuration

2013-05-30 Thread John Wilkins
Do you have your admin keyring in the /etc/ceph directory of your
radosgw host?  That sounds like step 1 here:
http://ceph.com/docs/master/start/quick-rgw/#generate-a-keyring-and-key

I think I encountered an issue there myself, and did a sudo chmod 644
on the keyring.
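
In other words, something along these lines on the radosgw host (the path
assumes the default admin keyring name):

  ls -l /etc/ceph/ceph.client.admin.keyring
  sudo chmod 644 /etc/ceph/ceph.client.admin.keyring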

On Wed, May 29, 2013 at 1:17 PM, Daniel Curran  wrote:
> Unfortunately it seems like I messed up yesterday. I didn't have the
> client.radosgw.gateway section in my ceph.conf. I don't get the apache
> errors now but I still don't have access since the secret_key is still not
> being created or at least not showing up. I can try to auth but it just says
> 'Auth GET failed: http://192.168.1.100:80/auth/ 403 Forbidden' with
> everything I send it.
>
> This is what I have at the moment in the files you requested.
> ceph.conf:
> --
> --
> [global]
> fsid = 1ec4438a-3f59-4cfd-86b8-a89607401d81
> mon_initial_members = ceph0
> mon_host = 192.168.1.100
> auth_supported = cephx
> osd_journal_size = 1024
> filestore_xattr_use_omap = true
>
> [client.radosgw.gateway]
> host = ceph0
> keyring = /etc/ceph/keyring.radosgw.gateway
> rgw socket path = /tmp/radosgw.sock
> log file = /var/log/ceph/radosgw.log
> rgw dns name = ceph0
> 
>
> rgw.conf:
> 
> FastCgiExternalServer /var/www/s3gw.fcgi -socket /tmp/radosgw.sock
>
>
> <VirtualHost *:80>
> ServerName ceph0
> ServerAdmin admin@localhost
> DocumentRoot /var/www
>
>
> RewriteEngine On
> RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*)
> /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING}
> [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
>
> <IfModule mod_fastcgi.c>
> <Directory /var/www>
> Options +ExecCGI
> AllowOverride All
> SetHandler fastcgi-script
> Order allow,deny
> Allow from all
> AuthBasicAuthoritative Off
> </Directory>
> </IfModule>
>
> AllowEncodedSlashes On
> ErrorLog /var/log/apache2/error.log
> CustomLog /var/log/apache2/access.log combined
> ServerSignature Off
>
> </VirtualHost>
> 
>
> s3gw.fcgi
> 
>
> #!/bin/sh
> exec /usr/bin/radosgw -c /etc/ceph/ceph.conf -n client.radosgw.gateway
> 
>
> Here's what the /var/log/ceph/radosgw.log says when it denies me:
> 2013-05-29 16:00:29.118234 7f5f60cf6700  2 req 11:0.93:swift-auth:GET
> /auth/::getting op
> 2013-05-29 16:00:29.118237 7f5f60cf6700  2 req 11:0.96:swift-auth:GET
> /auth/:swift_auth_get:authorizing
> 2013-05-29 16:00:29.118239 7f5f60cf6700  2 req 11:0.98:swift-auth:GET
> /auth/:swift_auth_get:reading permissions
> 2013-05-29 16:00:29.118243 7f5f60cf6700  2 req 11:0.000102:swift-auth:GET
> /auth/:swift_auth_get:reading the cors attr
> 2013-05-29 16:00:29.118246 7f5f60cf6700 10 Going to read cors from attrs
> 2013-05-29 16:00:29.118248 7f5f60cf6700  2 req 11:0.000107:swift-auth:GET
> /auth/:swift_auth_get:verifying op permissions
> 2013-05-29 16:00:29.118250 7f5f60cf6700  2 req 11:0.000109:swift-auth:GET
> /auth/:swift_auth_get:verifying op params
> 2013-05-29 16:00:29.118252 7f5f60cf6700  2 req 11:0.000111:swift-auth:GET
> /auth/:swift_auth_get:executing
> 2013-05-29 16:00:29.118273 7f5f60cf6700 20 get_obj_state:
> rctx=0x7f5efc007630 obj=.users.swift:johndoe:swift state=0x7f5efc00c378
> s->prefetch_data=0
> 2013-05-29 16:00:29.118284 7f5f60cf6700 10 moving .users.swift+johndoe:swift
> to cache LRU end
> 2013-05-29 16:00:29.118286 7f5f60cf6700 10 cache get:
> name=.users.swift+johndoe:swift : hit
> 2013-05-29 16:00:29.118292 7f5f60cf6700 20 get_obj_state: s->obj_tag was set
> empty
> 2013-05-29 16:00:29.118298 7f5f60cf6700 10 moving .users.swift+johndoe:swift
> to cache LRU end
> 2013-05-29 16:00:29.118300 7f5f60cf6700 10 cache get:
> name=.users.swift+johndoe:swift : hit
> 2013-05-29 16:00:29.118316 7f5f60cf6700 20 get_obj_state:
> rctx=0x7f5efc0071f0 obj=.users.uid:johndoe state=0x7f5efc00c9f8
> s->prefetch_data=0
> 2013-05-29 16:00:29.118321 7f5f60cf6700 10 moving .users.uid+johndoe to
> cache LRU end
> 2013-05-29 16:00:29.118323 7f5f60cf6700 10 cache get:
> name=.users.uid+johndoe : hit
> 2013-05-29 16:00:29.118326 7f5f60cf6700 20 get_obj_state: s->obj_tag was set
> empty
> 2013-05-29 16:00:29.118330 7f5f60cf6700 10 moving .users.uid+johndoe to
> cache LRU end
> 2013-05-29 16:00:29.118332 7f5f60cf6700 10 cache get:
> name=.users.uid+johndoe : hit
> 2013-05-29 16:00:29.118358 7f5f60cf6700  0 NOTICE:
> RGW_SWIFT_Auth_Get::execute(): bad swift key
> 2
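
The NOTICE about a bad swift key at the end of that log looks like the telling
part. As a rough check (user and subuser names are taken from the log above),
you can verify the user and regenerate the swift secret with:

  radosgw-admin user info --uid=johndoe
  radosgw-admin key create --subuser=johndoe:swift --key-type=swift --gen-secret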

Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Gregory Farnum
On Wed, May 29, 2013 at 11:20 PM, Giuseppe 'Gippa' Paterno'
 wrote:
> Hi Greg,
>> Oh, not the OSD stuff, just the CephFS stuff that goes on top. Look at
>> http://www.mail-archive.com/ceph-users@lists.ceph.com/msg00029.html
>> Although if you were re-creating pools and things, I think that would
>> explain the crash you're seeing.
>> -Greg
>>
> I was thinking about that... the problem is that with cuttlefish
> (0.61.2) it seems that the command is no longer there.
> Has that moved?

Nope, still there.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] rbd snap rollback does not show progress since cuttlefish

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote:

Hi,

Under bobtail, rbd snap rollback showed the progress going on. Since
cuttlefish I see no progress anymore.

Listing the rbd help, it only shows me a no-progress option, but it seems
no progress is the default, so I need a progress option...


rbd progress reporting in cuttlefish moved from stdout to stderr,
perhaps that's why you're not seeing it?

It shows up when I try on the cuttlefish branch at least.
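
For example (pool, image and snapshot names are only placeholders):

  rbd snap rollback rbd/myimage@mysnap > /dev/null
  # the progress bar still appears, since it is written to stderr rather than stdout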

Josh
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 07:37 AM, Martin Mailand wrote:

Hi Josh,

I am trying to use ceph with openstack (grizzly), I have a multi host setup.
I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/.
Glance is working without a problem.
With cinder I can create and delete volumes without a problem.

But I cannot boot from volumes.
It doesn't matter if I use horizon or the cli, the vm goes to the error state.

 From the nova-compute.log I get this.

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
.
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
ENETUNREACH

What is nova trying to reach? How could I debug that further?


It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).
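
A quick way to check that from the compute host (standard keystone CLI
commands; the actual IDs and URLs will differ in your setup) is:

  keystone service-list | grep volume
  keystone endpoint-list

and then verify that the cinder endpoint URL is actually reachable from that host.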

Josh


Full Log included.

-martin

Log:

ceph --version
ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)

root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
ii  ceph-common  0.61-1precise
common utilities to mount and interact with a ceph storage
cluster
ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
python bindings to the OpenStack Volume API


nova-compute.log

2013-05-30 16:08:45.224 ERROR nova.compute.manager
[req-5679ddfe-79e3-4adb-b220-915f4a38b532
8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
[instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
device setup
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
in _prep_block_device
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return
self._setup_block_device_mapping(context, instance, bdms)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
_setup_block_device_mapping
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
self.volume_api.get(context, bdm['volume_id'])
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]
self._reraise_translated_volume_exception(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] item =
cinderclient(context).volumes.get(volume_id)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180,
in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._get("/volumes/%s"
% volume_id, "volume")
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/base.py", line 141, in _get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] resp, body =
self.api.client.get(url)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 185, in get
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] return self._cs_request(url,
'GET', **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 153, in
_cs_request
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc] **kwargs)
2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
059589a3-72fc-444d-b1f0-ab1567c725fc]   File
"/usr/lib/python2.7/dist-packages/cinderclient/client.py", line 123, 

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread w sun
I would suggest on the nova compute host (particularly if you have separate
compute nodes):

(1) make sure "rbd ls -l -p <pool>" works and /etc/ceph/ceph.conf is readable by user nova!!
(2) make sure you can start up a regular ephemeral instance on the same nova node (ie, nova-compute is working correctly)
(3) if you are using cephx, make sure the libvirt secret is set up correctly per the instructions at ceph.com
(4) look at /var/lib/nova/instance/x/libvirt.xml and check that the disk file is pointing to the rbd volume
(5) If all of the above look fine and you still can't perform nova boot with the volume, you can try one last thing: manually start up a kvm session with the volume, similar to below. At least this will tell you if your qemu has the correct rbd enablement.

  /usr/bin/kvm -m 2048 -drive file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0 -boot c -net nic -net user -nographic -vnc :1000 -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1

--weiguo
> Date: Thu, 30 May 2013 16:37:40 +0200
> From: mar...@tuxadero.com
> To: ceph-us...@ceph.com
> CC: openst...@lists.launchpad.net
> Subject: [ceph-users] Openstack with Ceph, boot from volume
> 
> Hi Josh,
> 
> I am trying to use ceph with openstack (grizzly), I have a multi host setup.
> I followed the instruction http://ceph.com/docs/master/rbd/rbd-openstack/.
> Glance is working without a problem.
> With cinder I can create and delete volumes without a problem.
> 
> But I cannot boot from volumes.
> It doesn't matter if I use horizon or the cli, the vm goes to the error state.
> 
> From the nova-compute.log I get this.
> 
> 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> device setup
> .
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> ENETUNREACH
> 
> What is nova trying to reach? How could I debug that further?
> 
> Full Log included.
> 
> -martin
> 
> Log:
> 
> ceph --version
> ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)
> 
> root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
> ii  ceph-common  0.61-1precise
>common utilities to mount and interact with a ceph storage
> cluster
> ii  python-cinderclient  1:1.0.3-0ubuntu1~cloud0
>python bindings to the OpenStack Volume API
> 
> 
> nova-compute.log
> 
> 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> device setup
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc]   File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
> in _prep_block_device
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc] return
> self._setup_block_device_mapping(context, instance, bdms)
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc]   File
> "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
> _setup_block_device_mapping
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
> self.volume_api.get(context, bdm['volume_id'])
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc]   File
> "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193, in get
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc]
> self._reraise_translated_volume_exception(volume_id)
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc]   File
> "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190, in get
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc] item =
> cinderclient(context).volumes.get(volume_id)
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc]   File
> "/usr/lib/python2.7/dist-packages/cinderclient/v1/volumes.py", line 180,
> in get
> 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> 059589a3-72fc-444d-b1f0-ab1567c725fc] return self._get("/volumes/%s"
> % volume_id, "volume")
> 

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Weiguo,

my answers are inline.

-martin

On 30.05.2013 21:20, w sun wrote:
> I would suggest on nova compute host (particularly if you have
> separate compute nodes),
>
> (1) make sure "rbd ls -l -p <pool>" works and /etc/ceph/ceph.conf is
> readable by user nova!!
yes to both
> (2) make sure you can start up a regular ephemeral instance on the
> same nova node (ie, nova-compute is working correctly)
an ephemeral instance is working
> (3) if you are using cephx, make sure libvirt secret is set up correct
> per instruction at ceph.com
I do not use cephx
> (4) look at /var/lib/nova/instance/x/libvirt.xml and the
> disk file is pointing to the rbd volume
For an ephemeral instance the folder is created, for a volume based
instance the folder is not created.

> (5) If all above look fine and you still couldn't perform nova boot
> with the volume, you can try one last thing: manually start up a kvm
> session with the volume similar to below. At least this will tell you
> if your qemu has the correct rbd enablement.
>
>   /usr/bin/kvm -m 2048 -drive
> file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0
> -boot c -net nic -net user -nographic  -vnc :1000 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>
If I start kvm by hand it is working.

> --weiguo
>
> > Date: Thu, 30 May 2013 16:37:40 +0200
> > From: mar...@tuxadero.com
> > To: ceph-us...@ceph.com
> > CC: openst...@lists.launchpad.net
> > Subject: [ceph-users] Openstack with Ceph, boot from volume
> >
> > Hi Josh,
> >
> > I am trying to use ceph with openstack (grizzly), I have a multi
> host setup.
> > I followed the instruction
> http://ceph.com/docs/master/rbd/rbd-openstack/.
> > Glance is working without a problem.
> > With cinder I can create and delete volumes without a problem.
> >
> > But I cannot boot from volumes.
> > It doesn't matter if I use horizon or the cli, the vm goes to the error
> state.
> >
> > From the nova-compute.log I get this.
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > .
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> > ENETUNREACH
> >
> > What is nova trying to reach? How could I debug that further?
> >
> > Full Log included.
> >
> > -martin
> >
> > Log:
> >
> > ceph --version
> > ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)
> >
> > root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
> > ii ceph-common 0.61-1precise
> > common utilities to mount and interact with a ceph storage
> > cluster
> > ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0
> > python bindings to the OpenStack Volume API
> >
> >
> > nova-compute.log
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] Traceback (most recent call last):
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 1071,
> > in _prep_block_device
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] return
> > self._setup_block_device_mapping(context, instance, bdms)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/compute/manager.py", line 721, in
> > _setup_block_device_mapping
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] volume =
> > self.volume_api.get(context, bdm['volume_id'])
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 193,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc]
> > self._reraise_translated_volume_exception(volume_id)
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] File
> > "/usr/lib/python2.7/dist-packages/nova/volume/cinder.py", line 190,
> in get
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] item =
> > cinderclien

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi,
telnet is working. But how does nova know where to find the cinder-api?
I have no cinder conf on the compute node, just nova.

telnet 192.168.192.2 8776
Trying 192.168.192.2...
Connected to 192.168.192.2.
Escape character is '^]'.
get

Error response


Error response
Error code 400.
Message: Bad request syntax ('get').
Error code explanation: 400 = Bad request syntax or unsupported method.

Connection closed by foreign host.

On 30.05.2013 21:59, w sun wrote:
> Josh has suggested the cinder API service blocking issue in another
> reply. If you do have the cinder API service running on a different
> node, you want to make sure you can talk to it as Nova-compute need that.
> 
> You can try telnet from the nova-compute node to the cinder service port
> to rule out ip-table issue.
> 
> --weiguo
> 
> 
> Date: Thu, 30 May 2013 21:47:31 +0200
> From: mar...@tuxadero.com
> To: ws...@hotmail.com
> CC: ceph-us...@ceph.com; openst...@lists.launchpad.net
> Subject: Re: [ceph-users] Openstack with Ceph, boot from volume
> 
> Hi Weiguo,
> 
> my answers are inline.
> 
> -martin
> 
> On 30.05.2013 21:20, w sun wrote:
> 
> I would suggest on nova compute host (particularly if you have
> separate compute nodes),
> 
> (1) make sure "rbd ls -l -p <pool>" works and /etc/ceph/ceph.conf is
> readable by user nova!!
> 
> yes to both
> 
> (2) make sure you can start up a regular ephemeral instance on the
> same nova node (ie, nova-compute is working correctly)
> 
> an ephemeral instance is working
> 
> (3) if you are using cephx, make sure libvirt secret is set up
> correct per instruction at ceph.com
> 
> I do not use cephx
> 
> (4) look at /var/lib/nova/instance/x/libvirt.xml and the
> disk file is pointing to the rbd volume
> 
> For an ephemeral instance the folder is created, for a volume based
> instance the folder is not created.
> 
> (5) If all above look fine and you still couldn't perform nova boot
> with the volume, you can try one last thing: manually start up a kvm
> session with the volume similar to below. At least this will tell
> you if your qemu has the correct rbd enablement.
> 
>   /usr/bin/kvm -m 2048 -drive
> 
> file=rbd:ceph-openstack-volumes/volume-3f964f79-febe-4251-b2ba-ac9423af419f,index=0,if=none,id=drive-virtio-disk0
> -boot c -net nic -net user -nographic  -vnc :1000 -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> 
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> 
> If I start kvm by hand it is working.
> 
> --weiguo
> 
> > Date: Thu, 30 May 2013 16:37:40 +0200
> > From: mar...@tuxadero.com 
> > To: ceph-us...@ceph.com 
> > CC: openst...@lists.launchpad.net
> 
> > Subject: [ceph-users] Openstack with Ceph, boot from volume
> >
> > Hi Josh,
> >
> > I am trying to use ceph with openstack (grizzly), I have a multi
> host setup.
> > I followed the instruction
> http://ceph.com/docs/master/rbd/rbd-openstack/.
> > Glance is working without a problem.
> > With cinder I can create and delete volumes without a problem.
> >
> > But I cannot boot from volumes.
> > It doesn't matter if I use horizon or the cli, the vm goes to the
> error state.
> >
> > From the nova-compute.log I get this.
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > .
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 059589a3-72fc-444d-b1f0-ab1567c725fc] ConnectionError: [Errno 101]
> > ENETUNREACH
> >
> > What is nova trying to reach? How could I debug that further?
> >
> > Full Log included.
> >
> > -martin
> >
> > Log:
> >
> > ceph --version
> > ceph version 0.61 (237f3f1e8d8c3b85666529860285dcdffdeda4c5)
> >
> > root@compute1:~# dpkg -l|grep -e ceph-common -e cinder
> > ii ceph-common 0.61-1precise
> > common utilities to mount and interact with a ceph storage
> > cluster
> > ii python-cinderclient 1:1.0.3-0ubuntu1~cloud0
> > python bindings to the OpenStack Volume API
> >
> >
> > nova-compute.log
> >
> > 2013-05-30 16:08:45.224 ERROR nova.compute.manager
> > [req-5679ddfe-79e3-4adb-b220-915f4a38b532
> > 8f9630095810427d865bc90c5ea04d35 43b2bbbf5daf4badb15d67d87ed2f3dc]
> > [instance: 059589a3-72fc-444d-b1f0-ab1567c725fc] Instance failed block
> > device setup
> > 2013-05-30 16:08:45.224 19614 TRACE nova.compute.manager [instance:
> > 

Re: [ceph-users] rbd snap rollback does not show progress since cuttlefish

2013-05-30 Thread Stefan Priebe

On 30.05.2013 21:10, Josh Durgin wrote:

On 05/30/2013 02:09 AM, Stefan Priebe - Profihost AG wrote:

Hi,

Under bobtail, rbd snap rollback showed the progress going on. Since
cuttlefish I see no progress anymore.

Listing the rbd help, it only shows me a no-progress option, but it seems
no progress is the default, so I need a progress option...


rbd progress reporting in cuttlefish moved from stdout to stderr,
perhaps that's why you're not seeing it?

It shows up when I try on the cuttlefish branch at least.


Ah, that's it, thanks. I didn't find this info in the release notes.

Stefan
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:
> It's trying to talk to the cinder api, and failing to connect at all.
> Perhaps there's a firewall preventing that on the compute host, or
> it's trying to use the wrong endpoint for cinder (check the keystone
> service and endpoint tables for the volume service).

the keystone endpoint looks like this:

| dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

I found the problem: nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, and this ip is not reachable from
the management network.
I thought the internalurl is the one which is used for the internal
communication of the openstack components, and the publicurl is the ip
for "customers" of the cluster?
Am I wrong here?

-martin

On 30.05.2013 22:22, Martin Mailand wrote:
> Hi Josh,
> 
> On 30.05.2013 21:17, Josh Durgin wrote:
>> It's trying to talk to the cinder api, and failing to connect at all.
>> Perhaps there's a firewall preventing that on the compute host, or
>> it's trying to use the wrong endpoint for cinder (check the keystone
>> service and endpoint tables for the volume service).
> 
> the keystone endpoint looks like this:
> 
> | dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
> http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
> http://192.168.192.2:8776/v1/$(tenant_id)s |
> http://192.168.192.2:8776/v1/$(tenant_id)s |
> 5ad684c5a0154c13b54283b01744181b
> 
> where 192.168.192.2 is the IP from the controller node.
> 
> And from the compute node a telnet 192.168.192.2 8776 is working.
> 
> -martin
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 01:50 PM, Martin Mailand wrote:

Hi Josh,

I found the problem, nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, this ip is not reachable from
the management network.
I thought the internalurl is the one, which is used for the internal
communication of the openstack components and the publicurl is the ip
for "customer" of the cluster?
Am I wrong here?


I'd expect that too, but it's determined in nova by the 
cinder_catalog_info option, which defaults to volume:cinder:publicURL.


You can also override it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.
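
For example, a minimal nova.conf sketch (the IP is taken from your endpoint
listing and is only illustrative):

  [DEFAULT]
  # make nova use the internal endpoint from the keystone catalog
  cinder_catalog_info = volume:cinder:internalURL
  # or pin the endpoint explicitly:
  # cinder_endpoint_template = http://192.168.192.2:8776/v1/$(tenant_id)s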

Josh


-martin

On 30.05.2013 22:22, Martin Mailand wrote:

Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:

It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).


the keystone endpoint looks like this:

| dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com



___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] increasing stability

2013-05-30 Thread Sage Weil
Hi everyone,

I wanted to mention just a few things on this thread.

The first is obvious: we are extremely concerned about stability.  
However, Ceph is a big project with a wide range of use cases, and it is 
difficult to cover them all.  For that reason, Inktank is (at least for 
the moment) focusing in specific areas (rados, librbd, rgw) and certain 
platforms.  We have a number of large production customers and 
non-customers now who have stable environments, and we are committed to a 
solid experience for them.

We are investing heavily in testing infrastructure and automation tools to 
maximize our ability to test with limited resources.  Our lab is currently 
around 14 racks, with most of the focus now on utilizing those resources 
as effectively as possible.  The teuthology testing framework continues to 
evolve and our test suites continue to grow.  Unfortunately, this has been 
an area where it has been difficult for others to contribute.  We are 
eager to talk to anyone who is interested in helping.

Overall, the cuttlefish release has gone much more smoothly than bobtail 
did.  That said, there are a few lingering problems, particularly with the 
monitor's use of leveldb.  We're waiting on some QA on the pending fixes 
now before we push out a 0.61.3 that I believe will resolve the remaining 
problems for most users.

However, as overall adoption of ceph increases, we move past the critical 
bugs and start seeing a larger number of "long-tail" issues that affect 
smaller sets of users.  Overall this is a good thing, even if it means a 
harder job for the engineers to triage and track down obscure problems. 
The mailing list is going to attract a high number of bug reports because 
that's what it is for.  Although we believe the quality is getting better 
based on our internal testing and our commercial interactions, we'd like 
to turn this into a more metrics driven analysis.  We welcome any ideas on 
how to do this, as the obvious ideas (like counting bugs) tend to scale 
with the number of users, and we have no way of telling how many users 
there really are.

Thanks-
sage



On Thu, 30 May 2013, Youd, Douglas wrote:

> Completely agree as well. I'm very keen to see widespread adoption of Ceph, 
> but battling against the major vendors is a massive challenge not helped by 
> even a small amount of instability.
> 
> Douglas Youd
> Direct  +61 8 9488 9571
> 
> 
> -Original Message-
> From: ceph-users-boun...@lists.ceph.com 
> [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Chen, Xiaoxi
> Sent: Thursday, 30 May 2013 1:40 AM
> To: Wolfgang Hennerbichler
> Cc: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] increasing stability
> 
> Cannot agree more. When I try to promote ceph to internal stakeholders, they 
> always complain about the stability of ceph, especially when they are evaluating 
> ceph with high enough pressure; ceph cannot stay healthy during the test.
> 
> 
> 
> Sent from my iPhone
> 
> On 2013-5-29, 19:13, "Wolfgang Hennerbichler" wrote:
> 
> > Hi,
> >
> > as most on the list here I also see the future of storage in ceph. I
> > think it is a great system and overall design, and sage with the rest
> > of inktank and the community are doing their best to make ceph great.
> > Being a part-time developer myself I know how awesome new features
> > are, and how great it is to implement them.
> > On the other hand I think cuttlefish is in a state where I am not
> > feeling easy when saying: ceph is stable, go ahead, use it. I do
> > happen to have to do a lot of presentations on ceph recently, and I'm
> > doing a lot of lobbying for it.
> > I also realize that it's not easy to develop a distributed system like
> > ceph, and I know it needs time and a community to test. I'm just
> > wondering if it might be better for the devs to keep their focus right
> > now on fixing nasty bugs (even more as they do already), and make the
> > mon's and osd's super-stable.
> > I have no insight on the development cycles, so chances are you're
> > doing this right now already. I'm just saying: I'd love to see ceph
> > take over the storage world, and for that we need it in super stable states.
> >
> > Then ceph can succeed big time.
> >
> > Sorry for the noise, but I really wanted to get rid of this :)
> > Wolfgang ___
> > ceph-users mailing list
> > ceph-users@lists.ceph.com
> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 
> 
> 
> ZettaServe Disclaimer: This email and any files transmitted with it are 
> confidential and intended solely for the use of the individual or entity to 
> whom they are addressed. If you are not the named addressee you should not 
> disseminate, distribute or copy this e-mail. Please notify the sender 
> immediately if you h

Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

that's working.

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?

2. I have in the glance-api.conf show_image_direct_url=True, but the
volumes are not clones of the original which are in the images pool.

That's what I did.

root@controller:~/vm_images# !1228
glance add name="Precise Server" is_public=true container_format=ovf
disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
root@controller:~/vm_images# rbd -p images -l ls
NAMESIZE PARENT FMT PROT LOCK
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8   2048M  2
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap  2048M  2 yes
root@controller:~/vm_images# cinder create --image-id
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
+-+--+
|   Property  |Value |
+-+--+
| attachments |  []  |
|  availability_zone  | nova |
|   bootable  |false |
|  created_at |  2013-05-30T21:08:16.506094  |
| display_description | None |
| display_name|   volcli1|
|  id | 34838911-6613-4140-93e0-e1565054a2d3 |
|   image_id  | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
|   metadata  |  {}  |
| size|  10  |
| snapshot_id | None |
| source_volid| None |
|status   |   creating   |
| volume_type | None |
+-+--+
root@controller:~/vm_images# cinder list
+--+-+--+--+-+--+-+
|  ID  |Status   | Display Name |
Size | Volume Type | Bootable | Attached to |
+--+-+--+--+-+--+-+
| 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1|
10  | None|  false   | |
+--+-+--+--+-+--+-+
root@controller:~/vm_images# rbd -p volumes -l ls
NAME   SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3  10240M  2

root@controller:~/vm_images#

-martin

On 30.05.2013 22:56, Josh Durgin wrote:
> On 05/30/2013 01:50 PM, Martin Mailand wrote:
>> Hi Josh,
>>
>> I found the problem, nova-compute tries to connect to the publicurl
>> (xxx.xxx.240.10) of the keystone endpoints, this ip is not reachable from
>> the management network.
>> I thought the internalurl is the one, which is used for the internal
>> communication of the openstack components and the publicurl is the ip
>> for "customer" of the cluster?
>> Am I wrong here?
> 
> I'd expect that too, but it's determined in nova by the
> cinder_catalog_info option, which defaults to volume:cinder:publicURL.
> 
> You can also override it explicitly with
> cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
> in your nova.conf.
> 
> Josh
> 
>> -martin
>>
>> On 30.05.2013 22:22, Martin Mailand wrote:
>>> Hi Josh,
>>>
>>> On 30.05.2013 21:17, Josh Durgin wrote:
 It's trying to talk to the cinder api, and failing to connect at all.
 Perhaps there's a firewall preventing that on the compute host, or
 it's trying to use the wrong endpoint for cinder (check the keystone
 service and endpoint tables for the volume service).
>>>
>>> the keystone endpoint looks like this:
>>>
>>> | dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
>>> http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
>>> http://192.168.192.2:8776/v1/$(tenant_id)s |
>>> http://192.168.192.2:8776/v1/$(tenant_id)s |
>>> 5ad684c5a0154c13b54283b01744181b
>>>
>>> where 192.168.192.2 is the IP from the controller node.
>>>
>>> And from the compute node a telnet 192.168.192.2 8776 is working.
>>>
>>> -martin
>>> ___
>>> ceph-users mailing list
>>> ceph-users@lists.ceph.com
>>> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>>
> 
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:18 PM, Martin Mailand wrote:

Hi Josh,

that's working.

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?


cinder.volume.drivers.rbd.RBDDriver


2. I have in the glance-api.conf show_image_direct_url=True, but the
volumes are not clones of the original which are in the images pool.


Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.
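
Putting both answers together, the relevant cinder.conf lines would look
roughly like:

  [DEFAULT]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  glance_api_version = 2

with show_image_direct_url = True kept in glance-api.conf so cinder can make
COW clones of the images.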


That's what I did.

root@controller:~/vm_images# !1228
glance add name="Precise Server" is_public=true container_format=ovf
disk_format=raw < ./precise-server-cloudimg-amd64-disk1.raw
Added new image with ID: 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8
root@controller:~/vm_images# rbd -p images -l ls
NAMESIZE PARENT FMT PROT LOCK
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8   2048M  2
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8@snap  2048M  2 yes
root@controller:~/vm_images# cinder create --image-id
6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 --display-name volcli1 10
+-+--+
|   Property  |Value |
+-+--+
| attachments |  []  |
|  availability_zone  | nova |
|   bootable  |false |
|  created_at |  2013-05-30T21:08:16.506094  |
| display_description | None |
| display_name|   volcli1|
|  id | 34838911-6613-4140-93e0-e1565054a2d3 |
|   image_id  | 6fbf4dfd-adce-470b-87fe-9b6ddb3993c8 |
|   metadata  |  {}  |
| size|  10  |
| snapshot_id | None |
| source_volid| None |
|status   |   creating   |
| volume_type | None |
+-+--+
root@controller:~/vm_images# cinder list
+--+-+--+--+-+--+-+
|  ID  |Status   | Display Name |
Size | Volume Type | Bootable | Attached to |
+--+-+--+--+-+--+-+
| 34838911-6613-4140-93e0-e1565054a2d3 | downloading |   volcli1|
10  | None|  false   | |
+--+-+--+--+-+--+-+
root@controller:~/vm_images# rbd -p volumes -l ls
NAME   SIZE PARENT FMT PROT LOCK
volume-34838911-6613-4140-93e0-e1565054a2d3  10240M  2

root@controller:~/vm_images#

-martin

On 30.05.2013 22:56, Josh Durgin wrote:

On 05/30/2013 01:50 PM, Martin Mailand wrote:

Hi Josh,

I found the problem, nova-compute tries to connect to the publicurl
(xxx.xxx.240.10) of the keystone endpoints, this ip is not reachable from
the management network.
I thought the internalurl is the one, which is used for the internal
communication of the openstack components and the publicurl is the ip
for "customer" of the cluster?
Am I wrong here?


I'd expect that too, but it's determined in nova by the
cinder_catalog_info option, which defaults to volume:cinder:publicURL.

You can also override it explicitly with
cinder_endpoint_template=http://192.168.192.2:8776/v1/$(tenant_id)s
in your nova.conf.

Josh


-martin

On 30.05.2013 22:22, Martin Mailand wrote:

Hi Josh,

On 30.05.2013 21:17, Josh Durgin wrote:

It's trying to talk to the cinder api, and failing to connect at all.
Perhaps there's a firewall preventing that on the compute host, or
it's trying to use the wrong endpoint for cinder (check the keystone
service and endpoint tables for the volume service).


the keystone endpoint looks like this:

| dd21ed74a9ac4744b2ea498609f0a86e | RegionOne |
http://xxx.xxx.240.10:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
http://192.168.192.2:8776/v1/$(tenant_id)s |
5ad684c5a0154c13b54283b01744181b

where 192.168.192.2 is the IP from the controller node.

And from the compute node a telnet 192.168.192.2 8776 is working.

-martin
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com





___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Martin Mailand
Hi Josh,

now everything is working, many thanks for your help, great work.

-martin

On 30.05.2013 23:24, Josh Durgin wrote:
>> I have two more things.
>> 1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
>> update your configuration to the new path. What is the new path?
> 
> cinder.volume.drivers.rbd.RBDDriver
> 
>> 2. I have in the glance-api.conf show_image_direct_url=True, but the
>> volumes are not clones of the original which are in the images pool.
> 
> Set glance_api_version=2 in cinder.conf. The default was changed in
> Grizzly.
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Openstack with Ceph, boot from volume

2013-05-30 Thread Josh Durgin

On 05/30/2013 02:50 PM, Martin Mailand wrote:

Hi Josh,

now everything is working, many thanks for your help, great work.


Great! I added those settings to 
http://ceph.com/docs/master/rbd/rbd-openstack/ so it's easier to figure 
out in the future.



-martin

On 30.05.2013 23:24, Josh Durgin wrote:

I have two more things.
1. The volume_driver=cinder.volume.driver.RBDDriver is deprecated,
update your configuration to the new path. What is the new path?


cinder.volume.drivers.rbd.RBDDriver


2. I have in the glance-api.conf show_image_direct_url=True, but the
volumes are not clones of the original which are in the images pool.


Set glance_api_version=2 in cinder.conf. The default was changed in
Grizzly.


___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] cephfs file system snapshots?

2013-05-30 Thread K Richard Pixley
Hi.  I've been following ceph from a distance for several years now.  
Kudos on the documentation improvements and quick start stuff since the 
last time I looked.


However, I'm a little confused about something.

I've been making heavy use of btrfs file system snapshots for several 
years now and couldn't live without them at this point.  I was looking 
forward to a distributed system that would support them as well but I 
don't see anything in the documentation currently about them.


I see the stuff about rbd snapshots, including the ability to create COW 
clones, which is interesting (especially given the possibility of 
placing a btrfs file system on top of rbd), but it's not really the 
system of my dreams.


What's the current status/story about file system snapshots for cephfs?

--rich
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] cephfs file system snapshots?

2013-05-30 Thread Gregory Farnum
On Thu, May 30, 2013 at 3:10 PM, K Richard Pixley  wrote:
> Hi.  I've been following ceph from a distance for several years now.  Kudos
> on the documentation improvements and quick start stuff since the last time
> I looked.
>
> However, I'm a little confused about something.
>
> I've been making heavy use of btrfs file system snapshots for several years
> now and couldn't live without them at this point.  I was looking forward to
> a distributed system that would support them as well but I don't see
> anything in the documentation currently about them.
>
> I see the stuff about rbd snapshots, including the ability to create COW
> clones, which is interesting, (especially given the possibility of placing a
> btrfs file system on top of rbd), but it's not really the system of my
> dreams.
>
> What's the current status/story about file system snapshots for cephfs?

Filesystem snapshots exist and you can experiment with them on CephFS
(there's a hidden ".snaps" folder; you can create or remove snapshots
by creating directories in that folder; navigate up and down it, etc).
However, CephFS isn't completely documented as we don't officially
recommend it for production yet, and snapshots further reduce the
stability of the filesystem. Our first officially supported release of
CephFS will probably not include snapshots under that banner. :(
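
As a rough sketch from a client mount (the mount point and names are only
examples; the hidden directory is usually spelled ".snap"):

  cd /mnt/cephfs/some-directory
  mkdir .snap/before-cleanup     # take a snapshot of this directory
  ls .snap                       # list existing snapshots
  rmdir .snap/before-cleanup     # remove the snapshot again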
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Fwd: Fwd: some problem install ceph-deploy(china)

2013-05-30 Thread Dan Mick

I think you meant this to go to ceph-users:

 Original Message 
Subject:Fwd: some problem install ceph-deploy(china)
Date:   Fri, 31 May 2013 02:54:56 +0800
From:   张鹏 
To: dan.m...@inktank.com



hello everyone
I come from China. When I install ceph-deploy on my server I run into a
problem: when I run ./bootstrap I cannot get argparse, and I find the URL it
uses is an http address.

When I put the same address into my web browser with https:// in front of it,
it can download it, but when I use http in my browser it cannot download it.
Maybe only China has this problem, so I want to know how I can change the http
address to an https address. Thank you.

[inline image 1]
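
One possible workaround (assuming the bootstrap script is only failing to
fetch argparse over plain http) is to pre-install argparse from an https
index and then re-run the script:

  pip install --index-url https://pypi.python.org/simple/ argparse
  ./bootstrap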




-- 
Dan Mick, Filesystem Engineering
Inktank Storage, Inc.   http://inktank.com
Ceph docs: http://ceph.com/docs
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Giuseppe 'Gippa' Paterno'
Hi Greg,
just for your own information, ceph mds newfs has disappeared from the
help screen of the "ceph" command and it was a nightmare to understand
the syntax (that has changed)... luckily sources were there :)

For the "flight log":
ceph mds newfs <metadata pool id> <data pool id> --yes-i-really-mean-it

Cheers,
Gippa
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] MDS dying on cuttlefish

2013-05-30 Thread Giuseppe 'Gippa' Paterno'
... and BTW, I know it's my fault that I haven't done the mds newfs, but
I think it would be better to print an error rather than going into a core
dump with a trace.
Just my eur 0.02 :)
Cheers,
Giuseppe
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com