Hi all,

Due to a problem with ceph-deploy I am currently using

deb http://gitbuilder.ceph.com/ceph-deb-raring-x86_64-basic/ref/wip-4924/ raring main
(ceph version 0.67.2-16-gd41cf86 (d41cf866ee028ef7b821a5c37b991e85cbf3637f))

Now the initialization of the cluster works like a charm and ceph health
reports OK; only mapping the created RBD images fails.

---------------------
root@ping[/1]:~ # ceph osd pool delete kvm-pool kvm-pool --yes-i-really-really-mean-it
pool 'kvm-pool' deleted
root@ping[/1]:~ # ceph osd lspools
0 data,1 metadata,2 rbd,
root@ping[/1]:~ # 
root@ping[/1]:~ # ceph osd pool create kvm-pool 1000
pool 'kvm-pool' created
root@ping[/1]:~ # ceph osd lspools
0 data,1 metadata,2 rbd,4 kvm-pool,
root@ping[/1]:~ # ceph osd pool set kvm-pool min_size 2
set pool 4 min_size to 2
root@ping[/1]:~ # ceph osd dump | grep 'rep size'
pool 0 'data' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0 crash_replay_interval 45
pool 1 'metadata' rep size 2 min_size 1 crush_ruleset 1 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 2 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins pg_num 64 pgp_num 64 last_change 1 owner 0
pool 4 'kvm-pool' rep size 2 min_size 2 crush_ruleset 0 object_hash rjenkins pg_num 1000 pgp_num 1000 last_change 33 owner 0
root@ping[/1]:~ # rbd create atom03.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd create atom04.cimg --size 4000 --pool kvm-pool
root@ping[/1]:~ # rbd ls kvm-pool
atom03.cimg
atom04.cimg
root@ping[/1]:~ # rbd --image atom03.cimg --pool kvm-pool info
rbd image 'atom03.cimg':
        size 4000 MB in 1000 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.114d.2ae8944a
        format: 1
root@ping[/1]:~ # rbd --image atom04.cimg --pool kvm-pool info
rbd image 'atom04.cimg':
        size 4000 MB in 1000 objects
        order 22 (4096 KB objects)
        block_name_prefix: rb.0.127d.74b0dc51
        format: 1
root@ping[/1]:~ # rbd map atom03.cimg --pool kvm-pool --id admin
rbd: '/sbin/udevadm settle' failed! (256)
root@ping[/1]:~ # rbd map --pool kvm-pool --image atom03.cimg --id admin --keyring /etc/ceph/ceph.client.admin.keyring
^Crbd: '/sbin/udevadm settle' failed! (2)
root@ping[/1]:~ # rbd map kvm-pool/atom03.cimg --id admin --keyring /etc/ceph/ceph.client.admin.keyring
rbd: '/sbin/udevadm settle' failed! (256)
---------------------
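In case it helps anyone narrow this down: since the failure is in `/sbin/udevadm settle` rather than in Ceph itself, my guess is it's either a missing rbd kernel module or a stuck udev event queue. A quick check script (the module and udev checks are just my assumptions about likely causes, not a confirmed diagnosis):

```shell
#!/bin/sh
# Hedged troubleshooting sketch for "rbd: '/sbin/udevadm settle' failed!".
# Assumption: the error comes from the environment (kernel module / udev),
# not from the cluster, since "ceph health" is OK.

# Is a given kernel module loaded? (rbd map needs the rbd module.)
check_module() {
    # Returns 0 if the named module appears in /proc/modules.
    grep -q "^$1 " /proc/modules 2>/dev/null
}

if check_module rbd; then
    echo "rbd module loaded"
else
    echo "rbd module NOT loaded - try: modprobe rbd"
fi

# Does udevadm settle succeed on its own? A non-zero exit here points at a
# stuck udev event queue rather than at Ceph. (Needs a real udev system,
# so left commented out here.)
# /sbin/udevadm settle --timeout=30; echo "udevadm settle exit: $?"

# The kernel log usually shows the real rbd error, if any:
# dmesg | tail -n 20
```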

Am I missing something?
I'm fairly sure this set of commands worked perfectly with Cuttlefish.

TIA

Bernhard

-- 

Bernhard Glomm
IT Administration

Phone: +49 (30) 86880 134
Fax: +49 (30) 86880 100
Skype: bernhard.glomm.ecologic

Ecologic Institut gemeinnützige GmbH | Pfalzburger Str. 43/44 | 10717 Berlin | Germany
GF: R. Andreas Kraemer | AG: Charlottenburg HRB 57947 | USt/VAT-IdNr.: DE811963464
Ecologic™ is a Trade Mark (TM) of Ecologic Institut gemeinnützige GmbH
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
