Wido,

Please see the information below.

How big is the Ceph cluster? (ceph -s).
[root@RDR02S01 ~]# ceph -s
   health HEALTH_OK
   monmap e1: 1 mons at {a=10.0.0.41:6789/0}, election epoch 1, quorum 0 a
   osdmap e35: 2 osds: 2 up, 2 in
    pgmap v17450: 714 pgs: 714 active+clean; 2522 MB data, 10213 MB used, 126 GB / 136 GB avail
   mdsmap e557: 1/1/1 up {0=a=up:active}
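
(For reference, the same totals can be cross-checked per pool with:

    rados df

which lists usage for each pool, including the "cloudstack" pool that CloudStack writes its
RBD images into, plus the cluster-wide used/avail. The pgmap line above already shows the
raw totals, so this is only a cross-check.)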

And what does libvirt say?

1. Log on to a hypervisor
2. virsh pool-list
root@ubuntu:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------
18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c active     no
ac2949d9-16fb-4c54-a551-1cdbeb501f05 active     no
d9474b9d-afa1-3737-a13b-df333dae295f active     no
f81769c1-a31b-4202-a3aa-bdf616eef019 active     no

3. virsh pool-info <uuid of ceph pool>
root@ubuntu:~# virsh pool-info 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
Name:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
UUID:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
State:          running
Persistent:     yes
Autostart:      no
Capacity:       136.60 GiB
*Allocation:     30723.02 TiB*
Available:      126.63 GiB
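
For reference, the raw byte values (and a refresh of them) can be pulled straight from
libvirt; something like this should show capacity/allocation/available in bytes:

    virsh pool-refresh 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
    virsh pool-dumpxml 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c | grep -E 'capacity|allocation|available'

Note that 30723.02 TiB works out to roughly 33,780,317,214,998,658 bytes, i.e. the same
usedBytes value CloudStack logged for pool 209 (and stored in the storage_pool table quoted
below), so CloudStack appears to be storing whatever allocation figure libvirt reports.
33780317214998658 / 146673582080 matches the usedPct of ~230309 in the log, which is why
the 0.85 pool.storage.capacity.disablethreshold check rejects the pool.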


On Fri, Apr 19, 2013 at 4:43 PM, Wido den Hollander <w...@widodh.nl> wrote:

> Hi,
>
>
> On 04/19/2013 08:48 AM, Guangjian Liu wrote:
>
>> The Ceph file system can be added as Primary storage via RBD, thanks.
>>
>>
> Great!
>
>
>> Now I've run into a problem creating a VM on Ceph.
>>
>> First I created a compute offering and a storage offering with the tag "ceph", and
>> defined the Ceph Primary storage with the tag "ceph".
>> Then I created a VM using the "ceph" compute offering, and it throws an exception in the log.
>> It seems the reported storage usage is far beyond the pool's actual size.
>>
>>
> That is indeed odd. How big is the Ceph cluster? (ceph -s).
>
> And what does libvirt say?
>
> 1. Log on to a hypervisor
> 2. virsh pool-list
> 3. virsh pool-info <uuid of ceph pool>
>
> I'm curious about what libvirt reports as pool size.
>
> Wido
>
>> Creating the VM fails:
>> 2013-04-18 10:23:27,559 DEBUG [storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-1:job-24) Is storage pool shared? true
>> 2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl] (Job-Executor-1:job-24) Checking pool 209 for storage, totalSize: 146673582080, *usedBytes: 33780317214998658*, usedPct: 230309.48542985672, disable threshold: 0.85
>> 2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl] (Job-Executor-1:job-24) Insufficient space on pool: 209 since its usage percentage: 230309.48542985672 has crossed the pool.storage.capacity.disablethreshold: 0.85
>> 2013-04-18 10:23:27,561 DEBUG [storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-1:job-24) FirstFitStoragePoolAllocator returning 0 suitable storage pools
>> 2013-04-18 10:23:27,561 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-1:job-24) No suitable pools found for volume: Vol[8|vm=8|ROOT] under cluster: 1
>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-1:job-24) No suitable pools found
>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-1:job-24) No suitable storagePools found under this Cluster: 1
>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner] (Job-Executor-1:job-24) Could not find suitable Deployment Destination for this VM under any clusters, returning.
>> 2013-04-18 10:23:27,640 DEBUG [cloud.capacity.CapacityManagerImpl] (Job-Executor-1:job-24) VM state transitted from :Starting to Stopped with event: OperationFailedvm's original host id: null new host id: null host id before state transition: null
>> 2013-04-18 10:23:27,645 DEBUG [cloud.vm.UserVmManagerImpl] (Job-Executor-1:job-24) Destroying vm VM[User|ceph-1] as it failed to create
>> management-server.7z 2013-04-18 10:02
>>
>> I checked the CS web GUI and the database: the storage shows 30723 TB allocated, but
>> the actual total size is only about 138 GB.
>>
>>
>> mysql> select * from storage_pool;
>> | id  | name    | uuid                                 | pool_type         | port | data_center_id | pod_id | cluster_id | available_bytes   | capacity_bytes | host_address | user_info | path            | created             | removed | update_time | status |
>> | 200 | primary | d9474b9d-afa1-3737-a13b-df333dae295f | NetworkFilesystem | 2049 |              1 |      1 |          1 |       10301210624 |    61927849984 | 10.0.0.42    | NULL      | /export/primary | 2013-04-17 06:25:47 | NULL    | NULL        | Up     |
>> | 209 | ceph    | 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c | RBD               | 6789 |              1 |      1 |          1 | 33780317214998658 |   146673582080 | 10.0.0.41    | :         | rbd             | 2013-04-18 02:02:00 | NULL    | NULL        | Up     |
>> 2 rows in set (0.00 sec)
>>
>>
>>
>>
>>
>> On Thu, Apr 18, 2013 at 3:54 AM, Wido den Hollander <w...@widodh.nl> wrote:
>>
>>     Hi,
>>
>>
>>     On 04/17/2013 03:01 PM, Guangjian Liu wrote:
>>
>>         I still get the same result.
>>
>>         On Ubuntu 12.04:
>>         1. I installed libvirt-dev as below:
>>              apt-get install libvirt-dev
>>         2. I rebuilt libvirt; see the detailed build log attached.
>>         root@ubuntu:~/install/libvirt-0.10.2# ./autogen.sh
>>
>>         running CONFIG_SHELL=/bin/bash /bin/bash ./configure --enable-rbd
>>         --no-create --no-recursion
>>         configure: WARNING: unrecognized options: --enable-rbd
>>
>>
>>     The correct option is "--with-storage-rbd"
>>
>>     But check the output of configure; it should tell you whether RBD
>>     was enabled or not.
>>
>>     Then verify again if you can create a RBD storage pool manually via
>>     libvirt.
>>
>>     Wido
>>
>>         .....
>>         make
>>         make install
>>
>>
>>
>>         On Wed, Apr 17, 2013 at 6:25 PM, Wido den Hollander <w...@widodh.nl> wrote:
>>
>>              Hi,
>>
>>
>>              On 04/17/2013 11:37 AM, Guangjian Liu wrote:
>>
>>                  Thanks for your mail; you suggested compiling libvirt with
>>                  RBD enabled.
>>                  I already built libvirt-0.10.2.tar.gz following the document at
>>                  http://ceph.com/docs/master/rbd/libvirt/ on my SERVER C (Ubuntu
>>                  12.04).
>>                  Shall I build libvirt-0.10.2.tar.gz with RBD enabled, i.e. use
>>                  ./configure --enable-rbd instead of autogen.sh?
>>
>>
>>              Well, you don't have to add --enable-rbd to configure or
>>              autogen.sh, but you do have to make sure the development
>>              libraries for librbd are installed.
>>
>>              On CentOS do this:
>>
>>              yum install librbd-devel
>>
>>              And retry autogen.sh for libvirt; it should tell you whether
>>              RBD is enabled.
>>
>>              Wido
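
(For the Ubuntu 12.04 hypervisor in this thread the equivalent dev package should be
librbd-dev rather than librbd-devel -- package name assumed from the Ubuntu/Ceph
archives -- and re-running autogen.sh should then mention RBD in the configure summary.
A quick sketch:

    apt-get install librbd-dev
    ./autogen.sh 2>&1 | grep -i rbd

The grep is just a shortcut to spot the RBD / with-storage-rbd lines in the output.)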
>>
>>                  cd libvirt
>>                  ./autogen.sh
>>                  make
>>                  sudo make install
>>
>>
>>
>>                  On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander <w...@widodh.nl> wrote:
>>
>>                      Hi,
>>
>>
>>                      On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>>
>>                          Create rbd primary storage fail in CS 4.0.1
>>                          Anybody can help about it!
>>
>>                          Environment:
>>                          1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>>                          2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>>                          3. Server C: KVM/Qemu OS: Ubuntu 12.04
>>                                 compiled libvirt and Qemu as documented
>>                          root@ubuntu:/usr/local/lib# virsh version
>>                          Compiled against library: libvirt 0.10.2
>>                          Using library: libvirt 0.10.2
>>                          Using API: QEMU 0.10.2
>>                          Running hypervisor: QEMU 1.0.0
>>
>>
>>                      Are you sure both libvirt and Qemu are compiled
>>         with RBD
>>                      enabled?
>>
>>                      On your CentOS system you should make sure
>>         librbd-dev is
>>                      installed during
>>                      compilation of libvirt and Qemu.
>>
>>                      The most important part is the RBD storage pool
>>                      support in libvirt; that should be enabled.
>>
>>                      In the e-mail you sent me directly I saw this:
>>
>>                      root@ubuntu:~/scripts# virsh pool-define rbd-pool.xml
>>                      error: Failed to define pool from rbd-pool.xml
>>                      error: internal error missing backend for pool type 8
>>
>>                      That suggests RBD storage pool support is not
>>                      enabled in libvirt.
>>
>>                      Wido
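
(For reference, a minimal rbd-pool.xml -- assuming cephx auth is disabled; with cephx you
would also need an <auth> element pointing at a libvirt secret -- would look roughly like
this, using the monitor address and RBD pool name from this thread:

    <pool type="rbd">
      <name>ceph-rbd</name>
      <source>
        <name>cloudstack</name>
        <host name="10.0.0.41" port="6789"/>
      </source>
    </pool>

and then:

    virsh pool-define rbd-pool.xml
    virsh pool-start ceph-rbd
    virsh pool-info ceph-rbd

The libvirt pool name "ceph-rbd" here is arbitrary; <source><name> is the RBD pool in Ceph.
Once libvirt is rebuilt with --with-storage-rbd, pool-define should no longer fail with
"missing backend for pool type 8".)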
>>
>>
>>                         Problem:
>>
>>                          Creating primary storage fails with an RBD device.
>>
>>                          Fail log:
>>                          2013-04-16 16:27:14,224 DEBUG [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) createPool Params @ scheme - rbd storageHost - 10.0.0.41 hostPath - /cloudstack port - -1
>>                          2013-04-16 16:27:14,270 DEBUG [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) In createPool Setting poolId - 218 uuid - 5924a2df-d658-3119-8aba-f90307683206 zoneId - 4 podId - 4 poolName - ceph
>>                          2013-04-16 16:27:14,318 DEBUG [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) creating pool ceph on  host 18
>>                          2013-04-16 16:27:14,320 DEBUG [agent.transport.Request] (catalina-exec-9:null) Seq 18-1625162275: Sending  { Cmd , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 100011, [{"CreateStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}] }
>>                          2013-04-16 16:27:14,323 DEBUG [agent.transport.Request] (AgentManager-Handler-2:null) Seq 18-1625162275: Processing:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10, [{"Answer":{"result":true,"details":"success","wait":0}}] }
>>                          2013-04-16 16:27:14,323 DEBUG [agent.transport.Request] (catalina-exec-9:null) Seq 18-1625162275: Received:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
>>                          2013-04-16 16:27:14,323 DEBUG [agent.manager.AgentManagerImpl] (catalina-exec-9:null) Details from executing class com.cloud.agent.api.CreateStoragePoolCommand: success
>>                          2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) In createPool Adding the pool to each of the hosts
>>                          2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) Adding pool ceph to  host 18
>>                          2013-04-16 16:27:14,326 DEBUG [agent.transport.Request] (catalina-exec-9:null) Seq 18-1625162276: Sending  { Cmd , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 100011, [{"ModifyStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}] }
>>                          2013-04-16 16:27:14,411 DEBUG [agent.transport.Request] (AgentManager-Handler-6:null) Seq 18-1625162276: Processing:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10, [{"Answer":{"result":false,"details":"java.lang.NullPointerException\n\tat com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)\n\tat com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)\n\tat com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)\n\tat com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)\n\tat com.cloud.agent.Agent.processRequest(Agent.java:518)\n\tat com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)\n\tat com.cloud.utils.nio.Task.run(Task.java:83)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:679)\n","wait":0}}] }
>>
>>
>>
>>                          2013-04-16 16:27:14,412 DEBUG [agent.transport.Request] (catalina-exec-9:null) Seq 18-1625162276: Received:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
>>                          2013-04-16 16:27:14,412 DEBUG [agent.manager.AgentManagerImpl] (catalina-exec-9:null) Details from executing class com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
>>                              at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>>                              at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>>                              at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>>                              at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>>                              at com.cloud.agent.Agent.processRequest(Agent.java:518)
>>                              at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>>                              at com.cloud.utils.nio.Task.run(Task.java:83)
>>                              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>>                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>                              at java.lang.Thread.run(Thread.java:679)
>>
>>
>>
>>                          2013-04-16 16:27:14,451 WARN  [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) Unable to establish a connection between Host[-18-Routing] and Pool[218|RBD]
>>                          com.cloud.exception.StorageUnavailableException: Resource [StoragePool:218] is unreachable: Unable establish connection from storage head to storage pool 218 due to java.lang.NullPointerException
>>                              at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>>                              at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>>                              at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>>                              at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>>                              at com.cloud.agent.Agent.processRequest(Agent.java:518)
>>                              at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>>                              at com.cloud.utils.nio.Task.run(Task.java:83)
>>                              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>>                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>                              at java.lang.Thread.run(Thread.java:679)
>>                              at com.cloud.storage.StorageManagerImpl.connectHostToSharedPool(StorageManagerImpl.java:1685)
>>                              at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:1450)
>>                              at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:215)
>>                              at com.cloud.api.commands.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:120)
>>                              at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:138)
>>                              at com.cloud.api.ApiServer.queueCommand(ApiServer.java:543)
>>                              at com.cloud.api.ApiServer.handleRequest(ApiServer.java:422)
>>                              at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>>                              at com.cloud.api.ApiServlet.doGet(ApiServlet.java:63)
>>                              at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>>                              at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
>>                              at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>>                              at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>>                              at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>>                              at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>>                              at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>>                              at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>>                              at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
>>                              at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>>                              at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>>                              at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
>>                              at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
>>                              at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2260)
>>                              at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>>                              at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>>                              at java.lang.Thread.run(Thread.java:679)
>>
>>
>>                          2013-04-16 16:27:14,452 WARN  [cloud.storage.StorageManagerImpl] (catalina-exec-9:null) No host can access storage pool Pool[218|RBD] on cluster 5
>>                          2013-04-16 16:27:14,504 WARN  [cloud.api.ApiDispatcher] (catalina-exec-9:null) class com.cloud.api.ServerApiException : Failed to add storage pool
>>                          2013-04-16 16:27:15,293 DEBUG [agent.manager.AgentManagerImpl] (AgentManager-Handler-12:null) Ping from 18
>>                          ^C
>>                          [root@RDR02S02 management]#
>>
>>
>>
>>
>>
>>
>>
>>         --
>>         Guangjian
>>
>>
>>
>>
>> --
>> Guangjian
>>
>


-- 
Guangjian
