Hi Wido,

Thank you for your comment.

> What I see is "No such file or directory", so that RBD image does not
> exist.
> It seems like a copy didn't succeed but now CloudStack thinks that the
> image does exist.
> Does "libvirt-pool" have a RBD image with the name
> 5e5d9b40-270b-44af-9479-782175556c47 ?
No, it does not.

There is no image named "5e5d9b40-270b-44af-9479-782175556c47" in that pool, as shown below.

virsh # pool-list
Name                 State      Autostart
-----------------------------------------
3900a5bf-3362-392b-8bd0-57b10ef47bb5 active     no
b39ca2cd-65ea-46d5-8a71-c3a4ef95028e active     no
cd6520d6-bfc3-3537-9600-7f044e11ddb1 active     no

virsh # vol-list 3900a5bf-3362-392b-8bd0-57b10ef47bb5
Name                                   Path
---------------------------------------------------------------------------------------
6ff9719f-3e4d-4ff5-ab67-154e30c936c2   libvirt-pool/6ff9719f-3e4d-4ff5-ab67-154e30c936c2
8555f35f-3ed8-436b-895a-04e88e7327e0   libvirt-pool/8555f35f-3ed8-436b-895a-04e88e7327e0
cd3688ab-e37b-4866-9ea7-4051b670a323   libvirt-pool/cd3688ab-e37b-4866-9ea7-4051b670a323
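
For completeness, the same thing can be checked directly on a Ceph node with
the rbd CLI (just a sketch of what I would run there, assuming the admin
keyring is available on that node):

  rbd -p libvirt-pool ls | grep 5e5d9b40   # prints nothing if the image is absent
  rbd info libvirt-pool/5e5d9b40-270b-44af-9479-782175556c47
  # "No such file or directory" from the second command would confirm the
  # image was never created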


> What I see is "No such file or directory", so that RBD image does not
> exist.
> It seems like a copy didn't succeed but now CloudStack thinks that the
> image does exist.
I agree, but I don't understand why it occurred...

- VM (root disk) on NFS: it works.
- VM (root disk) on RBD: it doesn't work.
- Data disk on RBD (attached to a VM on NFS): it works.

Are there any other points we should check?
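
One comparison I plan to try on my side is diffing the libvirt domain XML of a
working NFS-backed VM against a failing RBD-backed VM (a sketch only; the
instance names are placeholders, and it requires the failing instance to still
be defined on the host):

  virsh dumpxml <nfs-vm-name> > nfs-vm.xml
  virsh dumpxml <rbd-vm-name> > rbd-vm.xml
  diff nfs-vm.xml rbd-vm.xml
  # the RBD-backed VM should carry a <disk type='network'> element with
  # <source protocol='rbd' name='libvirt-pool/<volume-uuid>'> in it

If the RBD-backed VM has no disk element at all, that would match what Kimi
saw in virt-manager.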


Thanks,
Satoshi Shimazaki



2013/7/23 Wido den Hollander <w...@widodh.nl>

> Hi,
>
>
> On 07/22/2013 07:56 PM, Satoshi Shimazaki wrote:
>
>> Hi Wido,
>>
>> I'm in the project with Kimi and Nakajima-san.
>>
>> [root@rx200s7-07m ~]# ceph -v
>> ceph version 0.61.4 (1669132fcfc27d0c0b5e5bb93ade59d147e23404)
>>
>> The same version is installed on all the hosts (KVM host and Ceph nodes).
>>
>> Here is the KVM Agent log:
>> http://pastebin.com/5yG1uBuj
>> I set the log level to "DEBUG" and failed to create two instances,
>> "RBDVM-shimazaki-1" and "RBDVM-shimazaki-2".
>>
>>
> What I see is "No such file or directory", so that RBD image does not
> exist.
>
> It seems like a copy didn't succeed but now CloudStack thinks that the
> image does exist.
>
> Does "libvirt-pool" have a RBD image with the name
> 5e5d9b40-270b-44af-9479-782175556c47 ?
>
> Wido
>
>
>> Thanks,
>> Satoshi Shimazaki
>>
>>
>>
>>
>> 2013/7/23 Kimihiko Kitase <kimihiko.kit...@citrix.co.jp>
>>
>>> Hi Wido
>>>
>>> Thanks for your comment.
>>>
>>> If we create a VM on the NFS primary storage and attach an additional
>>> disk from the RBD storage, it works fine.
>>> If we check the VM from virt-manager, there is no virtual disk, so we
>>> believe the problem is in the VM configuration...
>>>
>>> We will check ceph version tomorrow.
>>>
>>> Thanks
>>> Kimi
>>>
>>> -----Original Message-----
>>> From: Wido den Hollander [mailto:w...@widodh.nl]
>>> Sent: Monday, July 22, 2013 11:43 PM
>>> To: dev@cloudstack.apache.org
>>> Subject: Re: Problem in adding Ceph RBD storage to CloudStack
>>>
>>> Hi,
>>>
>>> On 07/22/2013 02:25 PM, Kimihiko Kitase wrote:
>>>
>>>> Wido, Thank you very much.
>>>>
>>>> CloudStack: 4.1.0
>>>> QEMU: 1.5.50
>>>> Libvirt: 0.10.2
>>>>
>>>
>>> What version of Ceph on the nodes?
>>>
>>> $ ceph -v
>>>
>>>
>>>> We will set "DEBUG" on the agent tomorrow. But the following is the
>>>> command that CloudStack issues; we captured it on the KVM host.
>>>
>>>>
>>>> [root@rx200s7-07m ~]# ps -ef|grep 1517
>>>> root     16099     1 27 19:36 ?        00:00:12 /usr/libexec/qemu-kvm
>>>> -name i-2-1517-VM -S -M pc-i440fx-1.6 -enable-kvm -m 256 -smp
>>>> 1,sockets=1,cores=1,threads=1 -uuid e67f1707-fe92-3426-978d-0441d5000d6a
>>>> -no-user-config -nodefaults -chardev
>>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/i-2-1517-VM.monitor,server,nowait
>>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown
>>>> -boot dc -drive
>>>> file=rbd:libvirt-pool/cd3688ab-e37b-4866-9ea7-4051b670a323:id=libvirt:key=AQC7OuZReMndFxAAY/qUwLbvfod6EMvgVWU21g==:auth_supported=cephx\;none:mon_host=192.168.10.20\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none
>>>> -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
>>>> -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none
>>>> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
>>>> -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=29 -device
>>>> virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:0a:b9:00:16,bus=pci.0,addr=0x3
>>>> -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
>>>> -usb -device usb-tablet,id=input0 -vnc 0.0.0.0:3 -vga cirrus -device
>>>> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
>>>
>>>>
>>>>
>>> The argument to Qemu seems just fine, so I think the problem is not in
>>> CloudStack.
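>>>
>>> (As a quick sanity check that this qemu build can actually open that RBD
>>> image, something along these lines could be run on the KVM host, reusing
>>> the exact rbd URI from the command line above; it assumes qemu-img was
>>> built with RBD support:
>>>
>>> qemu-img info "rbd:libvirt-pool/cd3688ab-e37b-4866-9ea7-4051b670a323:id=libvirt:key=AQC7OuZReMndFxAAY/qUwLbvfod6EMvgVWU21g==:auth_supported=cephx\;none:mon_host=192.168.10.20\:6789"
>>>
>>> It should report the image size and format if qemu can reach it.)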
>>>
>>> Wido
>>>
>>>> Thanks
>>>> Kimi
>>>>
>>>> -----Original Message-----
>>>> From: Wido den Hollander [mailto:w...@widodh.nl]
>>>> Sent: Monday, July 22, 2013 7:47 PM
>>>> To: dev@cloudstack.apache.org
>>>> Subject: Re: Problem in adding Ceph RBD storage to CloudStack
>>>>
>>>> Hi,
>>>>
>>>> On 07/22/2013 12:43 PM, Kimihiko Kitase wrote:
>>>>
>>>>> It seems the secondary storage VM could copy the template to primary
>>>>> storage successfully, but the created VM doesn't point to this volume.
>>>>> If we create the VM manually and add this volume as the boot volume,
>>>>> it works fine.
>>>>>
>>>>>
>>>> Which version of CloudStack are you using?
>>>>
>>>> What is the Qemu version running on your hypervisor and what libvirt
>>>> version?
>>>>
>>>> If you set the logging level on the Agent to "DEBUG", does it show
>>>> deploying the VM with the correct XML parameters?
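>>>>
>>>> (On the KVM host this usually means switching the agent's log4j
>>>> configuration from INFO to DEBUG and restarting the agent; the path and
>>>> service name below are the common 4.1 defaults and may differ per
>>>> packaging:
>>>>
>>>> sed -i 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
>>>> service cloudstack-agent restart
>>>>
>>>> The generated domain XML then shows up in
>>>> /var/log/cloudstack/agent/agent.log.)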
>>>
>>>>
>>>> I haven't seen the things you are reporting.
>>>>
>>>> Wido
>>>>
>>>>> So it seems CloudStack cannot configure the VM correctly in a Ceph RBD
>>>>> environment.
>>>
>>>>
>>>>> Any idea?
>>>>>
>>>>> Thanks
>>>>> Kimi
>>>>>
>>>>> -----Original Message-----
>>>>> From: Kimihiko Kitase [mailto:Kimihiko.Kitase@citrix.co.jp]
>>>>> Sent: Monday, July 22, 2013 7:11 PM
>>>>> To: dev@cloudstack.apache.org
>>>>> Subject: RE: Problem in adding Ceph RBD storage to CloudStack
>>>>>
>>>>> Hello
>>>>>
>>>>> I am in the project with Nakajima san.
>>>>>
>>>>> We succeeded in adding the RBD storage as primary storage.
>>>>> But when we try to boot CentOS as a user instance, it fails during the
>>>>> system logger step of the boot process.
>>>>> It works fine when we boot CentOS using NFS storage.
>>>>> It also works fine when we boot CentOS using NFS storage and add an
>>>>> additional disk from the RBD storage.
>>>
>>>>
>>>>> Do you have any idea how to resolve this issue?
>>>>>
>>>>> Thanks
>>>>> Kimi
>>>>>
>>>>> -----Original Message-----
>>>>> From: Takuma Nakajima [mailto:penguin.trance.2716@gmail.com]
>>>>> Sent: Saturday, July 20, 2013 12:23 PM
>>>>> To: dev@cloudstack.apache.org
>>>>> Subject: Re: Problem in adding Ceph RBD storage to CloudStack
>>>>>
>>>>> I'm sorry, but I forgot to tell you that the environment does not have
>>>>> an internet connection.
>>>>> It is not allowed to make a direct connection to the internet because
>>>>> of the security policy.
>>>
>>>>
>>>>> Wido,
>>>>>
>>>>>> No, it works for me like a charm :)
>>>>>>
>>>>>> Could you set the Agent logging to DEBUG as well and show the output
>>>>>> of that log? Maybe paste the log on pastebin.
>>>>>>
>>>>>> I'm interested in the XMLs the Agent is feeding to libvirt when adding
>>>>>> the RBD pool.
>>>>>
>>>>> I thought the new libvirt would overwrite the old one, but actually both
>>>>> libvirt builds (with RBD and without RBD) were installed on the system.
>>>>> qemu was installed from the package, so it may have depended on the
>>>>> libvirt installed from the package. After deleting both libvirt
>>>>> installations (the one from source and the one from the package) and then
>>>>> installing it from an rpm package with RBD support, the RBD storage was
>>>>> registered to CloudStack successfully.
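>>>>>
>>>>> For reference, a rough way to confirm which libvirtd is actually running
>>>>> and whether it links the RBD libraries (an assumption; the storage
>>>>> backend is normally compiled into the daemon in this libvirt version):
>>>>>
>>>>> ps -ef | grep libvirtd                 # note the path of the running daemon
>>>>> ldd /usr/sbin/libvirtd | grep librbd   # librbd listed => built with RBD support
>>>>> rpm -q libvirt qemu-kvm                # confirm both now come from packages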
>>>
>>>>
>>>>> David,
>>>>>
>>>>>> Why not 6.4?
>>>>>>
>>>>>
>>>>> Because there is no internet connection, the packages in the local
>>>>> mirror repository may be old.
>>>>> I checked /etc/redhat-release and it showed the version is 6.3.
>>>>>
>>>>> In the current state, although the RBD storage was added, system VMs
>>>>> won't start; they fail with an "Unable to get vms
>>>>> org.libvirt.LibvirtException: Domain not found: no domain with
>>>>> matching uuid 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx'" error like the one in
>>>>> http://mail-archives.apache.org/mod_mbox/cloudstack-users/201303.mbox/%3CD2EE6B3265AD864EB3EA4F5C670D256F3546C1@EXMBX01L-CRP-03.webmdhealth.net%3E
>>>>> The uuid in the error message was not in the database of the
>>>>> management server nor on the Ceph storage node.
>>>>>
>>>>> I tried removing the host from CloudStack and cleaning up the computing
>>>>> node, but it cannot be added to CloudStack again.
>>>>> The agent log says it attempted to connect to localhost:8250, although
>>>>> the management server address is set to 10.40.1.190 in the global
>>>>> settings.
>>>>> The management server log is here: http://pastebin.com/muGz73c0
>>>>> (10.40.1.24 is the address of the computing node)
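>>>>>
>>>>> (The agent normally takes the management server address from its local
>>>>> agent.properties rather than from the global settings, so a quick check
>>>>> on the computing node would be something like this; the path is the
>>>>> usual 4.1 default and may differ per packaging:
>>>>>
>>>>> grep -E '^(host|port)=' /etc/cloudstack/agent/agent.properties
>>>>> # host= should point at 10.40.1.190 and port= at 8250)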
>>>>>
>>>>> Now the computing node is being rebuilt.
>>>>>
>>>>> Takuma Nakajima
>>>>>
>>>>> 2013/7/19 David Nalley <da...@gnsa.us>:
>>>>>
>>>>>> On Thu, Jul 18, 2013 at 12:09 PM, Takuma Nakajima
>>>>>> <penguin.trance.2...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I've recently been building CloudStack 4.1 with Ceph RBD storage on
>>>>>>> RHEL 6.3, but it fails when adding the RBD storage as primary storage.
>>>>>>> Does anybody know about the problem?
>>>>>>>
>>>>>>
>>>>>>
>>>>>> Why not 6.4?
>>>>>>
>>>>>
>>>
>>
