On 15 May 2014 04:05, Maciej Gałkiewicz wrote:
> On 28 April 2014 16:11, Sebastien Han wrote:
>
>> Yes yes, just restart cinder-api and cinder-volume.
>> It worked for me.
>
>
> In my case the image is still downloaded:(
>
Option "show_image_direct_url = Tr
G cinder.volume.drivers.rbd
[req-05877879-f875-4a69-893b-dde93c2a9267 3abc796d9c544d039fe7d5b90b206a30
e466feaf9a58472a86989156edc9acf4 - - -] creating volume
'volume-f9b21cc6-db73-41c4-9c3b-04ef6217fb3c'
create_volume
/usr/lib/python2.7/dist-packages/cinder/volume/drivers/rbd.py:469
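For anyone debugging the same thing, the usual prerequisites for Cinder to clone a Glance image rather than download it are roughly the following (option names are real, the rest of the layout is illustrative):

    # glance-api.conf
    show_image_direct_url = True

    # cinder.conf
    glance_api_version = 2

    # In addition, the Glance image has to be stored in RBD, be in raw
    # format, and live in the same Ceph cluster as the cinder volumes;
    # otherwise the RBD driver falls back to a full download/import.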
On 28 April 2014 15:58, Sebastien Han wrote:
> FYI It’s fixed here: https://review.openstack.org/#/c/90644/1
I already have this patch and it didn't help. Has it fixed the problem in
your cluster?
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Co-founder, Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
> [...] create a volume from an image?
>
The image is downloaded and then imported to ceph. Detailed log:
https://gist.github.com/mgalkiewicz/e0558939e435cb6d5d28
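(For context, the volume in the linked log was created from an image with the usual CLI call; the image id and size below are placeholders:)

    cinder create --image-id <glance-image-uuid> --display-name test-volume 10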
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Co-founder, Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Isn't it a copy-on-write clone?
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Co-founder, Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Both images and volumes are stored within the same Ceph cluster. It worked
great in Havana. Do you have any idea how to work around this? My instances
used to start within seconds and now it takes minutes :/
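One quick way to check whether cloning actually happened is to look at the volume image in the backing pool: a copy-on-write clone has a parent, a full copy does not. The pool and volume names below are examples (the volume name is taken from the log earlier in the thread):

    rbd -p volumes info volume-f9b21cc6-db73-41c4-9c3b-04ef6217fb3c
    # a clone prints a line like:  parent: images/<image-uuid>@snap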
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Co-founder, Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Hi
http://ceph.com/debian-emperor/dists/sid/main/binary-amd64/Packages
is empty.
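(If the sid index really is empty, a possible stopgap, assuming the wheezy packages install cleanly on your system, is to point apt at a codename that is populated:)

    # /etc/apt/sources.list.d/ceph.list
    deb http://ceph.com/debian-emperor/ wheezy main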
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Co-founder, Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
time. Out of curiosity, why don't you upgrade
to cuttlefish and then to dumpling?
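If it helps, the usual rolling-upgrade order for each hop (first to cuttlefish, then to dumpling) is: upgrade the packages, restart the monitors one by one, then the OSDs, then any MDS/RGW daemons. Roughly, assuming sysvinit and example daemon names:

    apt-get update && apt-get install ceph ceph-common
    service ceph restart mon.a        # each monitor in turn
    service ceph restart osd.0        # then each OSD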
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Hi guys
Do you have a list of companies that use Ceph in production?
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
n access in a POSIXly way
> from multiple VMs. CephFS is a relatively easy way to give them that,
> though I don't consider it "production-ready" - mostly because secure
> isolation between different tenants is hard to achieve.
For now, GlusterFS may fit better.
Hi
Just out of curiosity: why are you using CephFS instead of RBD?
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
> [...] settings you had to add to nova to make it work?
Nova does not require any changes, only cinder.
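For anyone setting this up, the cinder side of a Grizzly-era RBD backend looks roughly like this (values are illustrative, not taken from this thread):

    # cinder.conf
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    rbd_pool = volumes
    rbd_user = cinder
    rbd_secret_uuid = <uuid of the libvirt secret holding client.cinder's key>

The only compute-node piece is that libvirt secret itself (assuming cephx is enabled), which is not a nova.conf change.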
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Hi
I don't know which version of OpenStack you are using, but I assure you
that everything works fine with Grizzly. Let me know if you need any
help.
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
"empty": 0,
"dne": 0,
"incomplete": 0,
"last_epoch_started": 6429},
"recovery_state": [
{ "name": "Started\/Primary\/Active",
"enter_time": "2013-09-0
[...] would be best for reliability. In terms of performance, is it a good
idea to store one SSD's journal on the other SSD, and the other way around? Both
SSDs are in different pools for different purposes.
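If you do cross the journals over, the location is just a per-OSD setting; a minimal sketch, assuming two OSDs and example partition paths:

    # ceph.conf
    [osd.0]
        osd journal = /dev/disk/by-partlabel/osd0-journal   # partition on the other SSD
    [osd.1]
        osd journal = /dev/disk/by-partlabel/osd1-journal   # and the other way around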
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
I am also experiencing this problem.
http://tracker.ceph.com/issues/2476
regards
Maciej
On 18 Jul 2013 20:25, "Josh Durgin" wrote:
> Setting rbd_cache=true in ceph.conf will make librbd turn on the cache
> regardless of qemu. Setting qemu to cache=none tells qemu that it
> doesn't need to send flush requests to the underlying storage, so it
> does not do so. This means librbd is caching writes without ever
> receiving a flush request from qemu.
3a873d
9ab3e9b3-e153-447c-ab1d-2f8f9bae095c
Config settings received from admin socket show that cache is enabled. I
thought that without configuring libvirt with cache options it is not
possible to force kvm to use it. Can you explain?
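For reference, the admin-socket check mentioned above is along these lines (the socket name depends on how "admin socket" is set in the client section of ceph.conf; the one below is an example):

    ls /var/run/ceph/     # find the socket belonging to the qemu process
    ceph --admin-daemon /var/run/ceph/ceph-client.admin.12345.asok config show | grep rbd_cache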
Hello
Is there any way to verify that the cache is enabled? My machine is running
with the following parameters:
qemu-system-x86_64 -machine accel=kvm:tcg -name instance-0302 -S
-machine pc-i440fx-1.5,accel=kvm,usb=off -cpu
Westmere,+rdtscp,+avx,+osxsave,+xsave,+tsc-deadline,+pcid,+pdcm,+xtpr,+tm2,+e
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
On 12 March 2013 21:38, Josh Durgin wrote:
> Yes, it works with true live migration just fine (even with caching). You
> can use "virsh migrate" or even do it through the virt-manager gui.
> Nova is just doing a check that doesn't make sense for volume-backed
> instances with live migration there
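For completeness, a "virsh migrate" invocation for a live migration looks roughly like this (domain name and destination host are examples):

    virsh migrate --live instance-0302 qemu+ssh://dest-host/system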
[...] instead of a file in the filesystem?
regards
--
Maciej Gałkiewicz
Shelly Cloud Sp. z o. o., Sysadmin
http://shellycloud.com/, mac...@shellycloud.com
KRS: 440358 REGON: 101504426
Is it possible to specify another
keyring? I would not like to have the admin keyring on every machine running
cinder-volume.
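A common approach (client and pool names below are examples, not necessarily what you have) is to create a dedicated cephx user with limited caps and point cinder-volume at that keyring instead of the admin one:

    ceph auth get-or-create client.cinder mon 'allow r' \
        osd 'allow rwx pool=volumes' -o /etc/ceph/ceph.client.cinder.keyring

    # ceph.conf on the cinder-volume host
    [client.cinder]
        keyring = /etc/ceph/ceph.client.cinder.keyring

    # cinder.conf
    rbd_user = cinder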
regards
Maciej Gałkiewicz