Re: [ceph-users] RBD clone for OpenStack Nova ephemeral volumes

2014-05-28 Thread Jens-Christian Fischer
We are currently starting to set up a new Icehouse/Ceph based cluster and will help to get this patch in shape as well. I am currently collecting the information needed to allow us to patch Nova, and I have this: https://github.com/angdraug/nova/tree/rbd-ephemeral-clone-stable-icehouse on my …
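For anyone who wants to try that branch, a minimal checkout sketch; how the patched tree gets rolled out (pip install, deb/rpm rebuild, ...) is deployment-specific and omitted:
--- cut ---
# Fetch only the Icehouse branch referenced above
git clone -b rbd-ephemeral-clone-stable-icehouse https://github.com/angdraug/nova.git
cd nova && git log --oneline -5   # quick sanity check of the patch set
--- cut ---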

[ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-21 Thread Jens-Christian Fischer
rbd_flatten_volume_from_snapshot=False
rbd_user=cinder
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_secret_uuid=1234-5678-ABCD-…-DEF
rbd_max_clone_depth=5
volume_driver=cinder.volume.drivers.rbd.RBDDriver
--- cut ---
any ideas? cheers Jens-Christian
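For context on the capacity question, a quick check of what the cluster itself reports; the RBD driver of that era derived its free/total figures from cluster-wide raw stats, which is one common explanation for a surprising free size. The pool name volumes below is an assumption, not taken from the config above:
--- cut ---
# Cluster-wide raw space vs. per-pool usage
ceph df
rados df -p volumes
--- cut ---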

Re: [ceph-users] Ceph Cinder Capabilities reports wrong free size

2014-08-22 Thread Jens-Christian Fischer
volume_driver=cinder.volume.drivers.rbd.RBDDriver cheers jc On 21.08.2014, at 17:55, Gregory …

Re: [ceph-users] NFS interaction with RBD

2015-05-23 Thread Jens-Christian Fischer
We have removed the 100TB volume from the NFS server (we used the downtime to migrate the last data off of it to one of the smaller volumes). The NFS server has been running for 30 minutes now (with close to no load) but we don't really expect it to make it until tomorrow. send help Jens-Christian

Re: [ceph-users] NFS interaction with RBD

2015-05-26 Thread Jens-Christian Fischer
… every mounted volume - exceeding the 1024 FD limit. So no deep scrubbing etc, but simply too many connections… cheers jc
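A minimal way to confirm and lift the descriptor limit, assuming the NFS/qemu process is the one hitting it; the PID and the new limit are placeholders:
--- cut ---
# How many descriptors does the suspect process hold?
ls /proc/<pid>/fd | wc -l
# Current soft limit, and a temporary raise for processes started from this shell;
# a permanent change belongs in /etc/security/limits.conf or the service's own config.
ulimit -n
ulimit -n 4096
--- cut ---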

Re: [ceph-users] NFS interaction with RBD

2015-05-27 Thread Jens-Christian Fischer
… binaries (x86) ii  qemu-utils  2.0.0+dfsg-2ubuntu1.11  amd64  QEMU utilities cheers jc

Re: [ceph-users] XFS or btrfs for production systems with modern Kernel?

2013-08-01 Thread Jens-Christian Fischer
… Ubuntu 12.10 servers, but experienced btrfs related kernel panics and have migrated the offending servers to 13.04. Yesterday one of these machines locked up with btrfs issues (that weren't easily diagnosed). I have now started migrating our OSDs to xfs … (taking them out, making new filesystems …

[ceph-users] one pg stuck with 2 unfound pieces

2013-08-13 Thread Jens-Christian Fischer
"status": "querying"}, { "osd": 50, "status": "already probed"}], "recovery_progress": { "backfill_target": 50, "waiting_on_backfill": 0,

[ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
… service ceph -a start osd.$OSD --- cut --- cheers Jens-Christian

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
> > Why wait for the data to migrate away? Normally you have replicas of the > whole osd data, so you can simply stop the osd, reformat the disk and restart > it again. It'll join the cluster and automatically get all data it's missing. > Of course the risk of data loss is a bit higher during that…

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-02 Thread Jens-Christian Fischer
Hi Martin > On 2013-09-02 19:37, Jens-Christian Fischer wrote: >> we have a Ceph cluster with 64 OSD drives in 10 servers. We originally >> formatted the OSDs with btrfs but have had numerous problems (server kernel >> panics) that we could point back to btrfs. We are the…

[ceph-users] adding SSD only pool to existing ceph cluster

2013-09-02 Thread Jens-Christian Fischer
… SSD drives, create a separate pool with them, and not upset the current pools? (We don't want the "regular/existing" data to migrate towards the SSD pool, and no disruption of service.) thanks Jens-Christian

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-03 Thread Jens-Christian Fischer
> Why wait for the data to migrate away? Normally you have replicas of the > whole osd data, so you can simply stop the osd, reformat the disk and restart > it again. It'll join the cluster and automatically get all data it's missing. > Of course the risk of data loss is a bit higher during that…

Re: [ceph-users] Best way to reformat OSD drives?

2013-09-03 Thread Jens-Christian Fischer
On 03.09.2013, at 16:27, Sage Weil wrote: >> ceph osd create # this should give you back the same osd number as the one >> you just removed > > OSD=`ceph osd create` # may or may not be the same osd id good point - so far it has been good to us! > >> >> umount ${PART}1 >> parted $PART rm 1
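Piecing the quoted fragments together, a rough sketch of the per-OSD reformat procedure being discussed; the OSD id, device, mount path and mkfs options are assumptions, and it follows the 2013-era sysvinit tooling rather than ceph-disk:
--- cut ---
#!/bin/bash
# Rough sketch only: one OSD at a time, relying on the remaining replicas.
OSD=12            # placeholder OSD id
PART=/dev/sdc     # placeholder data disk

service ceph stop osd.$OSD
ceph osd rm $OSD
ceph auth del osd.$OSD
OSD=$(ceph osd create)        # may or may not hand back the same id

umount ${PART}1
parted $PART rm 1
parted -s $PART mkpart primary xfs 0% 100%
mkfs.xfs -f ${PART}1
mkdir -p /var/lib/ceph/osd/ceph-$OSD
mount ${PART}1 /var/lib/ceph/osd/ceph-$OSD

ceph-osd -i $OSD --mkfs --mkkey
ceph auth add osd.$OSD osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-$OSD/keyring
# If the new id differs from the old one, the CRUSH position also needs
# to be (re)set for the new id, e.g. with "ceph osd crush set".
service ceph -a start osd.$OSD   # backfill repopulates the empty OSD
--- cut ---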

Re: [ceph-users] adding SSD only pool to existing ceph cluster

2013-09-04 Thread Jens-Christian Fischer
Hi Greg > If you saw your existing data migrate that means you changed its > hierarchy somehow. It sounds like maybe you reorganized your existing > nodes slightly, and that would certainly do it (although simply adding > single-node higher levels would not). It's also possible that you > introduced …

[ceph-users] Inconsistent view on mounted CephFS

2013-09-13 Thread Jens-Christian Fischer
… Sep 13 08:54 /mnt/instances/instance-076b/disk
-rw-r--r-- 1 libvirt-qemu kvm 336789504 Sep 13 08:54 /mnt/instances/instance-077d/disk
-rw-r--r-- 1 libvirt-qemu kvm 219152384 Sep 13 08:54 /mnt/instances/instance-0792/disk
--- cut ---

Re: [ceph-users] Inconsistent view on mounted CephFS

2013-09-13 Thread Jens-Christian Fischer
> > All servers mount the same filesystem. Needless to say, we are a bit > worried… > > The bug was introduced in the 3.10 kernel and will be fixed in the 3.12 kernel by commit > 590fb51f1c (vfs: call d_op->d_prune() before unhashing dentry). Sage may > backport the fix to the 3.11 and 3.10 kernels soon.

Re: [ceph-users] Inconsistent view on mounted CephFS

2013-09-13 Thread Jens-Christian Fischer
> Just out of curiosity. Why are you using cephfs instead of rbd? Two reasons: - we are still on Folsom - experience with "shared storage", as this is something our customers are asking for all the time. cheers jc

[ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
…), but now we use kernel 3.10 and, more recently, ceph-fuse to mount CephFS. Are we doing something wrong, or is this not supported by CephFS? cheers jc
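If the question is how to keep the holes while copying onto CephFS, a minimal sketch with standard tools (paths are placeholders); whether the CephFS client of that era preserves them end-to-end is exactly what the thread goes on to discuss:
--- cut ---
# cp can punch holes explicitly; rsync has an equivalent switch
cp --sparse=always /srv/images/big.img /mnt/cephfs/big.img
rsync --sparse /srv/images/big.img /mnt/cephfs/
--- cut ---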

Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
> For cephfs, the size reported by 'ls -s' is the same as the file size. See > http://ceph.com/docs/next/dev/differences-from-posix/ ah! So if I understand correctly, the files are indeed sparse on CephFS? thanks /jc

Re: [ceph-users] Sparse files copied to CephFS not sparse

2013-09-16 Thread Jens-Christian Fischer
> >> For cephfs, the size reported by 'ls -s' is the same as file size. see >> http://ceph.com/docs/next/dev/differences-from-posix/ > > ...but the files are still in fact stored sparsely. It's just hard to > tell. perfect - thanks! /jc
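A small illustration of the distinction made above, assuming a CephFS mount at /mnt/cephfs: the size CephFS reports tracks the file size, even though the underlying RADOS objects are stored sparsely:
--- cut ---
truncate -s 1G /mnt/cephfs/sparse.img   # create a file that is one big hole
ls -lsh /mnt/cephfs/sparse.img          # CephFS reports the full 1G here
du -h   /mnt/cephfs/sparse.img          # likewise ~1G, per differences-from-posix
--- cut ---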

Re: [ceph-users] one pg stuck with 2 unfound pieces

2013-09-23 Thread Jens-Christian Fischer
… unfound objects. This stuck pg seems to fill up our mons (they need to keep old data, right?), which makes starting a new mon a task of seemingly herculean proportions. Any ideas on how to proceed? thanks Jens-Christian

Re: [ceph-users] interested questions

2013-10-30 Thread Jens-Christian Fischer
… of implementing things, but it works reasonably well for testing purposes. We are planning/building our next cluster now (a production cluster) and plan to separate OSD/MON servers from OpenStack compute servers. cheers Jens-Christian

[ceph-users] Havana & RBD - a few problems

2013-11-07 Thread Jens-Christian Fischer
… could pipe in here… thanks Jens-Christian

Re: [ceph-users] Havana & RBD - a few problems

2013-11-08 Thread Jens-Christian Fischer
Hi Josh > Using libvirt_image_type=rbd to replace ephemeral disks is new with > Havana, and unfortunately some bug fixes did not make it into the > release. I've backported the current fixes on top of the stable/havana > branch here: > > https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd

Re: [ceph-users] Havana & RBD - a few problems

2013-11-08 Thread Jens-Christian Fischer
>> Using libvirt_image_type=rbd to replace ephemeral disks is new with >> Havana, and unfortunately some bug fixes did not make it into the >> release. I've backported the current fixes on top of the stable/havana >> branch here: >> >> https://github.com/jdurgin/nova/tree/havana-ephemeral-rbd > >

Re: [ceph-users] Havana & RBD - a few problems

2013-11-08 Thread Jens-Christian Fischer
… of the glance -> cinder RBD improvements) cheers jc

Re: [ceph-users] Ephemeral RBD with Havana and Dumpling

2013-11-14 Thread Jens-Christian Fischer
On 14.11.2013, at 13:18, Haomai Wang wrote: > Yes, we still need a pa…

Re: [ceph-users] Ephemeral RBD with Havana and Dumpling

2013-11-14 Thread Jens-Christian Fischer
> On Thu, Nov 14, 2013 at 9:12 PM, Jens-Christian Fischer > wrote: > We have migration working partially - it works through Horizon (to a random > host) and sometimes through the CLI. > > random host? Do you mean cold-migration? Live-migration should be specified > dest…
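For reference, the two CLI forms being contrasted, with placeholders for instance and host (Havana-era python-novaclient):
--- cut ---
nova migrate <instance>                       # cold migration, scheduler picks the host
nova live-migration <instance> <target-host>  # live migration to an explicit destination
--- cut ---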

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-21 Thread Jens-Christian Fischer
… between the volumes… I re-snapshotted the instance whose volume wouldn't boot, and made a volume out of it, and this instance booted nicely from the volume. weirder and weirder… /jc

[ceph-users] OpenStack, Boot from image (create volume) failed with volumes in rbd

2013-11-21 Thread Jens-Christian Fischer
… raw volumes to get the boot process working. Why is the volume created as a qcow2 volume? cheers jc
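The usual remedy at the time was to make sure images destined for RBD-backed volumes are uploaded to Glance as raw; a conversion sketch with placeholder filenames:
--- cut ---
qemu-img info ubuntu.qcow2                                # confirm the current format
qemu-img convert -f qcow2 -O raw ubuntu.qcow2 ubuntu.raw
glance image-create --name ubuntu-raw --disk-format raw \
    --container-format bare --file ubuntu.raw
--- cut ---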

[ceph-users] Openstack Havana, boot from volume fails

2013-11-21 Thread Jens-Christian Fischer
… shell will now be started. CONTROL-D will terminate this shell and reboot the system. root@box-web1:~# The console is stuck, I can't get to the rescue shell. I can "rbd map" the volume and mount it from a physical host - the filesystem etc. all is in good order. Any ideas? cheers jc
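For the record, the kind of out-of-band check mentioned above; pool, image name and mount point are placeholders, and the cinder keyring path is an assumption:
--- cut ---
rbd map volumes/volume-1234abcd --id cinder \
    --keyring /etc/ceph/ceph.client.cinder.keyring
mkdir -p /mnt/inspect
mount /dev/rbd/volumes/volume-1234abcd /mnt/inspect   # udev provides this symlink
# ... fsck / inspect the filesystem ...
umount /mnt/inspect
rbd unmap /dev/rbd/volumes/volume-1234abcd
--- cut ---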

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-25 Thread Jens-Christian Fischer
… virt/libvirt/imagebackend.py virt/libvirt/utils.py good luck :) cheers jc

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-25 Thread Jens-Christian Fischer
Hi Steffen, the virsh secret is defined on all compute hosts. Booting from a volume works (it's the "boot from image (create volume)" part that doesn't work). cheers jc
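A quick way to verify that on each compute host; the UUID is a placeholder and has to match the rbd_secret_uuid handed to Cinder/Nova:
--- cut ---
virsh secret-list
virsh secret-get-value 1234-5678-ABCD-DEF0   # should print the cinder client key
--- cut ---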

[ceph-users] Number of threads for osd processes

2013-11-26 Thread Jens-Christian Fischer
root@h2:/var/log/ceph# ceph osd pool get images pg_num
pg_num: 1000
root@h2:/var/log/ceph# ceph osd pool get volumes pg_num
pg_num: 128
That could possibly have been the day the number of threads started to rise. Feedback appreciated! thanks Jens-Christian

Re: [ceph-users] Number of threads for osd processes

2013-11-27 Thread Jens-Christian Fischer
> The largest group of threads is those from the network messenger — in > the current implementation it creates two threads per process the > daemon is communicating with. That's two threads for each OSD it > shares PGs with, and two threads for each client which is accessing > any data on that OSD
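A back-of-envelope reading of that rule, plus a way to check the live count; the peer and client numbers are illustrative only:
--- cut ---
# 2 threads per peer: e.g. 60 peer OSDs + 200 clients => 2*(60+200) = 520
# messenger threads, before any internal worker/op threads are counted.
echo $(( 2 * (60 + 200) ))
# Live thread count per ceph-osd process (nlwp = number of threads)
ps -o pid,nlwp,cmd -C ceph-osd
--- cut ---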

Re: [ceph-users] Openstack Havana, boot from volume fails

2013-11-27 Thread Jens-Christian Fischer
…om/6424/back-from-the-summit-cephopenstack-integration - it cleared a bunch of things for me cheers jc > > Thanks again! > Narendra > > From: Jens-Christian Fischer [mailto:jens-christian.fisc...@switch.ch] > Sent: Monday, November 25, 2013 8:19 AM > To: Trivedi, Narendra

Re: [ceph-users] how to Testing cinder and glance with CEPH

2013-11-27 Thread Jens-Christian Fischer
…? good luck jc On 27.11.2013, at 08:51, Karan Singh wrote: …
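A minimal smoke test along those lines, assuming the usual volumes/images pool names and a Havana-era CLI:
--- cut ---
cinder create --display-name cephtest 1   # 1 GB test volume
cinder list                               # wait for status "available"
rbd ls -p volumes                         # the backing image should appear here
glance image-list
rbd ls -p images                          # images uploaded to Glance land here
--- cut ---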

[ceph-users] aborted downloads from Radosgw when multiple clients access same object

2013-12-05 Thread Jens-Christian Fischer
… server1
 osdmap e6645: 24 osds: 24 up, 24 in
 pgmap v2541337: 7368 pgs: 7368 active+clean; 2602 GB data, 5213 GB used, 61822 GB / 67035 GB avail; 31013KB/s rd, 151KB/s wr, 34op/s
 mdsmap e1: 0/0/1 up
root@server1:/etc# ceph --version
ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e825…