[ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-05 Thread Jelle de Jong
going wrong and how to fix it? Kind regards, Jelle de Jong ___ ceph-users mailing list ceph-users@lists.ceph.com http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

Re: [ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-08 Thread Jelle de Jong
On 05/06/15 21:50, Jelle de Jong wrote: > I am new to ceph and I am trying to build a cluster for testing. > > after running: > ceph-deploy osd prepare --zap-disk ceph02:/dev/sda > > It seems udev rules find the disk and try to activate them, but then > gets stuck:
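When activation hangs like this, a common first step (sketched here for the pre-ceph-volume `ceph-disk` tooling this thread uses; /dev/sda1 is an example device) is to run the activation step by hand and watch the OSD log:

```shell
# Hedged sketch for the old ceph-disk tooling (ceph 0.80.x era);
# adjust the partition name to your OSD data partition.
ceph-disk list                        # show how ceph-disk classifies each partition
ceph-disk activate /dev/sda1          # run the step the udev rule would trigger
tail -f /var/log/ceph/ceph-osd.*.log  # watch for where activation blocks
```

Running it in the foreground at least shows whether the hang is in partition probing, journal lookup, or the OSD daemon start.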

Re: [ceph-users] ceph-disk activate /dev/sda1 seem to get stuck?

2015-06-08 Thread Jelle de Jong
version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047) (from https://packages.debian.org/jessie/ceph) I will try to purge everything and retry to make sure there is no "old" data intervening. Does anyone know what is going on? Kind regards, Jelle de Jong On 08/06/15 09:58, Christian Balzer wrote: > > Hello, >

[ceph-users] how do i install ceph from apt on debian jessie?

2015-06-08 Thread Jelle de Jong
://paste.debian.net/211955/ How do I install ceph on Debian Jessie (8.1)? Kind regards, Jelle de Jong

Re: [ceph-users] how do i install ceph from apt on debian jessie?

2015-06-08 Thread Jelle de Jong
On 08/06/15 13:22, Jelle de Jong wrote: > I could not get ceph to work with the ceph packages shipped with debian > jessie: http://paste.debian.net/211771/ > > So I tried to use apt-pinning to use the eu.ceph.com apt repository, but > there are too many dependencies that are unreso
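For reference, apt pinning toward an external Ceph repository is normally done with a preferences file. This is a hypothetical sketch (the file name, origin string, and package glob are assumptions, not taken from the thread) of forcing the external repo to win:

```shell
# Hypothetical /etc/apt/preferences.d/ceph.pref; the origin must match
# the Origin field in the repository's Release file.
cat > /etc/apt/preferences.d/ceph.pref <<'EOF'
Package: ceph* librados* librbd* python-ceph*
Pin: origin "eu.ceph.com"
Pin-Priority: 1001
EOF
apt-get update
apt-cache policy ceph   # verify which candidate version now wins
```

A priority above 1000 allows downgrades, which matters when the distro already ships a different ceph version.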

[ceph-users] SSD test results with Plextor M6 Pro, HyperX Fury, Kingston V300, ADATA SP90

2015-06-18 Thread Jelle de Jong
er loss of all nodes at the same time should not be possible (or has an extremely low probability) #4 how to benchmark the OSD (disk+ssd-journal) combination so I can compare them. I got some other benchmark questions, but I will make a separate mail for them. Kind regards, Jel

[ceph-users] reversing the removal of an osd (re-adding osd)

2015-06-19 Thread Jelle de Jong
exist. create it before updating the crush map failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.5 --keyring=/var/lib/ceph/osd/ceph-5/keyring osd crush create-or-move -- 5 0.91 host=ceph03 root=default' Can somebody show me some examples of the right commands to re

Re: [ceph-users] reversing the removal of an osd (re-adding osd)

2015-06-19 Thread Jelle de Jong
On 19/06/15 16:07, Jelle de Jong wrote: > Hello everybody, > > I'm doing some experiments and I am trying to re-add a removed osd. I > removed it with the below five commands. > > http://ceph.com/docs/master/rados/operations/add-or-rm-osds/ > > ceph osd out 5
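The failing `crush create-or-move` in the quoted error suggests osd.5 no longer exists in the cluster map. A hedged sketch of re-adding it, assuming the data directory and keyring in /var/lib/ceph/osd/ceph-5 survived removal (weight and host are taken from the quoted error, the start command matches the sysvinit tooling of the 0.80.x era):

```shell
# Recreate the OSD id, re-register its key, put it back in CRUSH, start it.
ceph osd create                        # should hand back the lowest free id, here 5
ceph auth add osd.5 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd/ceph-5/keyring
ceph osd crush add osd.5 0.91 host=ceph03 root=default
service ceph start osd.5
```

Once the OSD boots and peers, `ceph osd tree` should show it back under host ceph03.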

[ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-13 Thread Jelle de Jong
How could I figure out in what pool the data was lost and in what rbd volume (so what kvm guest lost data). Kind regards, Jelle de Jong

Re: [ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-15 Thread Jelle de Jong
On 13/07/15 15:40, Jelle de Jong wrote: > I was testing a ceph cluster with osd_pool_default_size = 2 and while > rebuilding the OSD on one ceph node a disk in another node started > getting read errors and ceph kept taking the OSD down, and instead of me > executing ceph osd set nodo

Re: [ceph-users] how to recover from: 1 pgs down; 10 pgs incomplete; 10 pgs stuck inactive; 10 pgs stuck unclean

2015-07-22 Thread Jelle de Jong
On 15/07/15 10:55, Jelle de Jong wrote: > On 13/07/15 15:40, Jelle de Jong wrote: >> I was testing a ceph cluster with osd_pool_default_size = 2 and while >> rebuilding the OSD on one ceph node a disk in another node started >> getting read errors and ceph kept taking the OSD

Re: [ceph-users] SSD test results with Plextor M6 Pro, HyperX Fury, Kingston V300, ADATA SP90

2015-09-01 Thread Jelle de Jong
sible scheduler (noop) changes persistent (cmd in rc.local or special udev rules, examples?) Kind regards, Jelle de Jong On 23/06/15 12:41, Jan Schermer wrote: > Those are interesting numbers - can you rerun the test with write cache > enabled this time? I wonder how much your d
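On the question of making the noop scheduler persistent: a common approach is a udev rule keyed on the rotational flag, so it re-applies on every boot and hotplug. A minimal sketch (the rule file name is arbitrary; it matches non-rotational sd* devices only):

```shell
# Hypothetical /etc/udev/rules.d/60-ssd-scheduler.rules
cat > /etc/udev/rules.d/60-ssd-scheduler.rules <<'EOF'
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="noop"
EOF
udevadm control --reload-rules
udevadm trigger --subsystem-match=block
cat /sys/block/sdb/queue/scheduler   # example check; [noop] should be bracketed for SSDs
```

Unlike a line in rc.local, the udev rule also covers disks that appear after boot.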

Re: [ceph-users] [sepia] debian jessie repository ?

2015-09-05 Thread Jelle de Jong
ian.org/jessie-backports/ceph Kind regards, Jelle de Jong

Re: [ceph-users] How to use cgroup to bind ceph-osd to a specific cpu core?

2015-09-10 Thread Jelle de Jong
Hello Jan, I want to test your pincpus I got from github. I have a 2x CPU (X5550) system with 4 cores / 16 threads. I have four OSDs (4x WD1003FBYX) with SSD (SHFS37A) journal. I got three nodes like that. I am not sure how to configure prz-pincpus.conf # prz-pincpus.conf https://paste.debian.net/pl
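Independent of the pincpus configuration itself, pinning all ceph-osd processes with a raw cgroup-v1 cpuset looks roughly like this (CPU and memory node numbers are examples for one NUMA node of a dual-socket X5550 box, not values from the thread):

```shell
# Hedged sketch: confine every ceph-osd process to CPUs 0-3 on NUMA node 0 (cgroup v1).
mkdir -p /sys/fs/cgroup/cpuset/ceph-osd
echo 0-3 > /sys/fs/cgroup/cpuset/ceph-osd/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/ceph-osd/cpuset.mems
for pid in $(pidof ceph-osd); do
    echo "$pid" > /sys/fs/cgroup/cpuset/ceph-osd/tasks   # move each daemon into the cpuset
done
```

The remaining cores are then free for other workloads such as KVM guests; tools like pincpus automate essentially this bookkeeping.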

[ceph-users] how to replace journal ssd in one node ceph-deploy setup

2017-11-23 Thread Jelle de Jong
] /dev/sdg6 ceph journal, for /dev/sda1 [ceph04][DEBUG ] /dev/sdg7 ceph journal, for /dev/sdd1 Thank you in advance, Kind regards, Jelle de Jong

[ceph-users] 10Gbit switch advice for small ceph cluster upgrade

2016-10-27 Thread Jelle de Jong
X520-SR1 Kind regards, Jelle de Jong GNU/Linux Consultant

[ceph-users] Intel SSD DC P3520 PCIe for OSD 1480 TBW good idea?

2018-06-25 Thread Jelle de Jong
I want to try using NUMA to also run KVM guests besides the OSD. I should have enough cores and only have a few osd processes. Kind regards, Jelle de Jong

[ceph-users] virsh, virt-filesystems, guestmount, virt-install not working well with ceph rbd yet?

2016-02-01 Thread Jelle de Jong
ze $quest sleep 2 virsh domblklist $quest rbd snap create --snap snapshot $blkdevice virsh domfsthaw $quest rbd export $blkdevice@snapshot - | xz -1 | ssh -p 222 $user@$server "dd of=/$location/$blkdevice$snapshot-$daystamp.dd.disk.gz" rbd snap rm $blkdevice@snapshot Kind regards, Jelle
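Pieced together, the quoted backup flow appears to be freeze, snapshot, thaw, then export from the snapshot. A hedged reconstruction (variable names follow the snippet; the freeze call is an assumption, since only "ze $quest" survives the truncation, and the export target here is a local file rather than the snippet's ssh pipe):

```shell
# Hedged reconstruction of the quoted rbd backup flow; values are examples.
quest=myguest           # libvirt domain name (example)
blkdevice=rbd/myimage   # pool/image backing the guest (example)

virsh domfsfreeze "$quest"     # quiesce guest filesystems (assumed from the truncated "ze $quest")
rbd snap create --snap snapshot "$blkdevice"
virsh domfsthaw "$quest"       # thaw as soon as the snapshot exists, before the slow export
rbd export "$blkdevice"@snapshot - | xz -1 > "backup-$(date +%F).dd.disk.xz"
rbd snap rm "$blkdevice"@snapshot
```

Keeping the freeze window to just the snapshot creation is the point of the ordering: the export can take minutes while the guest keeps running.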

[ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-06 Thread Jelle de Jong
Hello everybody, I got a three node ceph cluster made of E3-1220v3, 24GB ram, 6 hdd osd's with 32GB Intel Optane NVMe journal, 10GB networking. I wanted to move to bluestore due to dropping support of filestore, our cluster was working fine with filestore and we could take complete nodes out

[ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-06 Thread Jelle de Jong
Hello everybody, [fix confusing typo] I got a three node ceph cluster made of E3-1220v3, 24GB ram, 6 hdd osd's with 32GB Intel Optane NVMe journal, 10GB networking. I wanted to move to bluestore due to dropping support of filestore, our cluster was working fine with filestore and we could tak

[ceph-users] help! pg inactive and slow requests after filestore to bluestore migration, version 12.2.12

2019-12-12 Thread Jelle de Jong
Hello everybody, I got a three node ceph cluster made of E3-1220v3, 24GB ram, 6 hdd osd's with 32GB Intel Optane NVMe journal, 10GB networking. I wanted to move to bluestore due to dropping support of filestore, our cluster was working fine with filestore and we could take complete nodes out
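For context, the documented per-OSD filestore-to-bluestore path on Luminous (12.2.x, the version in the subject) looks roughly like this; the id and device are examples, not values from the thread:

```shell
# Hedged sketch of converting one OSD to bluestore with ceph-volume (Luminous tooling).
ID=3; DEV=/dev/sdb                  # example OSD id and backing device
ceph osd out $ID
while ! ceph osd safe-to-destroy osd.$ID; do sleep 60; done   # wait until data is re-replicated
systemctl stop ceph-osd@$ID
ceph-volume lvm zap $DEV
ceph osd destroy $ID --yes-i-really-mean-it
ceph-volume lvm create --bluestore --data $DEV --osd-id $ID   # recreate under the same id
```

Skipping the `safe-to-destroy` wait, or converting more OSDs at once than the redundancy allows, is a common way to end up with inactive PGs like those in the subject.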

[ceph-users] slow request and unresponsive kvm guests after upgrading ceph cluster and os, please help debugging

2020-01-06 Thread Jelle de Jong
Hello everybody, I have issues with very slow requests a simple tree node cluster here, four WDC enterprise disks and Intel Optane NVMe journal on identical high memory nodes, with 10GB networking. It was working all good with Ceph Hammer on Debian Wheezy, but I wanted to upgrade to a suppor

Re: [ceph-users] Random slow requests without any load

2020-01-06 Thread Jelle de Jong
Hi, What are the full commands you used to setup this iptables config? iptables --table raw --append OUTPUT --jump NOTRACK iptables --table raw --append PREROUTING --jump NOTRACK Does not create the same output, it needs some more. Kind regards, Jelle de Jong On 2019-07-17 14:59, Kees
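A fuller hedged sketch of what such a conntrack bypass usually looks like for Ceph (the default ports 6789 and 6800-7300 are an assumption here; the original poster's exact rules are not quoted in this snippet):

```shell
# Exempt Ceph traffic from connection tracking in the raw table, both directions.
iptables -t raw -A PREROUTING -p tcp -m multiport --dports 6789,6800:7300 -j NOTRACK
iptables -t raw -A OUTPUT     -p tcp -m multiport --sports 6789,6800:7300 -j NOTRACK
# Untracked packets never match conntrack states, so accept them explicitly.
iptables -A INPUT -p tcp -m multiport --dports 6789,6800:7300 -j ACCEPT
```

Without the explicit ACCEPT, a ruleset that relies on `-m state --state ESTABLISHED` will drop the now-untracked Ceph packets, which can itself look like random slow requests.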

Re: [ceph-users] slow request and unresponsive kvm guests after upgrading ceph cluster and os, please help debugging

2020-01-07 Thread Jelle de Jong
75 Min IOPS: 61 Average Latency(s): 0.227538 Stddev Latency(s): 0.0843661 Max latency(s): 0.48464 Min latency(s): 0.0467124 On 2020-01-06 20:44, Jelle de Jong wrote: Hello everybody, I have issues with very slow requests a simple tree node cluster here,
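Output of this shape (IOPS plus latency min/max/stddev) typically comes from rados bench. A hedged example of producing comparable numbers (the pool name is an assumption):

```shell
# Write for 60 s and keep the objects, read them back sequentially, then clean up.
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq
rados -p rbd cleanup
```

Comparing these numbers before and after an upgrade helps separate cluster-side regressions from guest-side (librbd/KVM) problems.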