going wrong and how to fix it?
Kind regards,
Jelle de Jong
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
On 05/06/15 21:50, Jelle de Jong wrote:
> I am new to ceph and I am trying to build a cluster for testing.
>
> after running:
> ceph-deploy osd prepare --zap-disk ceph02:/dev/sda
>
> It seems the udev rules find the disks and try to activate them, but then
> they get stuck:
>
version 0.80.9 (b5a67f0e1d15385bc0d60a6da6e7fc810bde6047)
(from https://packages.debian.org/jessie/ceph)
I will try to purge everything and retry to make sure there is no "old"
data intervening.
Does anyone know what is going on?
Kind regards,
Jelle de Jong
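For anyone hitting the same hang: one possible way to inspect and manually
trigger activation is sketched below. The host name comes from the thread,
but the partition layout (data on /dev/sda1, journal on /dev/sda2) is an
assumption.
ceph-deploy disk list ceph02
ceph-deploy osd activate ceph02:/dev/sda1:/dev/sda2
# or directly on ceph02 itself (partition number is an assumption):
ceph-disk activate /dev/sda1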
On 08/06/15 09:58, Christian Balzer wrote:
>
> Hello,
>
://paste.debian.net/211955/
How do I install ceph on Debian Jessie (8.1)?
Kind regards,
Jelle de Jong
On 08/06/15 13:22, Jelle de Jong wrote:
> I could not get ceph to work with the ceph packages shipped with Debian
> Jessie: http://paste.debian.net/211771/
>
> So I tried to use apt-pinning to use the eu.ceph.com apt repository, but
> there are too many dependencies that are unresolvable
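One route that was commonly suggested instead of pinning eu.ceph.com was
jessie-backports; a rough sketch, assuming the backports repository carries
a usable ceph version:
# add the backports repository and install ceph from it
echo "deb http://httpredir.debian.org/debian jessie-backports main" > /etc/apt/sources.list.d/backports.list
apt-get update
apt-get -t jessie-backports install ceph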
er loss
of all nodes at the same time should not be possible (or has an extremely
low probability)
#4 how to benchmark the OSD (disk+ssd-journal) combination so I can
compare them (a rough sketch follows below).
I have some other benchmark questions, but I will send a separate mail
for them.
Kind regards,
Jel
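On question #4, a rough benchmarking sketch; the pool name "rbd", the 60
second runtime and osd.0 are assumptions, not taken from the thread:
# per-OSD raw write test (exercises journal + disk together)
ceph tell osd.0 bench
# pool-level baseline: write first, keep the objects, then read them back
rados bench -p rbd 60 write --no-cleanup
rados bench -p rbd 60 seq
rados -p rbd cleanup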
exist. create it before updating the crush map
failed: 'timeout 30 /usr/bin/ceph -c /etc/ceph/ceph.conf --name=osd.5
--keyring=/var/lib/ceph/osd/ceph-5/keyring osd crush create-or-move -- 5
0.91 host=ceph03 root=default'
Can somebody show me some examples of the right commands to re-add the osd?
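A rough sketch of the usual re-add sequence from the add-or-rm-osds
documentation, using the id, weight and host name from the error above;
the keyring path and init command are assumptions:
ceph osd crush add-bucket ceph03 host        # only if the host bucket is really missing
ceph osd crush move ceph03 root=default
ceph osd create                              # should hand back the free id 5
ceph auth add osd.5 osd 'allow *' mon 'allow profile osd' -i /var/lib/ceph/osd/ceph-5/keyring
ceph osd crush add osd.5 0.91 host=ceph03
/etc/init.d/ceph start osd.5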
On 19/06/15 16:07, Jelle de Jong wrote:
> Hello everybody,
>
> I'm doing some experiments and I am trying to re-add a removed osd. I
> removed it with the five commands below.
>
> http://ceph.com/docs/master/rados/operations/add-or-rm-osds/
>
> ceph osd out 5
How could I figure out in what pool the data was lost and in what rbd
volume (so which kvm guest lost data)?
Kind regards,
Jelle de Jong
On 13/07/15 15:40, Jelle de Jong wrote:
> I was testing a ceph cluster with osd_pool_default_size = 2 and while
> rebuilding the OSD on one ceph node a disk in another node started
> getting read errors and ceph kept taking the OSD down, and instead of me
> executing ceph osd set nodown
On 15/07/15 10:55, Jelle de Jong wrote:
> On 13/07/15 15:40, Jelle de Jong wrote:
>> I was testing a ceph cluster with osd_pool_default_size = 2 and while
>> rebuilding the OSD on one ceph node a disk in another node started
>> getting read errors and ceph kept taking the OSD
is it possible to make the scheduler (noop) changes persistent (cmd in
rc.local or special udev rules, examples?)
Kind regards,
Jelle de Jong
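On making the noop scheduler persistent, two common approaches, sketched
here; the device match, file name and disk name are assumptions:
# rc.local style, per disk:
echo noop > /sys/block/sda/queue/scheduler
# udev style, e.g. in /etc/udev/rules.d/60-scheduler.rules:
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/scheduler}="noop"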
On 23/06/15 12:41, Jan Schermer wrote:
> Those are interesting numbers - can you rerun the test with write cache
> enabled this time? I wonder how much your d
https://packages.debian.org/jessie-backports/ceph
Kind regards,
Jelle de Jong
Hello Jan,
I want to test your pincpus tool that I got from GitHub.
I have a dual-CPU (X5550) system with 4 cores per CPU, 16 threads in total.
I have four OSDs (4x WD1003FBYX) with an SSD (SHFS37A) journal.
I have three nodes like that.
I am not sure how to configure prz-pincpus.conf
# prz-pincpus.conf
https://paste.debian.net/pl
[ceph04][DEBUG ] /dev/sdg6 ceph journal, for /dev/sda1
[ceph04][DEBUG ] /dev/sdg7 ceph journal, for /dev/sdd1
Thank you in advance,
Kind regards,
Jelle de Jong
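Not the prz-pincpus.conf format itself (that is not shown here), but as a
plain illustration of the same idea, an OSD can be bound to one NUMA node
by hand; the node and OSD ids are placeholders:
numactl --cpunodebind=0 --membind=0 /usr/bin/ceph-osd -i 0 --cluster ceph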
X520-SR1
Kind regards,
Jelle de Jong
GNU/Linux Consultant
I want to try using NUMA to also run KVM guests besides the OSDs. I
should have enough cores and only have a few OSD processes.
Kind regards,
Jelle de Jong
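A sketch of keeping a KVM guest on the other NUMA node than the OSDs, using
plain libvirt tools; the domain name, node id and CPU numbers are
placeholders:
virsh numatune myguest --nodeset 1 --live --config
virsh vcpupin myguest 0 8
virsh vcpupin myguest 1 9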
# freeze the guest filesystem so the snapshot is consistent
virsh domfsfreeze $quest
sleep 2
virsh domblklist $quest
# snapshot the rbd volume, then thaw the guest again
rbd snap create --snap snapshot $blkdevice
virsh domfsthaw $quest
# stream the snapshot, compress it and store it on the backup server
rbd export $blkdevice@snapshot - | xz -1 | ssh -p 222 $user@$server "dd of=/$location/$blkdevice$snapshot-$daystamp.dd.disk.gz"
rbd snap rm $blkdevice@snapshot
Kind regards,
Jelle
Hello everybody,
I have a three node ceph cluster made of E3-1220v3, 24GB RAM, 6 HDD OSDs
with a 32GB Intel Optane NVMe journal, and 10Gb networking.
I wanted to move to bluestore because support for filestore is being
dropped; our cluster was working fine with filestore and we could take
complete nodes out
Hello everybody,
[fix confusing typo]
I have a three node ceph cluster made of E3-1220v3, 24GB RAM, 6 HDD OSDs
with a 32GB Intel Optane NVMe journal, and 10Gb networking.
I wanted to move to bluestore because support for filestore is being
dropped; our cluster was working fine with filestore and we could tak
Hello everybody,
I have a three node ceph cluster made of E3-1220v3, 24GB RAM, 6 HDD OSDs
with a 32GB Intel Optane NVMe journal, and 10Gb networking.
I wanted to move to bluestore because support for filestore is being
dropped; our cluster was working fine with filestore and we could take
complete nodes out
Hello everybody,
I have issues with very slow requests on a simple three node cluster here:
four WDC enterprise disks and an Intel Optane NVMe journal on identical
high-memory nodes, with 10Gb networking.
It was all working well with Ceph Hammer on Debian Wheezy, but I wanted
to upgrade to a suppor
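A few generic commands that are often used to track down where slow
requests come from; the OSD id is just an example:
ceph health detail
ceph daemon osd.0 dump_ops_in_flight
ceph daemon osd.0 dump_historic_ops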
Hi,
What are the full commands you used to set up this iptables config?
iptables --table raw --append OUTPUT --jump NOTRACK
iptables --table raw --append PREROUTING --jump NOTRACK
These alone do not create the same output; it needs some more.
Kind regards,
Jelle de Jong
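A guess at a fuller variant, limiting NOTRACK to the ceph networks; the
subnet is a placeholder, not taken from this thread:
iptables --table raw --append PREROUTING --source 192.168.0.0/24 --jump NOTRACK
iptables --table raw --append OUTPUT --destination 192.168.0.0/24 --jump NOTRACK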
On 2019-07-17 14:59, Kees
Max IOPS: 75
Min IOPS: 61
Average Latency(s): 0.227538
Stddev Latency(s): 0.0843661
Max latency(s): 0.48464
Min latency(s): 0.0467124
On 2020-01-06 20:44, Jelle de Jong wrote:
Hello everybody,
I have issues with very slow requests on a simple three node cluster here,