[ceph-users] osd_recovery_delay_start ignored in Hammer?

2016-01-16 Thread Max A. Krasilnikov
Hello!

In my cluster, running Hammer 0.94.5-0ubuntu0.15.04.1~cloud0, recovery starts
immediately when an OSD starts up. I have changed osd_recovery_delay_start to
60 seconds, but the setting seems to be ignored during OSD bootup.

root@storage001:~# ceph -n osd.9 --show-config |grep osd_recovery_delay_start
osd_recovery_delay_start = 60

I would like to delay recovery because it increases load on the cluster,
leading to slow requests at startup. One to two minutes after the OSD starts,
the slow requests disappear and everything works fine.
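
A rough sketch of the workaround I have in mind, holding recovery back by hand
around the restart (the norecover/nobackfill flags and injectargs are standard;
whether the delay option itself is honoured at boot is exactly what I am unsure
about):

ceph osd set norecover
ceph osd set nobackfill
# (re)start the OSD as usual for your init system, give it a minute to peer
ceph osd unset nobackfill
ceph osd unset norecover

# and/or set the option persistently in ceph.conf on the OSD hosts:
#   [osd]
#       osd recovery delay start = 60
# or inject it into the running OSDs:
ceph tell osd.* injectargs '--osd_recovery_delay_start 60'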

-- 
WBR, Max A. Krasilnikov
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] problem deploy ceph

2016-01-16 Thread jiangdahui
[davy@localhost my-cluster]$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/davy/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.25): /usr/bin/ceph-deploy new node1
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: localhost.localdomain 
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[ceph_deploy][ERROR ] RuntimeError: connecting to host: node1 resulted in errors: AssertionError




why??
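
If I understand the output, the step that fails is the passwordless-SSH check,
so I am going to rule out SSH and name-resolution problems first, roughly like
this (user and host names taken from the output above):

# run as the deploy user ('davy' here)
ssh -CT -o BatchMode=yes node1 true; echo $?
# if that prompts for a password or fails, set up key-based auth:
ssh-keygen -t rsa          # accept the defaults if no key exists yet
ssh-copy-id davy@node1
# and check that 'node1' resolves to the intended address on this host:
getent hosts node1
___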
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


[ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-16 Thread David
Hi!

We’re planning our third Ceph cluster and have been trying to figure out how
to maximize IOPS on this one.

Our needs:
* Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM servers)
* Pool for storage of many small files, rbd (probably dovecot maildir and dovecot index etc)

So I’ve been reading up on:

https://communities.intel.com/community/itpeernetwork/blog/2015/11/20/the-future-ssd-is-here-pcienvme-boosts-ceph-performance

and ceph-users from October 2015:

http://www.spinics.net/lists/ceph-users/msg22494.html

We’re planning something like 5 OSD servers, with:

* 4x 1.2TB Intel S3510
* 8x 4TB HDD
* 2x Intel P3700 Series HHHL PCIe 400GB (one for SSD pool journal and one for HDD pool journal)
* 2x 80GB Intel S3510 raid1 for system
* 256GB RAM
* 2x 8 core CPU Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz or better

This cluster will probably run Hammer LTS unless there are huge improvements
in Infernalis when dealing with 4k IOPS.

The first link above hints at awesome performance. The second one, from the
list, not so much yet...

Is anyone running Hammer or Infernalis with a setup like this?
Is it a sane setup?
Will we become CPU constrained or can we just throw more RAM at it? :D
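
For reference, the rough shape we have in mind for the MySQL pool is something
like this (pool name, PG count and sizes are placeholders, and the SSD/HDD
split would still need its own CRUSH rules):

ceph osd pool create mysql-ssd 512 512                     # placeholder PG count
rbd create mysql-ssd/db01 --size 204800 --image-format 2   # size in MB
# on a KVM host, either attach via libvirt/librbd or map with krbd:
rbd map mysql-ssd/db01
mkfs.xfs /dev/rbd/mysql-ssd/db01
mount /dev/rbd/mysql-ssd/db01 /var/lib/mysql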

Kind Regards,
David Majchrzak
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com


Re: [ceph-users] Again - state of Ceph NVMe and SSDs

2016-01-16 Thread Wido den Hollander
On 01/16/2016 07:06 PM, David wrote:
> Hi!
> 
> We’re planning our third Ceph cluster and have been trying to figure out
> how to maximize IOPS on this one.
> 
> Our needs:
> * Pool for MySQL, rbd (mounted as /var/lib/mysql or equivalent on KVM
> servers)
> * Pool for storage of many small files, rbd (probably dovecot maildir
> and dovecot index etc)
> 

Not completely NVMe related, but in this case, make sure you use
multiple disks.

For MySQL for example:

- Root disk for OS
- Disk for /var/lib/mysql (data)
- Disk for /var/log/mysql (binary log)
- Maybe even an InnoDB logfile disk

With RBD you gain more performance by sending I/O into the cluster in
parallel. So whenever you can, do so!
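
For example (image names and sizes are made up, sizes are in MB, and you would
add -p <pool> for whichever pool these should land in):

rbd create mysql-data       --size 204800 --image-format 2
rbd create mysql-binlog     --size 51200  --image-format 2
rbd create mysql-innodb-log --size 20480  --image-format 2
# attach or map each image separately and mount them as, e.g.:
#   mysql-data       -> /var/lib/mysql
#   mysql-binlog     -> /var/log/mysql
#   mysql-innodb-log -> a dedicated InnoDB log directory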

Regarding small files, it might be interesting to play with the stripe
count and stripe size there. By default these are 1 and 4MB, but maybe 16
and 256k would work better here.
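
For example, when creating the image (stripe-unit is in bytes here, and
262144 * 16 matches the default 4MB object size; as far as I know the kernel
RBD client only supports default striping, so this is mainly for librbd/QEMU):

rbd create small-files --size 102400 --image-format 2 \
    --stripe-unit 262144 --stripe-count 16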

With Dovecot as well, use one RBD disk for the indexes and a different one
for the Maildir itself.

Ceph excels at parallel performance. That is what you want to aim for.

> So I’ve been reading up on:
> 
> https://communities.intel.com/community/itpeernetwork/blog/2015/11/20/the-future-ssd-is-here-pcienvme-boosts-ceph-performance
> 
> and ceph-users from october 2015:
> 
> http://www.spinics.net/lists/ceph-users/msg22494.html
> 
> We’re planning something like 5 OSD servers, with:
> 
> * 4x 1.2TB Intel S3510
> * 8x 4TB HDD
> * 2x Intel P3700 Series HHHL PCIe 400GB (one for SSD pool journal and
> one for HDD pool journal)
> * 2x 80GB Intel S3510 raid1 for system
> * 256GB RAM
> * 2x 8 core CPU Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz or better
> 
> This cluster will probably run Hammer LTS unless there are huge
> improvements in Infernalis when dealing with 4k IOPS.
> 
> The first link above hints at awesome performance. The second one, from
> the list, not so much yet...
> 
> Is anyone running Hammer or Infernalis with a setup like this?
> Is it a sane setup?
> Will we become CPU constrained or can we just throw more RAM at it? :D
> 
> Kind Regards,
> David Majchrzak
> 
> 
> ___
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> 


-- 
Wido den Hollander
42on B.V.
Ceph trainer and consultant

Phone: +31 (0)20 700 9902
Skype: contact42on
___
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com