[ceph-users] FW: RGW performance issue

2015-11-12 Thread Максим Головков
Hello, We are building a cluster for archive storage. We plan to use Object Storage (RGW) only, no Block Devices and no File System. We don't require high speed, so we are using old, weak servers (4 cores, 3 GB RAM) with new, huge but slow HDDs (8TB, 5900rpm). We have 3 storage nodes with 24 OS

Re: [ceph-users] Building a Pb EC cluster for a cheaper cold storage

2015-11-12 Thread Mike Almateia
12-Nov-15 03:33, Mike Axford wrote: On 10 November 2015 at 10:29, Mike Almateia wrote: Hello. For our CCTV stream-storage project we decided to use a Ceph cluster with an EC pool. The input requirements are not scary: max. 15 Gbit/s input traffic from CCTV, 30 days of storage, 99% write operations, a cluste

Re: [ceph-users] raid0 and ceph?

2015-11-12 Thread Marius Vaitiekunas
>> We have write cache enabled on raid0. Everything is good while it works, but >> we had one strange incident with the cluster. It looks like an SSD disk failed and >> Linux didn't remove it from the system. All data disks which are using this >> SSD for journaling started to flap (up/down). Cluster perform

[ceph-users] mon osd downout subtree limit

2015-11-12 Thread Nick Fisk
Should 'mon osd downout subtree limit' in http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction/ not be 'mon osd down out subtree limit'? Note the space between 'down' and 'out' in the second example. Nick
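
For reference, a rough ceph.conf sketch of how the spaced option name would be set; the value 'rack' here is only an illustration of the kind of CRUSH subtree it limits:

    [mon]
        # don't automatically mark OSDs "out" when a whole subtree of this size or larger goes down
        mon osd down out subtree limit = rack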

[ceph-users] (no subject)

2015-11-12 Thread James Gallagher
Hi, I'm having issues activating my OSDs. I have provided the output of the fault. I can see that the error message says that the connection is timing out; however, I am struggling to understand why, as I have followed each stage within the quick start guide. For example, I can ping node1 (which

[ceph-users] can not create rbd image

2015-11-12 Thread min fang
Hi cephers, I tried to use the following command to create an image, but unfortunately the command hung for a long time until I broke it with Ctrl-Z. rbd -p hello create img-003 --size 512 So I checked the cluster status, which showed: cluster 0379cebd-b546-4954-b5d6-e13d08b7d2f1 health HEALT
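
A minimal set of checks for a hanging rbd command, assuming a working admin keyring on the client:

    ceph -s                 # overall cluster and monitor status
    ceph health detail      # expands any HEALTH_WARN / HEALTH_ERR conditions
    ceph osd tree           # are the OSDs up and in?
    rbd -p hello ls         # does a read-only operation against the same pool also hang?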

Re: [ceph-users] (no subject)

2015-11-12 Thread Robert LeBlanc
On the monitor node, does `netstat | grep 6789` show the monitor process listening? On the OSD node, do `telnet 192.168.43.11 6789` and `telnet 192.168.107.11 6789` work? It is not enough to just ping; that does not test whether you have properly opened up the fir
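
Roughly, the checks being suggested (the IPs are the ones from the thread; the iptables line is just one way to inspect the firewall):

    netstat -tlnp | grep 6789      # on the monitor: is ceph-mon listening on its port?
    telnet 192.168.43.11 6789      # from the OSD node: can the monitor port be reached?
    telnet 192.168.107.11 6789
    iptables -L -n | grep 6789     # if telnet fails, check whether the firewall allows the port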

[ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
Hello everyone! We have a recently installed Ceph cluster (v 0.94.5, Ubuntu 14.04), and today I noticed a lot of 'attempt to access beyond end of device' messages in the /var/log/syslog file. They are related to a mounted RBD image, and have the following format: Nov 12 21:06:44 ceph-client-01

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Jan Schermer
How did you create the filesystems and/or partitions on this RBD block device? The obvious causes would be: 1) you partitioned it and the partition on which you ran mkfs points, or pointed during mkfs, outside the block device size (this happens if you, for example, automate this and confuse sectors x cylinder
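
A quick way to check for the first cause, with the device and mount point names as placeholders:

    blockdev --getsize64 /dev/rbd5       # device size in bytes as the kernel sees it
    parted /dev/rbd5 unit B print        # partition layout in bytes, if the device was partitioned
    xfs_info /mnt/rbd5 | grep blocks     # filesystem size in blocks, for comparison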

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
Hello Jan! Thank you for your advice, first of all! The filesystem was created using mkfs.xfs, after creating the RBD block device and mapping it on the Ceph client. I didn't specify any parameters when I created the filesystem; I just ran mkfs.xfs on the image name. As you mentioned, the fil

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Jan Schermer
> On 12 Nov 2015, at 20:49, Bogdan SOLGA wrote: > > Hello Jan! > > Thank you for your advices, first of all! > > The filesystem was created using mkfs.xfs, after creating the RBD block > device and mapping it on the Ceph client. I haven't specified any parameters > when I created the filesys

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
By running rbd resize and then 'xfs_growfs -d' on the filesystem. Is there a better way to resize an RBD image and the filesystem? On Thu, Nov 12, 2015 at 10:35 PM, Jan Schermer wrote: > > On 12 Nov 2015, at 20:49, Bogdan SOLGA wrote: > >
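
That is the usual grow sequence; a sketch with the pool, image and mount point names as placeholders (on this rbd version --size is in MB):

    rbd resize --size 2097152 rbd/myimage    # grow the image to 2 TB
    blockdev --getsize64 /dev/rbd14          # confirm the kernel sees the new size
    xfs_growfs -d /mnt/myimage               # grow XFS to fill the device (works while mounted)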

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Jan Schermer
Can you post the output of: blockdev --getsz --getss --getbsz /dev/rbd5 and xfs_info /dev/rbd5 rbd resize can actually (?) shrink the image as well - is it possible that the device was actually larger and you shrunk it? Jan > On 12 Nov 2015, at 21:46, Bogdan SOLGA wrote: > > By running rbd r

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Bogdan SOLGA
Unfortunately I can no longer execute those commands for that rbd5, as I had to delete it; I couldn't 'resurrect' it, at least not in a decent time. Here is the output for another image, which is 2 TB in size: ceph-admin@ceph-client-01:~$ sudo blockdev --getsz --getss --getbsz /dev/rbd14

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Jan Schermer
xfs_growfs "autodetects" the block device size. You can force re-read of the block device to refresh this info but might not do anything at all. There are situations when block device size will not reflect reality - for example you can't (or at least couldn't) resize partition that is in use (m

Re: [ceph-users] RBD - 'attempt to access beyond end of device'

2015-11-12 Thread Jan Schermer
Apologies, it seems that to shrink the device a parameter --allow-shrink must be used. > On 12 Nov 2015, at 22:49, Jan Schermer wrote: > > xfs_growfs "autodetects" the block device size. You can force re-read of the > block device to refresh this info but might not do anything at all. > > The
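
So a shrink has to be explicit, roughly like this (destructive to anything past the new size, and XFS itself cannot be shrunk):

    rbd resize --allow-shrink --size 1048576 rbd/myimage    # shrink the placeholder image to 1 TB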

[ceph-users] ms crc header: seeking info?

2015-11-12 Thread Artie Ziff
Greetings Ceph Users everywhere! I was hoping to locate an entry for this Ceph configuration setting: ms_crc_header Would it be here: http://docs.ceph.com/docs/master/rados/configuration/ms-ref/ Or perhaps it is deprecated? I have searched Google but I am not satisfied. ;) Does the "ms crc header

[ceph-users] rbd create => seg fault

2015-11-12 Thread Artie Ziff
When I run `rbd create`, it seg faults. It worked on previous pulls/builds; I would need to regress/rebuild to provide version info for what last worked. Ubuntu 14.04.3 LTS, Linux ceph-mon-node 3.13.0-65-generic #106-Ubuntu SMP / x86_64 x86_64 x86_64 GNU/Linux. configure --prefix=/usr/local --sysconfdir=
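
For a reproducible segfault in a local build, a backtrace is usually the most useful next step; a sketch, assuming the binaries were built with debug symbols:

    gdb -batch -ex run -ex bt --args rbd -p hello create img-003 --size 512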

Re: [ceph-users] ms crc header: seeking info?

2015-11-12 Thread Haomai Wang
On Fri, Nov 13, 2015 at 8:31 AM, Artie Ziff wrote: > Greetings Ceph Users everywhere! > > I was hoping to locate an entry for this Ceph configuration setting: > ms_crc_header > Would it be here: > http://docs.ceph.com/docs/master/rados/configuration/ms-ref/ > Or perhaps it is deprecated? > I have

Re: [ceph-users] ms crc header: seeking info?

2015-11-12 Thread Artie Ziff
> I think we expect to enable header crc at least. If you want to disable it, you need to have all OSDs/clients disable it. Thanks so much for the feedback. It helps, as I can see where I should add clarifications. In my case, I saw it fail with one single monitor node when bringing up
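
If disabling it is really intended, it presumably has to be set the same way in every daemon's and client's ceph.conf, along these lines:

    [global]
        ms crc header = false    # per the advice above, set this consistently on monitors, OSDs and clients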

Re: [ceph-users] rbd create => seg fault

2015-11-12 Thread Jason Dillaman
I've seen this issue before when you (somehow) mix-and-match librbd, librados, and rbd builds on the same machine. The packaging should prevent you from mixing versions, but perhaps somehow you have different package versions installed. -- Jason Dillaman - Original Message - > F
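
One way to look for that kind of mismatch on a .deb system (package names assume a stock install; a local 'make install' into /usr/local would not show up in dpkg):

    dpkg -l | egrep 'librbd|librados|ceph-common'    # installed package versions
    rbd --version                                    # version of the CLI actually on the PATH
    ldd $(which rbd) | egrep 'librbd|librados'       # which shared libraries that binary resolves to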

Re: [ceph-users] rbd create => seg fault

2015-11-12 Thread Mark Kirkwood
When you do: $ rbd create You are using the kernel (i.e. 3.13) code for rbd, and this is likely much older than the code you just built for the rest of Ceph. You *might* have better luck installing the vivid kernel (3.19) on your trusty system and trying again. Having said that - seg fault i

Re: [ceph-users] FW: RGW performance issue

2015-11-12 Thread Pavan Rallabhandi
If you are on >=hammer builds, you might want to consider the option of using 'rgw_num_rados_handles', which opens up more handles to the cluster from RGW. This would help in scenarios where you have enough OSDs to drive the cluster bandwidth, which I guess is the case for you. Than
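
A sketch of where that option would go; the section name depends on how the gateway instance is named in your deployment:

    [client.radosgw.gateway]
        rgw num rados handles = 8    # default is 1; more handles let RGW spread I/O across the cluster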

[ceph-users] SL6/Centos6 rebuild question

2015-11-12 Thread Goncalo Borges
Dear Ceph Gurus... I have tried to rebuild Ceph (9.2.0) on CentOS 6 with GCC 4.8 using the SRPM for CentOS 7. I could easily start rebuilding Ceph after solving some dependency issues. However, it fails right at the end with systemd-related messages: # rpmbuild --rebuild ceph-9.2.0-0.el7

[ceph-users] about PG_Number

2015-11-12 Thread wah peng
Hello, what's the disadvantage if the PG number is set too large or too small relative to the OSD number? Thanks.
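
The rule of thumb from the docs of that era, as a rough example: aim for on the order of 100 PGs per OSD, divided by the replica count and rounded up to a power of two.

    # example: 24 OSDs, replica size 3, one main pool
    #   (24 * 100) / 3 = 800  ->  next power of two = 1024
    ceph osd pool create mypool 1024 1024    # pg_num and pgp_num for a placeholder pool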