[ceph-users] switch pool from replicated to erasure coded

2014-06-19 Thread Pavel V. Kaygorodov
Hi! Maybe I have missed something in the docs, but is there a way to switch a pool from replicated to erasure coded? Or do I have to create a new pool and somehow manually transfer the data from the old pool to the new one? Pavel.
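
There is no in-place conversion; the approach at the time was to create a new erasure-coded pool and copy the objects across, for example with rados cppool. A hedged sketch (pool and profile names are placeholders; note that objects needing omap, such as RBD headers, cannot live in an EC pool, so this does not work for every pool):

    ceph osd erasure-code-profile set myprofile k=3 m=3
    ceph osd pool create newpool 128 128 erasure myprofile
    rados cppool oldpool newpool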

[ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Pavel V. Kaygorodov
Hi! I want to make an erasure-coded pool with k=3 and m=3. Also, I want to distribute the data between two hosts, taking 3 OSDs from host1 and 3 from host2. I have created a ruleset: rule ruleset_3_3 { ruleset 0 type replicated min_size 6 max_size 6 step take host

Re: [ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Pavel V. Kaygorodov
This ruleset works well for replicated pools with size 6 (I have tested it on the data and metadata pools, which I cannot delete). Must an erasure pool with k=3 and m=3 have size 6? Pavel. > On 19/06/2014 18:17, Pavel V. Kaygorodov wrote: >> Hi! >> >> I want to make erasur

Re: [ceph-users] erasure pool & crush ruleset

2014-06-19 Thread Pavel V. Kaygorodov
> You need: > > type erasure > It works! Thanks a lot! Pavel. min_size 6 max_size 6 step take host1 step chooseleaf firstn 3 type osd step emit step take host2 step chooseleaf firstn 3 type osd step emit >
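
Putting the fragments of this thread together, the working rule appears to be the one below (reconstructed from the quoted pieces, so treat it as a sketch rather than a verified map):

    rule ruleset_3_3 {
        ruleset 0
        type erasure
        min_size 6
        max_size 6
        step take host1
        step chooseleaf firstn 3 type osd
        step emit
        step take host2
        step chooseleaf firstn 3 type osd
        step emit
    }

The pool is then created with a matching k=3, m=3 erasure-code profile and pointed at this ruleset.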

[ceph-users] Error 95: Operation not supported

2014-06-20 Thread Pavel V. Kaygorodov
Hi! I'm getting a strange error while trying to create an rbd image:

# rbd -p images create --size 10 test
rbd: create error: (95) Operation not supported
2014-06-20 18:28:39.537889 7f32af795780 -1 librbd: error adding image to directory: (95) Operation not supported

The "images" pool is erasure coded
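
At the time, RBD images could not live directly in an erasure-coded pool: librbd needs omap operations that EC pools do not support. The common workaround was to put a replicated cache tier in front of the EC pool. A hedged sketch (pool names are the ones from this thread, cache-pool sizing and limits left out):

    ceph osd pool create images-cache 128
    ceph osd tier add images images-cache
    ceph osd tier cache-mode images-cache writeback
    ceph osd tier set-overlay images images-cache

After the overlay is set, clients keep addressing the "images" pool and the replicated cache tier absorbs the omap/RBD metadata operations.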

[ceph-users] Error initializing cluster client: Error

2014-07-05 Thread Pavel V. Kaygorodov
Hi! I still have the same problem with "Error initializing cluster client: Error" on all monitor nodes:

root@bastet-mon2:~# ceph -w
Error initializing cluster client: Error
root@bastet-mon2:~# ceph --admin-daemon /var/run/ceph/ceph-mon.2.asok mon_status
{ "name": "2", "rank": 1, "state":

Re: [ceph-users] Error initializing cluster client: Error

2014-07-07 Thread Pavel V. Kaygorodov
0" and see if > it outputs more useful error logs. > -Greg > Software Engineer #42 @ http://inktank.com | http://ceph.com > > > On Sat, Jul 5, 2014 at 2:23 AM, Pavel V. Kaygorodov wrote: >> Hi! >> >> I still have the same problem with "Error i

Re: [ceph-users] v0.80.4 Firefly released

2014-07-16 Thread Pavel V. Kaygorodov
Hi! I'm trying to install ceph on Debian wheezy (from deb http://ceph.com/debian/ wheezy main) and getting the following error:

# apt-get update && apt-get dist-upgrade -y && apt-get install -y ceph
...
The following packages have unmet dependencies:
 ceph : Depends: ceph-common (>= 0.78-500) but
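
Dependency errors like this usually mean apt is mixing the ceph.com packages with Debian's own (older) ceph packages. One hedged workaround is to pin the ceph.com origin above the distribution packages, for example in /etc/apt/preferences.d/ceph.pref:

    Package: *
    Pin: origin ceph.com
    Pin-Priority: 1001

Then run apt-get update and retry; apt-cache policy ceph-common shows which repository actually wins.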

Re: [ceph-users] Question Blackout

2015-03-20 Thread Pavel V. Kaygorodov
Hi! We have experienced several blackouts on our small ceph cluster. The most annoying problem is time desync just after a blackout: the mons do not start working before the time is synced, and after the resync and a manual restart of the monitors, some of the pgs can get stuck in "inactive" or "peering" state for a significant p
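
One way to soften this is to force a one-shot time sync before the monitors are started at boot, so they never try to form a quorum with badly skewed clocks. A hedged sketch for a boot-time ordering (NTP server names are placeholders):

    # run before ceph-mon starts
    ntpd -gq || ntpdate -b 0.pool.ntp.org
    # then start the monitor
    service ceph start mon

ceph also has mon_clock_drift_allowed, but raising it only hides the warning; getting the clocks right before the mons start is the safer fix.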

Re: [ceph-users] ceph cluster on docker containers

2015-03-23 Thread Pavel V. Kaygorodov
Hi! I'm using a ceph cluster packed into a number of docker containers. There are two things which you need to know: 1. Ceph OSDs use FS attributes (xattrs), which may not be supported by the filesystem inside a docker container, so you need to mount an external directory inside the container to store the OSD data
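
A hedged sketch of the kind of container launch this implies (image name and host paths are hypothetical; the point is that the OSD data directory and /etc/ceph live on the host and are bind-mounted in; networking setup is left out here, since the original poster gives each daemon its own address):

    docker run -d --name ceph-osd-0 \
        -v /etc/ceph:/etc/ceph \
        -v /srv/ceph/osd.0:/var/lib/ceph/osd/ceph-0 \
        my-ceph-osd-image ceph-osd -i 0 -f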

[ceph-users] decrease pg number

2015-04-21 Thread Pavel V. Kaygorodov
Hi! I have updated my cluster to Hammer and got the warning "too many PGs per OSD (2240 > max 300)". I know that there is no way to decrease the number of placement groups, so I want to re-create my pools with a lower pg number, move all my data to them, delete the old pools and rename the new pools to the old names
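
The sequence being described looks roughly like the sketch below (pool name and pg count are placeholders). Note that rados cppool copies objects only; it does not preserve pool snapshots or pool IDs, which matters for RBD clones, as the later "parent snapshot missing" thread in this archive shows:

    ceph osd pool create images.new 256 256
    rados cppool images images.new
    ceph osd pool delete images images --yes-i-really-really-mean-it
    ceph osd pool rename images.new images

Clients should be stopped during the copy, since writes to the old pool after cppool starts will be lost.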

Re: [ceph-users] rados cppool

2015-04-23 Thread Pavel V. Kaygorodov
Hi! I have copied two of my pools recently, because the old ones had too many pgs. Both of them contain RBD images, with 1GB and ~30GB of data. Both pools were copied without errors, the RBD images are mountable and seem to be fine. The ceph version is 0.94.1. Pavel. > On 7 Apr 2015, at 18:29, Kapil Shar

[ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-12 Thread Pavel V. Kaygorodov
Hi! I have an RBD image (in pool "volumes"), made by openstack from a parent image (in pool "images"). Recently, I tried to decrease the number of PGs, to avoid the new Hammer warning. I copied the pool "images" to another pool, deleted the original pool and renamed the new pool to "images". Ceph allowed m
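
The likely reason this breaks: an RBD clone records its parent by pool ID and snapshot ID, not by name, so copying the pool and renaming the copy leaves the children pointing at a pool/snapshot that no longer exists. A hedged way to inspect what a clone thinks its parent is (image names are hypothetical):

    rbd info volumes/volume-1234
    # the "parent:" line shows the pool/image@snapshot the clone depends on
    rbd children images/base-image@snap   # lists clones of a given parent snapshot

If the parent data is still readable somewhere, flattening the clones (rbd flatten) before such a migration removes the dependency entirely.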

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-12 Thread Pavel V. Kaygorodov
to get the data out of them. > > Br, > Tuomas > > -Original Message----- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Pavel V. Kaygorodov > Sent: 12 May 2015 20:41 > To: ceph-users > Subject: [ceph-users] RBD images -- parent s

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-13 Thread Pavel V. Kaygorodov
on how to install development packages [1]. > > [1] > http://docs.ceph.com/docs/master/install/get-packages/#add-ceph-development > > -- > > Jason Dillaman > Red Hat > dilla...@redhat.com > http://www.redhat.com > > > - Original Message -

Re: [ceph-users] RBD images -- parent snapshot missing (help!)

2015-05-16 Thread Pavel V. Kaygorodov
images. > > Thanks > > Tuomas > > -Original Message- > From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of > Pavel V. Kaygorodov > Sent: 13 May 2015 18:24 > To: Jason Dillaman > Cc: ceph-users > Subject: Re: [ceph-users] RBD images

[ceph-users] clock skew detected

2015-06-10 Thread Pavel V. Kaygorodov
Hi! Immediately after a reboot of the mon.3 host its clock was unsynchronized and a "clock skew detected on mon.3" warning appeared. But now (more than 1 hour of uptime) the clock is synced, yet the warning is still showing. Is this ok? Or do I have to restart the monitor after clock synchronization? Pavel.

[ceph-users] time out of sync after power failure

2014-09-24 Thread Pavel V. Kaygorodov
Hi! We have experienced some problems with the power supply and our whole ceph cluster was rebooted several times. After a reboot the clocks on the different monitor nodes become slightly desynchronized and ceph won't come up before a time sync. But even after a time sync the ceph cluster also shows that a

[ceph-users] pgs stuck in active+clean+replay state

2014-09-25 Thread Pavel V. Kaygorodov
Hi! 16 pgs in our ceph cluster have been in active+clean+replay state for more than one day. All clients are working fine. Is this ok?

root@bastet-mon1:/# ceph -w
    cluster fffeafa2-a664-48a7-979a-517e3ffa0da1
     health HEALTH_OK
     monmap e3: 3 mons at {1=10.92.8.80:6789/0,2=10.92.8.81:6789/0,3=10.

Re: [ceph-users] pgs stuck in active+clean+replay state

2014-09-25 Thread Pavel V. Kaygorodov
ose pools), but it's not going to hurt anything as long as > you aren't using them. Thanks a lot, restarting the osds helped! BTW, I tried to delete the data and metadata pools just after setup, but ceph refused to let me do this. With best regards, Pavel. > On Thu, Sep 25, 2014

[ceph-users] Federated gateways (our planning use case)

2014-10-06 Thread Pavel V. Kaygorodov
Hi! Our institute is now planning to deploy a set of robotic telescopes across the country. Most of the telescopes will have low bandwidth and high latency, or even no permanent internet connectivity. I think we can set up synchronization of observational data with ceph, using federated gateways:

Re: [ceph-users] Advantages of using Ceph with LXC

2014-11-24 Thread Pavel V. Kaygorodov
Hi! > What are a few advantages of using Ceph with LXC ? I'm using ceph daemons packed in docker containers (http://docker.io). The main advantages are security and reliability: the software doesn't interact outside its container, all daemons have different IP addresses, different filesystems, etc. A

[ceph-users] osd down

2014-02-16 Thread Pavel V. Kaygorodov
Hi, All! I am trying to set up ceph from scratch, without a dedicated drive, with one mon and one osd. After all that, I see the following output of ceph osd tree:

# id    weight  type name       up/down reweight
-1      1       root default
-2      1       host host1
0       1

Re: [ceph-users] osd down

2014-02-16 Thread Pavel V. Kaygorodov
> Finally, both of your OSDs should be IN and UP, so that your cluster can > store data. > > Regards > Karan > > > On 16 Feb 2014, at 20:06, Pavel V. Kaygorodov wrote: >> Hi, All! >> >> I am trying to setup ceph from scratch, without dedicat
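
For a single-host test cluster like this, the defaults also work against you: the stock CRUSH rule wants replicas on different hosts and the default pool size needs more OSDs than exist here. A hedged sketch of ceph.conf settings often used for such throwaway setups:

    [global]
        osd pool default size = 1
        osd pool default min size = 1
        osd crush chooseleaf type = 0   ; pick OSDs, not hosts

These are for experiments only; they remove the redundancy a real cluster is supposed to have.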

[ceph-users] ceph-mon segmentation fault

2014-02-18 Thread Pavel V. Kaygorodov
Hi! Playing with ceph, I found a bug. I have compiled and installed ceph from sources on debian/jessie:

git clone --recursive -b v0.75 https://github.com/ceph/ceph.git
cd ceph/ && ./autogen.sh && ./configure && make && make install
/usr/local/bin/ceph-authtool --create-keyring /data/ceph.mon.ke

[ceph-users] smart replication

2014-02-19 Thread Pavel V. Kaygorodov
Hi! I have two sorts of storage hosts: a small number of reliable hosts with several big drives each (the reliable zone of the cluster), and a much larger set of less reliable hosts, some with big drives, some with relatively small ones (the non-reliable zone of the cluster). Non-reliable hosts ar
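
The snippet is cut off, but the usual way to express this kind of placement is to give each zone its own CRUSH bucket and write a rule that takes one copy from the reliable zone and the remaining copies from the cheap zone. A hedged sketch (bucket names "reliable" and "cheap" are hypothetical):

    rule reliable_first {
        ruleset 1
        type replicated
        min_size 2
        max_size 4
        step take reliable
        step chooseleaf firstn 1 type host
        step emit
        step take cheap
        step chooseleaf firstn -1 type host
        step emit
    }

With pool size 3 this places one replica in the reliable zone and two in the cheap zone (firstn -1 means "all remaining replicas").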

[ceph-users] monitor data

2014-02-20 Thread Pavel V. Kaygorodov
Hi! Maybe it is a dumb question, but anyway: if I lose all the monitors (the mon data dirs), is it possible to recover the cluster from the OSD data only? Pavel.

Re: [ceph-users] ceph-mon segmentation fault

2014-02-20 Thread Pavel V. Kaygorodov
taking this UUID into account, so it cannot connect to the monitor after all. Removing the uuid parameter from "ceph osd create" fixes the problem. If this is not a bug, maybe it would be better to document this behavior. With best regards, Pavel. >> Pavel. >> >>

[ceph-users] ceph osd create with uuid & ceph-osd --mkfs

2014-02-22 Thread Pavel V. Kaygorodov
Hi! I have found strange behavior of ceph-osd which, in my opinion, must be documented: while creating the osd fs (with ceph-osd --mkfs), ceph-osd looks for the UUID in ceph.conf only; if there is no "osd uuid = ..." line, it does not ask the monitor for the uuid and just generates a random one. If one has pre
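
To keep the two in agreement, the uuid has to be supplied on both sides, either via "osd uuid" in ceph.conf as noted above or on the ceph-osd command line. A hedged sketch of the Firefly-era manual flow (paths and IDs are placeholders):

    UUID=$(uuidgen)
    OSD_ID=$(ceph osd create $UUID)
    ceph-osd -i $OSD_ID --mkfs --mkkey --osd-uuid $UUID \
        --osd-data /data/osd.$OSD_ID --osd-journal /data/osd.$OSD_ID/journal

This way the fsid stamped into the OSD data directory matches the uuid the monitor already associates with that OSD id.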

[ceph-users] questions about monitor data and ceph recovery

2014-02-24 Thread Pavel V. Kaygorodov
Hi! My first question is about the monitor data directory. How much space do I need to reserve for it? Can the monitor store be corrupted if the monitor runs out of storage space? I also have questions about the ceph auto-recovery process. For example, I have two nodes with 8 drives each, each drive is pres

Re: [ceph-users] Upgrading ceph

2014-02-25 Thread Pavel V. Kaygorodov
25, 2014 at 2:40 PM, Pavel V. Kaygorodov wrote: > Hi! > > Is it possible to have monitors and osd daemons running different versions of > ceph in one cluster? > > Pavel. > > > > > On 25 Feb 2014, at 10:56, Srinivasa Rao Ragolu > wrote: > >

Re: [ceph-users] questions about monitor data and ceph recovery

2014-02-25 Thread Pavel V. Kaygorodov
Hi! > 2. One node (with 8 osds) goes offline. Will ceph automatically replicate all > objects on the remaining node to maintain number of replicas = 2? > No, because it can no longer satisfy your CRUSH rules. Your crush rule states > 1x copy per node and it will keep it that way. The cluster wil

Re: [ceph-users] Encryption/Multi-tennancy

2014-03-10 Thread Pavel V. Kaygorodov
Hi! I think it is impossible to hide crypto keys from an admin who has access to the host machine where the VM guest is running. The admin can always make a snapshot of the running VM and extract all the keys from memory. Maybe you can achieve a sufficient level of security by providing a dedicated real server holding cr

[ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Pavel V. Kaygorodov
Hi! I have two nodes with 8 OSDs each. The first node runs 2 monitors on different virtual machines (mon.1 and mon.2), the second node runs mon.3. After several reboots (I have tested power failure scenarios) "ceph -w" on node 2 always fails with the message:

root@bes-mon3:~# ceph --verbose -w
Error

Re: [ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Pavel V. Kaygorodov
> You have file config sync? > ceph.conf is the same on all servers, and the keys do not differ either. I have checked the problem now and ceph -w is working fine on all hosts. Mysterious :-/ Pavel. > On 22 March 2014 at 16:11, "Pavel V. Kaygorodov" > wrote: > Hi! >

Re: [ceph-users] Error initializing cluster client: Error

2014-03-29 Thread Pavel V. Kaygorodov
Hi! Now I have the same situation on all monitors without any reboot:

root@bes-mon3:~# ceph --verbose -w
Error initializing cluster client: Error
root@bes-mon3:~# ceph --admin-daemon /var/run/ceph/ceph-mon.3.asok mon_status
{ "name": "3", "rank": 2, "state": "peon", "election_epoch": 86,

[ceph-users] RBD kernel module / Centos 6.5

2014-03-29 Thread Pavel V. Kaygorodov
Hi! I have followed the instructions on http://ceph.com/docs/master/start/quick-rbd/ ; "ceph-deploy install localhost" finished without errors, but modprobe rbd returns "FATAL: Module rbd not found.". How do I install the module?

[root@taurus ~]# lsb_release -a
LSB Version:    :base-4.0-amd64:
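
The stock CentOS 6.5 kernel does not ship rbd.ko, so there is nothing for modprobe to find; ceph-deploy only installs the userspace packages. Hedged options at the time were to run a newer mainline kernel (for example via the ELRepo kernel-ml packages, sketched below, with the release-package version as a placeholder) or to skip the kernel client and use librbd/qemu instead:

    rpm -Uvh http://www.elrepo.org/elrepo-release-6-6.el6.elrepo.noarch.rpm
    yum --enablerepo=elrepo-kernel install kernel-ml
    reboot    # then: modprobe rbd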

Re: [ceph-users] RBD kernel module / Centos 6.5

2014-03-29 Thread Pavel V. Kaygorodov
> HTH, > Arne > > On Mar 29, 2014, at 10:36 AM, "Pavel V. Kaygorodov" > wrote: > >> Hi! >> >> I have followed the instructions on >> http://ceph.com/docs/master/start/quick-rbd/ , "ceph-deploy install >> localhost" fin

[ceph-users] ceph cluster health monitoring

2014-04-11 Thread Pavel V. Kaygorodov
Hi! I want to receive email notifications for any ceph errors/warnings and for osd/mon disk full/near_full states. For example, I want to know immediately if the free space on any osd/mon drops below 10%. How do I properly monitor ceph cluster health? Pavel.
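
Dedicated tools (nagios/zabbix plugins and the like) exist, but even a small cron job covers the basic alerting described here. A hedged sketch, assuming a working local MTA and an admin keyring on the host:

    #!/bin/sh
    # email on anything other than HEALTH_OK; run from cron every few minutes
    STATUS=$(ceph health 2>&1)
    case "$STATUS" in
        HEALTH_OK*) : ;;
        *) echo "$STATUS" | mail -s "ceph health on $(hostname): $STATUS" admin@example.org ;;
    esac

Full/near-full conditions appear in ceph health output as warnings once the mon_osd_nearfull_ratio threshold is crossed, so they are covered by the same check.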

[ceph-users] RBD as a hot spare

2014-04-17 Thread Pavel V. Kaygorodov
Hi! What do you think, is it a good idea to add an RBD block device as a hot spare drive to a linux software raid? Pavel.

Re: [ceph-users] RBD as a hot spare

2014-04-17 Thread Pavel V. Kaygorodov
On 17 Apr 2014, at 16:41, Wido den Hollander wrote: > On 04/17/2014 02:37 PM, Pavel V. Kaygorodov wrote: >> Hi! >> >> How do you think, is it a good idea, to add RBD block device as a hot spare >> drive to a linux software raid? >> > > Well, it

[ceph-users] RBD on Mac OS X

2014-05-06 Thread Pavel V. Kaygorodov
Hi! I want to use ceph for Time Machine backups on Mac OS X. Is it possible to map RBD or mount CephFS on the mac directly, for example using osxfuse? Or is the only way to do this to set up an intermediate linux server? Pavel.

Re: [ceph-users] Replace journals disk

2014-05-06 Thread Pavel V. Kaygorodov
Hi! I'm not a specialist, but I think it would be better to move the journals to another place first (stopping each OSD, moving its journal file to an HDD, and starting it again), then replace the SSD and move the journals to the new drive, again one-by-one. The "noout" flag can help. Pavel. On 6 May 2014, at 14:34
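
Per OSD, that procedure looks roughly like the sketch below (OSD id and paths are placeholders; flush-journal/mkjournal are the standard ceph-osd options for this):

    ceph osd set noout               # keep CRUSH from rebalancing meanwhile
    service ceph stop osd.0
    ceph-osd -i 0 --flush-journal    # write out everything still in the journal
    # point 'osd journal' (or the journal symlink) at the new location
    ceph-osd -i 0 --mkjournal        # create the journal at the new location
    service ceph start osd.0
    ceph osd unset noout             # after the last OSD is done

Repeating this one OSD at a time keeps the data available throughout.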

Re: [ceph-users] Advanced CRUSH map rules

2014-05-14 Thread Pavel V. Kaygorodov
Hi! > CRUSH can do this. You'd have two choose ...emit sequences; > the first of which would descend down to a host and then choose n-1 > devices within the host; the second would descend once. I think > something like this should work: > > step take default > step choose firstn 1 datacenter > st
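
Completing the quoted fragment into a full rule under the stated assumptions (a "datacenter" level exists in the CRUSH hierarchy; this is a sketch of the idea, not a tested map):

    rule local_heavy {
        ruleset 2
        type replicated
        min_size 2
        max_size 10
        step take default
        step choose firstn 1 type datacenter
        step chooseleaf firstn -1 type host
        step emit
        step take default
        step chooseleaf firstn 1 type host
        step emit
    }

The first take/emit places n-1 replicas on distinct hosts inside a single datacenter; the second adds one more replica chosen from the whole tree (which, as a known caveat of this pattern, may land in the same datacenter).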