Hi!
Maybe I have missed something in the docs, but is there a way to switch a pool
from replicated to erasure-coded?
Or do I have to create a new pool and somehow manually transfer the data from the
old pool to the new one?
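For reference, there is no in-place conversion; the usual route is to create a new
erasure-coded pool and copy the data into it from the client side. A rough sketch, with
placeholder pool names and PG counts (note that rados cppool cannot copy omap data, so
it does not work for RBD and similar workloads):

  ceph osd pool create newpool 128 128 erasure     # uses the default erasure-code profile
  rados cppool oldpool newpool                     # client-side object copy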
Pavel.
Hi!
I want to make an erasure-coded pool with k=3 and m=3. I also want to distribute
the data between two hosts, taking 3 OSDs from host1 and 3 from host2.
I have created a ruleset:
rule ruleset_3_3 {
ruleset 0
type replicated
min_size 6
max_size 6
step take host
This ruleset works well for replicated pools with size 6 (I have tested it on
data and metadata pools, which I cannot delete).
Must an erasure pool with k=3 and m=3 have size 6?
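For reference, the size of an erasure-coded pool is k+m, so 6 here. A sketch of creating
such a pool through an erasure-code profile (profile, pool name and PG counts are
placeholders):

  ceph osd erasure-code-profile set profile_3_3 k=3 m=3 ruleset-failure-domain=osd
  ceph osd pool create ecpool 128 128 erasure profile_3_3
  # if needed, the pool can afterwards be pointed at a custom rule:
  #   ceph osd pool set ecpool crush_ruleset <rule id>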
Pavel.
> On 19/06/2014 18:17, Pavel V. Kaygorodov wrote:
>> Hi!
>>
>> I want to make erasur
> You need:
>
> type erasure
>
It works!
Thanks a lot!
Pavel.
rule ruleset_3_3 {
        ruleset 0
        type erasure
        min_size 6
        max_size 6
        step take host1
        step chooseleaf firstn 3 type osd
        step emit
        step take host2
        step chooseleaf firstn 3 type osd
        step emit
}
Hi!
I'm getting a strange error when trying to create an RBD image:
# rbd -p images create --size 10 test
rbd: create error: (95) Operation not supported
2014-06-20 18:28:39.537889 7f32af795780 -1 librbd: error adding image to
directory: (95) Operation not supported
The "images" pool is erasure-coded.
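The error is expected: RBD keeps its image directory and header objects in omap
key/value data, which erasure-coded pools do not support. The usual workaround at the
time was to put a replicated cache tier in front of the EC pool and let clients write
through it; a rough sketch with placeholder names and PG counts:

  ceph osd pool create images-cache 128
  ceph osd tier add images images-cache
  ceph osd tier cache-mode images-cache writeback
  ceph osd tier set-overlay images images-cache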
Hi!
I still have the same problem with "Error initializing cluster client: Error"
on all monitor nodes:
root@bastet-mon2:~# ceph -w
Error initializing cluster client: Error
root@bastet-mon2:~# ceph --admin-daemon /var/run/ceph/ceph-mon.2.asok
mon_status
{ "name": "2",
"rank": 1,
"state":
0" and see if
> it outputs more useful error logs.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Sat, Jul 5, 2014 at 2:23 AM, Pavel V. Kaygorodov wrote:
>> Hi!
>>
>> I still have the same problem with "Error i
Hi!
I'm trying to install ceph on Debian wheezy (from deb http://ceph.com/debian/
wheezy main) and am getting the following error:
# apt-get update && apt-get dist-upgrade -y && apt-get install -y ceph
...
The following packages have unmet dependencies:
ceph : Depends: ceph-common (>= 0.78-500) but
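A dependency error like this usually means apt is mixing the ceph.com packages with
older ones from another source; checking where each package would come from helps
narrow it down, for example:

  apt-cache policy ceph ceph-common
  # both should list the ceph.com repository as the candidate; if ceph-common resolves
  # to an older source, that source has to be removed or pinned below the ceph.com repo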
Hi!
We have experienced several blackouts on our small ceph cluster.
The most annoying problem is clock desync just after a blackout: the mons do not
start working until time is synced, and after a resync and a manual restart of the
monitors, some PGs can get stuck in "inactive" or "peering" state for a significant p
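Running ntpd or chrony on every mon host is the real fix; if a small residual skew
right after boot is acceptable, the warning threshold can also be raised in ceph.conf
(a sketch; the default is 0.05 s):

  [mon]
      mon clock drift allowed = 0.5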
Hi!
I'm running a ceph cluster packed into a number of docker containers.
There are two things you need to know:
1. Ceph OSDs use FS attributes (xattrs), which may not be supported by the
filesystem inside a docker container, so you need to mount an external directory
inside the container to store the OSD data
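A rough sketch of what that looks like; the image name and host paths are placeholders:

  # the OSD data directory lives on a host filesystem with xattr support,
  # not on the container's storage driver
  docker run -d --name ceph-osd-0 \
      -v /srv/ceph/osd.0:/var/lib/ceph/osd/ceph-0 \
      -v /etc/ceph:/etc/ceph \
      my-ceph-osd-image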
Hi!
I have updated my cluster to Hammer and got a warning "too many PGs per OSD
(2240 > max 300)".
I know that there is no way to decrease the number of placement groups, so I want
to re-create my pools with a lower PG count, move all my data to them, delete the
old pools and rename the new pools to the old names
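A sketch of one way to do the swap, with placeholder names and PG counts; clients
should be stopped while the copy runs, and rados cppool does not carry over snapshots:

  ceph osd pool create images-new 256 256
  rados cppool images images-new
  ceph osd pool delete images images --yes-i-really-really-mean-it
  ceph osd pool rename images-new images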
Hi!
I have copied two of my pools recently, because the old ones had too many PGs.
Both of them contain RBD images, with 1GB and ~30GB of data.
Both pools were copied without errors; the RBD images are mountable and seem to be
fine.
CEPH version is 0.94.1
Pavel.
> On Apr 7, 2015, at 18:29, Kapil Shar
Hi!
I have an RBD image (in pool "volumes"), made by openstack from a parent image
(in pool "images").
Recently, I tried to decrease the number of PGs, to avoid the new Hammer warning.
I copied pool "images" to another pool, deleted the original pool and renamed the
new pool to "images". Ceph allowed m
to get the data out of them.
>
> Br,
> Tuomas
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Pavel V. Kaygorodov
> Sent: May 12, 2015 20:41
> To: ceph-users
> Subject: [ceph-users] RBD images -- parent s
on how to install development packages [1].
>
> [1]
> http://docs.ceph.com/docs/master/install/get-packages/#add-ceph-development
>
> --
>
> Jason Dillaman
> Red Hat
> dilla...@redhat.com
> http://www.redhat.com
>
>
> - Original Message -
images.
>
> Thanks
>
> Tuomas
>
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Pavel V. Kaygorodov
> Sent: May 13, 2015 18:24
> To: Jason Dillaman
> Cc: ceph-users
> Subject: Re: [ceph-users] RBD images
Hi!
Immediately after a reboot of the mon.3 host its clock was unsynchronized and a
"clock skew detected on mon.3" warning appeared.
But now (more than 1 hour of uptime) the clock is synced, yet the warning is still
showing.
Is this ok?
Or do I have to restart the monitor after clock synchronization?
Pavel.
Hi!
We have experienced some problems with the power supply and our whole ceph cluster
was rebooted several times.
After a reboot the clocks on the different monitor nodes become slightly
desynchronized and ceph won't go up before time sync.
But even after a time sync the ceph cluster also shows that a
Hi!
16 pgs in our ceph cluster have been in active+clean+replay state for more than one day.
All clients are working fine.
Is this ok?
root@bastet-mon1:/# ceph -w
cluster fffeafa2-a664-48a7-979a-517e3ffa0da1
health HEALTH_OK
monmap e3: 3 mons at
{1=10.92.8.80:6789/0,2=10.92.8.81:6789/0,3=10.
ose pools), but it's not going to hurt anything as long as
> you aren't using them.
Thanks a lot, restarting the OSDs helped!
BTW, I tried to delete the data and metadata pools just after setup, but ceph
refused to let me do it.
With best regards,
Pavel.
> On Thu, Sep 25, 2014
Hi!
Our institute is now planning to deploy a set of robotic telescopes across the
country.
Most of the telescopes will have low bandwidth and high latency, or even no
permanent internet connectivity.
I think we can set up synchronization of the observational data with ceph, using
federated gateways:
Hi!
> What are a few advantages of using Ceph with LXC ?
I'm using ceph daemons packed into docker containers (http://docker.io).
The main advantages are security and reliability: the daemons don't interact
with each other, and each daemon has its own IP address, its own filesystem, etc.
A
Hi, All!
I am trying to set up ceph from scratch, without a dedicated drive, with one mon
and one osd.
After all that, I see the following output from ceph osd tree:
# id    weight  type name       up/down reweight
-1      1       root default
-2      1       host host1
0       1
>
> Finally, both your OSDs should be IN and UP, so that your cluster can
> store data.
>
> Regards
> Karan
>
>
> On 16 Feb 2014, at 20:06, Pavel V. Kaygorodov wrote:
>
>> Hi, All!
>>
>> I am trying to setup ceph from scratch, without dedicat
Hi!
Playing with ceph, I found a bug:
I have compiled and installed ceph from sources on debian/jessie:
git clone --recursive -b v0.75 https://github.com/ceph/ceph.git
cd ceph/ && ./autogen.sh && ./configure && make && make install
/usr/local/bin/ceph-authtool --create-keyring /data/ceph.mon.ke
Hi!
I have two sorts of storage hosts: a small number of reliable hosts, each with a
number of big drives (the reliable zone of the cluster), and a much larger set of
less reliable hosts, some with big drives, some with relatively small ones (the
non-reliable zone of the cluster). The non-reliable hosts ar
Hi!
Maybe it is a dumb question, but anyway:
if I lose all monitors (mon data dirs), is it possible to recover the cluster with
the data from the OSDs only?
Pavel.
taking this UUID into account, so it cannot connect to the monitor after all.
Removing the uuid parameter from "ceph osd create" fixes the problem.
If this is not a bug, maybe it would be better to document this behavior.
With best regards,
Pavel.
>> Pavel.
>>
>>
Hi!
I have found strange behavior of ceph-osd which, in my opinion, should be
documented:
while creating the osd fs (with ceph-osd --mkfs), ceph-osd looks for the UUID in
ceph.conf only; if there is no "osd uuid = ..." line, it does not ask the monitor
for a uuid and just generates a random one.
If one has pre
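A sketch of keeping the two sides in agreement by generating the UUID up front; osd.0
is a placeholder:

  UUID=$(uuidgen)
  ceph osd create $UUID            # registers the OSD id under this uuid
  # and in ceph.conf for that OSD, before running --mkfs:
  #   [osd.0]
  #   osd uuid = <the same uuid>
  ceph-osd -i 0 --mkfs --mkkey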
Hi!
My first question is about the monitor data directory. How much space do I need
to reserve for it? Can the monitor fs be corrupted if the monitor runs out of
storage space?
I also have questions about the ceph auto-recovery process.
For example, I have two nodes with 8 drives each; each drive is pres
25, 2014 at 2:40 PM, Pavel V. Kaygorodov wrote:
> Hi!
>
> Is it possible to have monitors and osd daemons running different versions of
> ceph in one cluster?
>
> Pavel.
>
>
>
>
> On Feb 25, 2014, at 10:56, Srinivasa Rao Ragolu
> wrote:
>
>
Hi!
> 2. One node (with 8 osds) goes offline. Will ceph automatically replicate all
> objects on the remaining node to maintain number of replicas = 2?
> No, because it can no longer satisfy your CRUSH rules. Your crush rule states
> 1x copy pr. node and it will keep it that way. The cluster wil
Hi!
I think it is impossible to hide crypto keys from an admin who has access to the
host machine where the VM guest is running. The admin can always make a snapshot
of the running VM and extract all keys right from memory. Maybe you can achieve a
sufficient level of security by providing a dedicated real server holding cr
Hi!
I have two nodes with 8 OSDs each. The first node runs 2 monitors on different
virtual machines (mon.1 and mon.2), the second node runs mon.3.
After several reboots (I have tested power failure scenarios) "ceph -w" on node 2
always fails with the message:
root@bes-mon3:~# ceph --verbose -w
Error
> Do you have config file sync?
>
ceph.conf is the same on all servers, and the keys do not differ either.
I have checked the problem now and ceph -w is working fine on all hosts.
Mysterious :-/
Pavel.
> On Mar 22, 2014, at 16:11, "Pavel V. Kaygorodov"
> wrote:
> Hi!
>
Hi!
Now I have the same situation on all monitors without any reboot:
root@bes-mon3:~# ceph --verbose -w
Error initializing cluster client: Error
root@bes-mon3:~# ceph --admin-daemon /var/run/ceph/ceph-mon.3.asok mon_status
{ "name": "3",
"rank": 2,
"state": "peon",
"election_epoch": 86,
Hi!
I have followed the instructions on
http://ceph.com/docs/master/start/quick-rbd/ , "ceph-deploy install localhost"
finished without errors, but modprobe rbd returns "FATAL: Module rbd not
found.".
How do I install the module?
[root@taurus ~]# lsb_release -a
LSB Version:    :base-4.0-amd64:
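The rbd kernel module ships with the kernel itself, not with the Ceph userspace
packages, so whether modprobe succeeds depends on the running kernel; a quick check:

  uname -r
  modinfo rbd   # if this fails, the running kernel was built without the rbd module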
> HTH,
> Arne
>
> On Mar 29, 2014, at 10:36 AM, "Pavel V. Kaygorodov"
> wrote:
>
>> Hi!
>>
>> I have followed the instructions on
>> http://ceph.com/docs/master/start/quick-rbd/ , "ceph-deploy install
>> localhost" fin
Hi!
I want to receive email notifications for any ceph errors/warnings and for
osd/mon disk full/near_full states. For example, I want to know immediately if the
free space on any osd/mon drops below 10%.
What is the proper way to monitor ceph cluster health?
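A minimal sketch of one approach: poll ceph health from cron and mail anything that is
not HEALTH_OK (the address is a placeholder; dedicated nagios/zabbix plugins exist for
this too):

  #!/bin/sh
  # mail the health string whenever the cluster is not HEALTH_OK
  STATUS=$(ceph health 2>&1)
  case "$STATUS" in
      HEALTH_OK*) ;;
      *) echo "$STATUS" | mail -s "ceph health warning" admin@example.com ;;
  esac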
Pavel.
Hi!
What do you think: is it a good idea to add an RBD block device as a hot-spare
drive to a linux software raid?
Pavel.
On Apr 17, 2014, at 16:41, Wido den Hollander wrote:
> On 04/17/2014 02:37 PM, Pavel V. Kaygorodov wrote:
>> Hi!
>>
>> How do you think, is it a good idea, to add RBD block device as a hot spare
>> drive to a linux software raid?
>>
>
> Well, it
Hi!
I want to use ceph for time machine backups on Mac OS X.
Is it possible to map RBD or mount CephFS on mac directly, for example, using
osxfuse?
Or is the only way to do this to set up an intermediate linux server?
Pavel.
Hi!
I'm not a specialist, but I think it would be better to move the journals
somewhere else first (stopping each OSD, moving its journal file to an HDD, and
starting it again), then replace the SSD and move the journals to the new drive,
again one by one. The "noout" flag can help.
Pavel.
On May 6, 2014, at 14:34
Hi!
> CRUSH can do this. You'd have two choose ...emit sequences;
> the first of which would descend down to a host and then choose n-1
> devices within the host; the second would descend once. I think
> something like this should work:
>
> step take default
> step choose firstn 1 datacenter
> st
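A sketch of how the complete rule might look, continuing that idea with a host-level
split; the rule name, numbers and bucket types are assumptions (the quoted snippet adds
a datacenter step above the host step in the first sequence):

  rule one_host_plus_one {
          ruleset 1
          type replicated
          min_size 2
          max_size 10
          # first sequence: pick one host, then n-1 OSDs inside that host
          step take default
          step choose firstn 1 type host
          step choose firstn -1 type osd
          step emit
          # second sequence: descend once more for the remaining replica
          step take default
          step chooseleaf firstn 1 type host
          step emit
  }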