Re: [ceph-users] pgs stuck unclean in a pool without name

2014-04-18 Thread Ирек Фасихов
These pools are created automatically when the S3 gateway (ceph-radosgw) starts. Your configuration file sets the default number of PGs to 333, which is too many for your configuration. 2014-04-18 15:28 GMT+04:00 Cedric Lemarchand : > Hi, > > Le 18/04/2014 13:14, Ирек Ф
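A minimal sketch of how that default could be lowered before radosgw creates its pools, assuming the stock ceph.conf keys; the value 32 is purely illustrative:

    # /etc/ceph/ceph.conf, [global] section (illustrative value, tune per cluster)
    osd pool default pg num  = 32
    osd pool default pgp num = 32

Pools that already exist keep their PG count; ceph osd dump | grep pg_num shows the current values.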

Re: [ceph-users] pgs stuck unclean in a pool without name

2014-04-18 Thread Ирек Фасихов
Please show the output of: ceph osd tree. 2014-04-18 14:51 GMT+04:00 Cedric Lemarchand : > Hi, > > I am facing a strange behaviour where a pool is stucked, I have no idea > how this pool appear in the cluster in the way I have not played with pool > creation, *yet*. > > # root@node1:~# ceph -s >

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Is there any data under these paths? ls -lsa /var/lib/ceph/osd/ceph-82/current/14.7c8_*/ ls -lsa /var/lib/ceph/osd/ceph-26/current/14.7c8_*/ 2014-04-18 14:36 GMT+04:00 Ta Ba Tuan : > Hi Ирек Фасихов > > I send it to you :D, > Thank you! > > { "state": "incompl

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
> .. > > > > On 04/18/2014 03:42 PM, Ирек Фасихов wrote: > > You OSD restarts all disks on which is your unfinished pgs? (22,23,82) > > > > 2014-04-18 12:35 GMT+04:00 Ta Ba Tuan : > >> Thank Ирек Фасихов for my reply. >> I restarted osds that conta

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Did you restart the OSDs on all the disks that hold your incomplete PGs? (22, 23, 82) 2014-04-18 12:35 GMT+04:00 Ta Ba Tuan : > Thank Ирек Фасихов for my reply. > I restarted osds that contains incomplete pgs, but still false :( > > > > On 04/18/2014 03:16 PM, Ирек Фасихов wrote: > &

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Ceph detects that a placement group is missing a necessary period of history from its log. If you see this state, report a bug, and try to start any failed OSDs that may contain the needed information. 2014-04-18 12:15 GMT+04:00 Ирек Фасихов : > Oh, sorry, confused with inconsist
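A rough way to see which OSDs an incomplete PG is still waiting for, using the PG id already mentioned in this thread:

    ceph pg dump_stuck inactive   # list PGs stuck incomplete/peering
    ceph pg 14.7c8 query          # recovery_state shows probing/blocked-by OSDs holding the needed history
    ceph osd tree                 # confirm those OSDs are up and in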

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
Oh, sorry, I confused it with the inconsistent state. :) 2014-04-18 12:13 GMT+04:00 Ирек Фасихов : > You need to repair pg. This is the first sign that your hard drive was > fail under. > ceph pg repair *14.a5a * > ceph pg repair *14.aa8* > > > 2014-04-18 12:09 GMT+04:00 Ta Ba Tuan

Re: [ceph-users] pg incomplete in .rgw.buckets pool

2014-04-18 Thread Ирек Фасихов
You need to repair the PGs. This is often the first sign that a hard drive is failing. ceph pg repair 14.a5a ceph pg repair 14.aa8 2014-04-18 12:09 GMT+04:00 Ta Ba Tuan : > Dear everyone, > > I lost 2 osd(s) and my '.rgw.buckets' pool is using 2 replicate, Therefore > has some incomplete pgs >
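For reference, a minimal repair sequence for inconsistent PGs (the case ceph pg repair is actually meant for, as the follow-up above notes); the PG ids are reused from this message purely for illustration:

    ceph health detail | grep inconsistent   # list inconsistent PGs and the OSDs involved
    ceph pg repair 14.a5a
    ceph pg repair 14.aa8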

Re: [ceph-users] Russian-speaking community CephRussian!

2014-04-16 Thread Ирек Фасихов
Loic, thanks for the link! 2014-04-16 18:46 GMT+04:00 Loic Dachary : > Hi Ирек, > > If you organize meetups, feel free to add yourself to > https://wiki.ceph.com/Community/Meetups :-) > > Cheers > > On 16/04/2014 13:22, Ирек Фасихов wrote: > > Hi,All. > >

[ceph-users] Russian-speaking community CephRussian!

2014-04-16 Thread Ирек Фасихов
Hi, All. I created the Russian-speaking Ceph community, CephRussian, on Google+! Welcome! URL: https://plus.google.com/communities/104570726102090628516 -- Best regards, Фасихов Ирек Нургаязович Mob.: +79229045757 ___

Re: [ceph-users] rbd: add failed: (34) Numerical result out of range ( Please help me)

2014-04-16 Thread Ирек Фасихов
Please show the output of: rbd ls -l. 2014-04-16 13:59 GMT+04:00 Srinivasa Rao Ragolu : > Hi Wido, > > Output of info command is given below > > root@mon:/etc/ceph# > * rbd info samplerbd: error opening image sample: (95) Operation not > supported2014-04-16 09:57:24.575279 7f661c6e5780 -1 librbd: Error
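A few hedged client-side checks that usually narrow this class of failure down; nothing here is specific to the poster's cluster, and the test image name is a placeholder:

    uname -r                  # the kernel rbd client supports fewer features than librbd
    rbd ls -l                 # image list with size and format
    dmesg | tail -n 20        # kernel messages from the failed attempt
    rbd create test --size 1024 --image-format 1   # format 1 was the old-kernel-friendly format at the time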

Re: [ceph-users] Errors while mapping the created image (Numerical result out of range)

2014-04-16 Thread Ирек Фасихов
Please show the output of: dmesg. 2014-04-16 12:18 GMT+04:00 Srinivasa Rao Ragolu : > Hi All, > > I could successfully able to create ceph cluster on our proprietary > distribution with manual ceph commands > > *ceph.conf* > > [global] > fsid = a7f64266-0894-4f1e-a635-d0aeaca0e993 > mon initial members

[ceph-users] CephS3 and s3fs.

2014-04-14 Thread Ирек Фасихов
Hi, All. Does anyone have experience with s3fs + Ceph S3? I get an error when uploading a file: kataklysm@linux-41gj:~> s3fs infas /home/kataklysm/s3/ -o url="http://s3.x-.ru" kataklysm@linux-41gj:~> rsync -av --progress temp/ s3 sending incremental file list rsync: failed to set time

Re: [ceph-users] Dell R515/510 with H710 PERC RAID | JBOD

2014-04-03 Thread Ирек Фасихов
You need to use Dell OpenManage: https://linux.dell.com/repo/hardware/. 2014-04-04 7:26 GMT+04:00 Punit Dambiwal : > Hi, > > I want to use Dell R515/R510 for the OSD node purpose > > 1. 2*SSD for OS purpose (Raid 1) > 2. 10* Segate 3.5' HDDx 3TB for OSD purpose (No RAID...JBOD) > > To crea

Re: [ceph-users] Backport rbd.ko to 2.6.32 Linux Kernel

2014-03-31 Thread Ирек Фасихов
e back porting rbd.ko to 2.6.32 linux kernel. > > Thanks, > Vilobh > > From: Ирек Фасихов > Date: Monday, March 31, 2014 at 10:56 PM > To: Vilobh Meshram > Cc: "ceph-users@lists.ceph.com" > Subject: Re: [ceph-users] Backport rbd.ko to 2.6.32 Linux Kernel > &g

Re: [ceph-users] Backport rbd.ko to 2.6.32 Linux Kernel

2014-03-31 Thread Ирек Фасихов
There is no backport for 2.6.32, and none is planned. 2014-04-01 9:19 GMT+04:00 Vilobh Meshram : > What is the procedure to back port rbd.ko to 2.6.32 Linux Kernel ? > > Thanks, > Vilobh > > ___ > ceph-users mailing list > ceph-users@lists.ceph.c

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-29 Thread Ирек Фасихов
Thanks, Ilya. 2014-03-29 22:06 GMT+04:00 Ilya Dryomov : > On Sat, Mar 29, 2014 at 5:15 PM, Ирек Фасихов wrote: > > Ilya, hi. Maybe you have the required patches for the kernel? > > Hi, > > It turned out there was a problem with userspace. If you grab the > latest cep

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-29 Thread Ирек Фасихов
Hi Ilya. Do you perhaps have the required patches for the kernel? 2014-03-25 14:51 GMT+04:00 Ирек Фасихов : > Yep, so works. > > > 2014-03-25 14:45 GMT+04:00 Ilya Dryomov : > > On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов wrote: >> > Hmmm, create another image in anoth

Re: [ceph-users] TCP failed connection attempts

2014-03-26 Thread Ирек Фасихов
Hi, Daniel. I use the following settings: net.ipv4.tcp_syncookies = 0, net.ipv4.tcp_moderate_rcvbuf = 0, net.ipv4.tcp_low_latency = 1. The "failed connection attempts" counter can be ignored; it is not necessarily a server-side error, it also counts client-side events, for example a client losing its connection to the server. 2014-
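To make those settings persistent across reboots, a minimal sketch (the values are simply the ones quoted above, not a general recommendation):

    cat >> /etc/sysctl.conf <<'EOF'
    net.ipv4.tcp_syncookies = 0
    net.ipv4.tcp_moderate_rcvbuf = 0
    net.ipv4.tcp_low_latency = 1
    EOF
    sysctl -p    # apply immediately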

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
Yep, it works that way. 2014-03-25 14:45 GMT+04:00 Ilya Dryomov : > On Tue, Mar 25, 2014 at 12:00 PM, Ирек Фасихов wrote: > > Hmmm, create another image in another pool. Pool without cache tier. > > > > [root@ceph01 cluster]# rbd create test/image --size 102400 > > [root@ceph

Re: [ceph-users] Ceph 0.78: cache tier+image-format=2 fail. Bug?

2014-03-25 Thread Ирек Фасихов
Thanks, Ilya. 2014-03-25 14:24 GMT+04:00 Ilya Dryomov : > On Tue, Mar 25, 2014 at 10:14 AM, Ирек Фасихов wrote: > > I want to create an image in format 2 through cache tier, but get an > error > > creating. > > > > [root@ceph01 cluster]# rbd create rbd/myimage --

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
'm doing wrong? Thanks. 2014-03-25 13:34 GMT+04:00 Ilya Dryomov : > On Tue, Mar 25, 2014 at 10:59 AM, Ирек Фасихов wrote: > > Ilya, set "chooseleaf_vary_r 0", but no map rbd images. > > > > [root@ceph01 cluster]# rbd map rbd/tst > > 2014-03-25 12:48:14.3

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
I also added a new log in Google Drive. https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing 2014-03-25 12:59 GMT+04:00 Ирек Фасихов : > Ilya, set "chooseleaf_vary_r 0", but no map rbd images. > > [root@ceph01 cluster]# *rbd map rbd/tst* >

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-25 Thread Ирек Фасихов
-03-25 12:26 GMT+04:00 Ilya Dryomov : > On Tue, Mar 25, 2014 at 8:38 AM, Ирек Фасихов wrote: > > Hi, Ilya. > > > > I added the files(crushd and osddump) to a folder in GoogleDrive. > > > > > https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ym

[ceph-users] Ceph 0.78: cache tier+image-format=2 fail. Bug?

2014-03-25 Thread Ирек Фасихов
I want to create a format 2 image through a cache tier, but I get an error on creation. [root@ceph01 cluster]# rbd create rbd/myimage --size 102400 --image-format 2 2014-03-25 12:03:44.835686 7f668e09d760 1 -- :/0 messenger.start 2014-03-25 12:03:44.835994 7f668e09d760 2 auth: KeyRing::load: load

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
Hi, Ilya. I added the files (crushd and osddump) to a folder in Google Drive. https://drive.google.com/folderview?id=0BxoNLVWxzOJWX0NLV1kzQ1l3Ymc&usp=sharing 2014-03-25 0:19 GMT+04:00 Ilya Dryomov : > On Mon, Mar 24, 2014 at 9:46 PM, Ирек Фасихов wrote: > > Kernel module supp

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
orage Consultant > Inktank Professional Services > > > On Mon, Mar 24, 2014 at 2:58 PM, Ирек Фасихов wrote: > >> Hi, Gregory! >> I think that there is no interesting :). >> >> *dmesg:* >> Key type dns_resolver registered >> Key type ceph register

Re: [ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
't remember what features should exist where, but I expect that > the cluster is making use of features that the kernel client doesn't > support yet (despite the very new kernel). Have you checked to see if > there's anything interesting in dmesg? > -Greg > Software Engineer

[ceph-users] Ceph RBD 0.78 Bug or feature?

2014-03-24 Thread Ирек Фасихов
Created a cache pool following the documentation: http://ceph.com/docs/master/dev/cache-pool/
ceph osd pool create cache 100
ceph osd tier add rbd cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay rbd cache
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cach
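The remaining hit-set parameters described on that page would look roughly like this; the values are illustrative, not taken from the truncated original:

    ceph osd pool set cache hit_set_count 1
    ceph osd pool set cache hit_set_period 3600              # seconds
    ceph osd pool set cache target_max_bytes 1099511627776   # ~1 TiB cache cap, illustrative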

Re: [ceph-users] Error initializing cluster client: Error

2014-03-22 Thread Ирек Фасихов
Is your config file in sync across the nodes? On 22 March 2014 at 16:11, "Pavel V. Kaygorodov" wrote: > Hi! > > I have two nodes with 8 OSDs on each. First node running 2 monitors on > different virtual machines (mon.1 and mon.2), second node runing mon.3 > After several reboots (I have tested power fail

Re: [ceph-users] firefly timing

2014-03-18 Thread Ирек Фасихов
I'm ready to test the tiering. 2014-03-18 11:07 GMT+04:00 Stefan Priebe - Profihost AG < s.pri...@profihost.ag>: > Hi Sage, > > i really would like to test the tiering. Is there any detailed > documentation about it and how it works? > > Greets, > Stefan > > Am 18.03.2014 05:45, schrieb Sage Wei

Re: [ceph-users] Replication lag in block storage

2014-03-15 Thread Ирек Фасихов
Which model of hard drives do you have? 2014-03-14 21:59 GMT+04:00 Greg Poirier : > We are stressing these boxes pretty spectacularly at the moment. > > On every box I have one OSD that is pegged for IO almost constantly. > > ceph-1: > Device: rrqm/s wrqm/s r/s w/srkB/swkB/s

Re: [ceph-users] Fluctuating I/O speed degrading over time

2014-03-07 Thread Ирек Фасихов
What model of SSD disks do you have? On 07 March 2014 at 13:50, "Indra Pramana" wrote: > Hi, > > I have a Ceph cluster, currently with 5 osd servers and around 22 OSDs > with SSD drives and I noted that the I/O speed, especially write access to > the cluster is degrading over time. When we first s

Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Ирек Фасихов
s (20 GB) copied, 244.024 s, 82.0 MB/s > > > > Changing these parameters are not affected... > > Are there other ideas on this problem? > > Thank you. > > > > > > 2014-02-07 Konrad Gutkowski : > > Hi, > > > > W dniu 07.02.2014 o 08:14 Ирек Фасихо

Re: [ceph-users] RBD+KVM problems with sequential read

2014-02-07 Thread Ирек Фасихов
d, 244.024 s, 82.0 MB/s Changing these parameters had no effect... Are there any other ideas on this problem? Thank you. 2014-02-07 Konrad Gutkowski : > Hi, > > On 07.02.2014 at 08:14, Ирек Фасихов wrote: > [...] > > Why might sequential reads be so slow? Do ide

[ceph-users] RBD+KVM problems with sequential read

2014-02-06 Thread Ирек Фасихов
Hi All. Hosts: Dell R815 x5, 128 GB RAM, 25 OSD + 5 SSD (journal+system). Network: 2x10Gb+LACP. Kernel: 2.6.32. QEMU emulator version 1.4.2, Copyright (c) 2003-2008 Fabrice Bellard. POOLs: root@kvm05:~# ceph osd dump | grep 'rbd' pool 5 'rbd' rep size 2 min_size 1 crush_ruleset 2 object_hash rjenkins

Re: [ceph-users] Calculating required number of PGs per pool

2014-01-24 Thread Ирек Фасихов
Hi. Please read: http://ceph.com/docs/master/rados/operations/placement-groups/ 2014/1/24 Graeme Lambert > Hi, > > I've got 6 OSDs and I want 3 replicas per object, so following the > function that's 200 PGs per OSD, which is 1,200 overall. > > I've got two RBD pools and the .rgw.buckets pool
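A worked version of the rule of thumb on that page, using the 6 OSDs and 3 replicas mentioned above (the pool name is a placeholder):

    # total PGs ~= (100 * OSDs) / replicas, rounded up to the next power of two
    # (100 * 6) / 3 = 200  ->  256 PGs in total across pools, not per OSD
    ceph osd pool create mypool 256 256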

Re: [ceph-users] Low write speed

2014-01-17 Thread Ирек Фасихов
Hi, Виталий. Do you have a sufficient number of PGs? 2014/1/17 Никитенко Виталий > Good day! Please help me solve the problem. There are the following scheme > : > Server ESXi with 1Gb NICs. it has local store store2Tb and two isci > storage connected to the second server . > The second server supe

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Ирек Фасихов
Kernel patch for Intel S3700, Intel 530...
diff --git a/drivers/scsi/sd.c b/drivers//scsi/sd.c
--- a/drivers/scsi/sd.c 2013-09-14 12:53:21.0 +0400
+++ b/drivers//scsi/sd.c 2013-12-19 21:43:29.0 +0400
@@ -137,6 +137,7 @@
 char *buffer_data;
 struct scsi_mode_

Re: [ceph-users] servers advise (dell r515 or supermicro ....)

2014-01-15 Thread Ирек Фасихов
I use the H700 on Dell R815, 4 nodes. No performance problems. Configuration: 1 Intel 530 SSD for OS and journal; 5 OSD HDDs, 600 GB each, Dell-certified WD/Hitachi/Seagate. Replication size = 2. IOPS ~4k, no VM. On 15 Jan 2014 at 15:47, "Alexandre DERUMIER" wrote: > > Hello List, > > I'm going to bu

Re: [ceph-users] ceph start error

2014-01-11 Thread Ирек Фасихов
the directory /etc/ceph/. How can I > resolve it? > > > > *From:* Ирек Фасихов [mailto:malm...@gmail.com] > *Sent:* Saturday, January 11, 2014 10:24 PM > *To:* You, Rong > *Cc:* ceph-users@lists.ceph.com > *Subject:* Re: [ceph-users] ceph start error > > > > glo

Re: [ceph-users] ceph start error

2014-01-11 Thread Ирек Фасихов
global_init: unable to open config file from search list /etc/ceph/ceph.conf 2014/1/11 You, Rong > Hi, > > I encounter a problem when startup the ceph cluster. > > When run the command: service ceph -a start, > > The process always hang up. The error result is: > > > > [root
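A minimal sketch for getting the config onto a node that cannot find it; hostnames are placeholders and the ceph-deploy line assumes that tool is in use:

    ls -l /etc/ceph/ceph.conf                        # confirm the file is actually missing on this node
    scp admin-node:/etc/ceph/ceph.conf /etc/ceph/    # copy it from a node that has a working copy
    # or, from the ceph-deploy admin node:
    ceph-deploy --overwrite-conf config push node1 node2 node3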

Re: [ceph-users] libvirt qemu/kvm/rbd inside VM read slow

2014-01-10 Thread Ирек Фасихов
; < > steffen.thorha...@iti.cs.uni-magdeburg.de> wrote the following: > On 01/10/2014 01:21 PM, Ирек Фасихов wrote: > > You need to use VirtIO. > > > with this parameter there is not a real performance increase: dd if=/dev/zero of=zerofile-2 bs=1G

Re: [ceph-users] libvirt qemu/kvm/rbd inside VM read slow

2014-01-10 Thread Ирек Фасихов
You need to use VirtIO. 2014/1/10 > Hi, > I'm using a 1GBit network. 16 osd on 8 hosts with xfs and journal on ssd. > I have a read performance problem in a libvirt kvm/qemu/rbd VM > on a ceph client host. All involved hosts are ubuntu 13.10. Ceph is 72.2. > The only VM disk is a rbd vol

Re: [ceph-users] 2014-01-02 05:46:13.398699 7f6658278700 0 -- :/1001908 >> 192.168.1.130:6789/0 pipe(0x7f6648005490 sd=3 :0 s=1 pgs=0 cs=0 l=1 c=0x7f66480025a0).fault

2014-01-02 Thread Ирек Фасихов
You need to replace # with ;. On 02 Jan 2014 at 19:58, "xyx" wrote: > *Hello, My Ceph Teacher:* > I just finished my ceph configuration: > Configured as follows: > [global] > auth cluster required = none > auth service required = none > au
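If the point is the comment character in ceph.conf, a hedged illustration of the suggestion (my reading of the advice, not the poster's actual file):

    [global]
    ; comments and disabled lines written with ';' rather than '#'
    ; auth cluster required = none
    auth service required = none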

Re: [ceph-users] rbd: add failed: (1) Operation not permitted

2013-12-27 Thread Ирек Фасихов
sudo rbd map ceph-pool/RBDTest -n client.admin -k /home/ceph/ceph-cluster-prd/ceph.client.admin.keyring 2013/12/27 German Anders > Hi Cephers, > > I had a basic question, I've already setup up a Ceph cluster with 55 > OSD's daemons running and 3 MON with a total of 7TB raw data, and a

Re: [ceph-users] HEALTH_WARN too few pgs per osd (3 < min 20)

2013-12-27 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/operations/placement-groups/ 2013/12/27 German Anders > Hi to All, > >I've the following warning message (WARN) in my cluster: > > ceph@ceph-node04:~$ sudo ceph status > cluster 50ae3778-dfe3-4492-9628-54a8918ede92 >* health HEALTH_WARN too few pg
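A minimal way to clear that warning by raising the PG count on an existing pool; the pool name and the value 128 are illustrative, not taken from the message:

    ceph osd pool set rbd pg_num 128
    ceph osd pool set rbd pgp_num 128    # pgp_num must follow pg_num before data rebalances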

Re: [ceph-users] Ceph RAM Requirement?

2013-12-21 Thread Ирек Фасихов
Recommended: roughly 1 GB of RAM per OSD disk. On 21 Dec 2013 at 17:54, "hemant burman" wrote: > > Can someone please help out here? > > > On Sat, Dec 21, 2013 at 9:47 AM, hemant burman wrote: > >> Hello, >> >> We have boxes with 24 Drives, 2TB each and want to run one OSD per drive. >> Wha

Re: [ceph-users] problem with delete or rename a pool

2013-11-28 Thread Ирек Фасихов
Hi. Try: ceph osd pool delete --help OR ceph osd pool delete -h 2013/11/29 You, RongX > Hi, > > I have made a mistake, and create a pool named "-help", > > Execute command "ceph osd lspools", and returns: > > 0 data,1 metadata,2 rbd,3 testpool1,4 testpool2,5 -help,6 > te
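For reference, the delete syntax that help output points at; whether a pool name starting with '-' needs quoting or a rename first is an assumption on my part:

    ceph osd pool delete <pool-name> <pool-name> --yes-i-really-really-mean-it
    ceph osd pool rename <old-name> <new-name>    # possibly rename the awkward pool first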

Re: [ceph-users] HEALTH_WARN # requests are blocked > 32 sec

2013-11-25 Thread Ирек Фасихов
ceph health detail 2013/11/25 Michael > Hi, > > Any ideas on troubleshooting a "requests are blocked" when all of the > nodes appear to be running OK? > Nothing gets reported in /var/log/ceph/ceph.log as everything is > active+clean throughout the event. All of the nodes can be accessed and al
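A rough follow-up once health detail names the slow OSD; the admin socket path is the default one and osd.3 is a placeholder:

    ceph health detail                                                      # which OSDs have blocked requests
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_ops_in_flight    # what those requests are waiting on
    ceph --admin-daemon /var/run/ceph/ceph-osd.3.asok dump_historic_ops     # recent slow operations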

Re: [ceph-users] PG state diagram

2013-11-25 Thread Ирек Фасихов
Yes, I would like to see this graph. Thanks 2013/11/25 Regola, Nathan (Contractor) > Is there a vector graphics file (or a higher resolution file of some type) > of the state diagram on the page below, as I can't read the text. > > Thanks, > Nate > > > http://ceph.com/docs/master/dev/peering/

Re: [ceph-users] ceph osd thrash?

2013-11-11 Thread Ирек Фасихов
Thanks, Greg. On 12 Nov 2013 at 4:00, "Gregory Farnum" wrote: > > On Mon, Nov 11, 2013 at 2:16 AM, Ирек Фасихов wrote: > > Hello community. > > > > I do not understand the argument: ceph osd thrash. > > Why the need for this option? > >

[ceph-users] ceph osd thrash?

2013-11-11 Thread Ирек Фасихов
Hello community. I do not understand the argument: ceph osd thrash. Why is this option needed? I could not find a description of the parameter in the documentation. Where can I read a more detailed description of it? Thank you. -- Best regards, Фасихов Ирек Нургаязович Mob.: +7922904575
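For context, a hedged note: as far as I know this is a test/QA helper rather than an operational command, and the argument is a number of OSD map epochs:

    # DO NOT run on a production cluster: randomly marks OSDs out/in to exercise peering and recovery
    ceph osd thrash 10    # thrash the OSD map for 10 epochs (my understanding of the semantics)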

Re: [ceph-users] Help with CRUSH maps

2013-10-31 Thread Ирек Фасихов
https://ceph.com/docs/master/rados/operations/crush-map/#placing-different-pools-on-different-osds See "rule ssd-primary" On 31 Oct 2013 at 17:29, "Alexis GÜNST HORN" < alexis.gunsth...@outscale.com> wrote: > Hello to all, > > Here is my ceph osd tree output : > > > # idweight t
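The rule that section describes looks roughly like the sketch below; 'ssd' and 'platter' stand for whatever roots exist in your own CRUSH map, and the numeric values are from memory of that page rather than verbatim:

    rule ssd-primary {
            ruleset 5
            type replicated
            min_size 1
            max_size 10
            step take ssd
            step chooseleaf firstn 1 type host
            step emit
            step take platter
            step chooseleaf firstn -1 type host
            step emit
    }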

Re: [ceph-users] Ceph PG's incomplete and OSD drives lost

2013-10-30 Thread Ирек Фасихов
2013/10/31 Иван Кудрявцев > Hello, List. > > I met very big trouble during ceph upgrade from bobtail to cuttlefish. > > My OSDs started to crash to stale so LA went to 100+ on node, after I stop > OSD I unable to launch it again because of errors. So, I started to > reformat OSDs and eventually

[ceph-users] interested questions

2013-10-29 Thread Ирек Фасихов
Hi, All. I am interested in the following questions: 1. Does the number of HDDs affect cluster performance? 2. Does anyone have experience running KVM virtualization and Ceph on the same server? Thanks! -- Best regards, Фасихов Ирек Нургаязович Mob.: +79229045757 ___

Re: [ceph-users] Balance data on near full osd warning or error

2013-10-22 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/operations/placement-groups/ 2013/10/22 HURTEVENT VINCENT > Hello, > > we're using a small Ceph cluster with 8 nodes, each 4 osds. People are > using it through instances and volumes in a Openstack platform. > > We're facing a HEALTH_ERR with full or near full

Re: [ceph-users] Ceph OSDs not using private network

2013-10-22 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/configuration/network-config-ref/ On 22 Oct 2013 at 18:22, "Abhay Sachan" wrote: > Hi All, > I have a ceph cluster setup with 3 nodes which has 1Gbps public network > and 10Gbps private cluster network which is not accessible from public > network. I
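That reference boils down to two keys in ceph.conf; a minimal sketch with placeholder subnets (the OSDs have to be restarted after the change):

    [global]
        public network  = 192.168.1.0/24    # 1 Gbps client-facing network (placeholder subnet)
        cluster network = 10.10.10.0/24     # 10 Gbps replication/heartbeat network (placeholder subnet)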

Re: [ceph-users] ceph uses too much disk space!!

2013-10-06 Thread Ирек Фасихов
http://ceph.com/docs/master/rados/operations/placement-groups/ 2013/10/5 Linux Chips > Hi every one; > we have a small testing cluster, one node with 4 OSDs of 3TB each. i > created one RBD image of 4TB. now the cluster is nearly full: > > # ceph df > GLOBAL: > SIZE AVAIL RAW USE
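Besides PG count, the raw usage in that report is largely explained by replication; a hedged back-of-the-envelope, assuming a replica count of 2 (the default at the time, to be confirmed against the dump output):

    ceph osd dump | grep 'rep size'    # replica count per pool
    ceph df                            # per-pool usage vs. raw usage
    # 4 TB image x 2 replicas ~= 8 TB raw, against 4 x 3 TB = 12 TB total capacity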