Hi all,
I'm looking into other ways I can boost the performance of RBD devices
on the cluster here and I happened to see these settings:
http://ceph.com/docs/next/rbd/rbd-config-ref/
A query, is it possible for the cache mentioned there to be paged out to
swap residing on an SSD, or is it purely R
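For reference, the options described on that page are set per client in
ceph.conf; a minimal sketch, with purely illustrative values:

[client]
rbd cache = true
rbd cache size = 33554432                  # 32 MB per client
rbd cache max dirty = 25165824
rbd cache target dirty = 16777216
rbd cache writethrough until flush = true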
On 06/05/2014 08:59 AM, Stuart Longland wrote:
Hi all,
I'm looking into other ways I can boost the performance of RBD devices
on the cluster here and I happened to see these settings:
http://ceph.com/docs/next/rbd/rbd-config-ref/
A query, is it possible for the cache mentioned there to be page
Hello,
> Huh. We took the 5MB limit from S3, but it definitely is unfortunate
> in combination with our 4MB chunking. You can change the default slice
> size using a config option, though. I believe you want to change
> rgw_obj_stripe_size (default: 4 << 20). There might be some other
> considerat
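If you do end up changing it, the option goes into the radosgw section of
ceph.conf; a hedged sketch (the section name and the 5 MB value are only
examples):

[client.radosgw.gateway]
rgw obj stripe size = 5242880    # 5 MB instead of the default 4 << 20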
Hi All,
I have a four-node ceph cluster, but the command ceph -s shows a health
warning. How should I safely synchronize time across the nodes?
The OS platform is Ubuntu.
---
root@cephadmin:/home/oss# ceph -s
cluster 9acd33d7-759b-45f4-b48f-a4682fd6c674
health HEALTH_WARN clock
On 06/05/2014 10:24 AM, yalla.gnan.ku...@accenture.com wrote:
Hi All,
I have a four-node ceph cluster, but the command ceph -s shows a health
warning. How should I safely synchronize time across the nodes?
Try to run this first:
$ ceph health detail
It will tell you the skew per monitor.
A
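For the time sync itself, a minimal sketch on Ubuntu, assuming ntpd (chrony
works just as well):

$ sudo apt-get install ntp
$ sudo service ntp restart
$ ntpq -p    # peers should be reachable and offsets small

The warning clears once the skew drops below mon clock drift allowed
(0.05 s by default).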
Hi,
I was running some tests on the new civetweb frontend, hoping to get
rid of the lighttpd we have in front of it and found an issue.
If you execute a HEAD on something that returns an error, the _body_
of the error will be sent, which is incorrect for a HEAD. In a
keepalive scenario this scre
ceph 0.72.2 on SL6.5 from offical repo.
After taking one of the OSDs down (to later take the server out), one of the PGs became incomplete:
$ ceph health detail
HEALTH_WARN 1 pgs incomplete; 1 pgs stuck inactive; 1 pgs stuck unclean; 2
requests are blocked > 32 sec; 1 osds have slow requests
pg 4.77 is stuck inac
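Not from the original message, but two commands that usually help when digging
into a stuck PG:

$ ceph pg dump_stuck inactive
$ ceph pg 4.77 query    # shows the acting/probing OSDs and why the PG is incomplete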
Hi,
I'm new to ceph and I'm trying to understand how replicating data
between two regions/zones works.
I've read this http://ceph.com/docs/master/radosgw/federated-config/
and this
http://www.sebastien-han.fr/blog/2013/01/28/ceph-geo-replication-sort-of/
and tried that http://blog.kri5.fr/?p=2
Hi all,
could someone explain what happens when I create an rbd image in a cluster
using a public and a cluster network?
The client is on the public network and creates the image with the rbd command.
I think the client contacts the monitor on the public network.
Which network is used for replicat
Hi,
Is it possible to remove the metadata and data pools with FireFly (0.80)? As in
delete them...
We will not be using any form of CephFS and the cluster is simply designed for
RBD devices, so these pools and their abilities will likely never be used
either; however, when I try to remove them it
As far as I know, at the moment you can only delete them if you
create a newfs on other pools. But I think I saw a discussion
somewhere about having an 'rmfs' command in the future :)
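A rough sketch of that workaround on Firefly; the pool names and PG counts are
only illustrative, and the pool IDs for newfs come from ceph osd lspools:

$ ceph osd pool create metadata2 64 64
$ ceph osd pool create data2 64 64
$ ceph mds newfs <metadata2-pool-id> <data2-pool-id> --yes-i-really-mean-it
$ ceph osd pool delete metadata metadata --yes-i-really-really-mean-it
$ ceph osd pool delete data data --yes-i-really-really-mean-it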
- Message from Pieter Koorts -
Date: Thu, 05 Jun 2014 11:12:46 +0000 (GMT)
From: Pieter K
On 06/05/2014 12:37 PM, Ignazio Cassano wrote:
Hi all,
could someone explain what happens when I create an rbd image in a cluster
using a public and a cluster network?
The client is on the public network and creates the image with the rbd command.
I think the client contacts the monitor on the public n
Hi all,
A couple of weeks ago I upgraded from Emperor to Firefly.
I'm using CloudStack with Ceph as the storage backend for VMs and templates.
Since the upgrade, ceph is in a HEALTH_ERR with 500+ pgs inconsistent and
2000+ scrub errors. Not sure if it has to do with Firefly though, but
the u
Hi All,
I have a four-node ceph cluster. I have another three-node setup for OpenStack.
I have integrated Ceph with OpenStack.
Whenever I try to create storage with ceph as the storage backend for the OpenStack
VM, the creation process goes on forever in the Horizon dashboard.
It never completes.
Ah okay, Thanks for the hint.
Hopefully it will be allowed in the LTS release. Technically not a new feature
but I don't know much of the backend code design.
Pieter
On Jun 05, 2014, at 12:20 PM, Kenneth Waegeman
wrote:
As far as I know, at this moment you could only delete them if you
cr
On 06/05/2014 01:38 PM, yalla.gnan.ku...@accenture.com wrote:
Hi All,
I have four node ceph cluster. I have another three node setup for
openstack. I have integrated Ceph with openstack.
Whenever I try to create storage with ceph as storage backend for the
openstack vm, the creation process
Hi all,
Did you find a solution? Because I’m experiencing the same issue with my fresh
install (5 nodes).
I’ve got 5 mon_initial_members:
mon_initial_members = ceph1, ceph2, ceph3, ceph4, ceph5
mon_host = ip1, ip2, ip3, ip4, ip5
Trying to ping the whole cluster with the hostnames from my admin-n
On Thu, Jun 5, 2014 at 8:58 AM, NEVEU Stephane
wrote:
> Hi all,
>
>
>
> Did you find a solution ? Because I’m experiencing the same issue for my
> fresh install (5 nodes)
You should increase the verbosity of the mons in ceph.conf, restart
the monitors and then see what the logs say.
Like:
[mon]
Hi, several OSDs were down/out with logs similar to those below; could you help?
-38> 2014-06-05 10:27:54.700832 7f2ceead6700 1 -- 192.168.40.11:6800/19542
<== osd.11 192.168.40.11:6822/20298 2 pg_notify(0.aa4(2) epoch 7) v5
812+0+0 (3873498789 0 0) 0x57a0540 con 0x49d14a0
-37> 2014-0
Alfredo,
Believe it or not, simply adding this to my ceph.conf:
[mon]
debug mon = 20
debug ms = 10
and then purging and redeploying the whole cluster made the keys get created!!
So thank you very much, but I can't figure out why it wasn't working previously
:/
-Message d'ori
Hello,
Probably this is an anti-pattern, but I need an answer on how this
will or will not work.
Input:
I have a single host for tests with ceph 0.80.1 and 2 OSDs:
OSD.0 – 1000 GB
OSD.1 – 750 GB
Recompiled the CRUSH map to set "step chooseleaf fi
Hello all,
I'm currently building a new small cluster with three nodes, each node
having 4x 1 Gbit/s network interfaces available and 8-10 OSDs running
per node.
I thought I'd assign 2x 1 Gb/s to the public network, and the other 2x 1
Gb/s for the cluster network.
My low-budget setup consis
many thanks
2014-06-05 13:33 GMT+02:00 Wido den Hollander :
> On 06/05/2014 12:37 PM, Ignazio Cassano wrote:
>
>> Hi all,
>> could someone explain what happens when I create an rbd image in a cluster
>> using a public and a cluster network?
>>
>> The client is on the public network and create a
On Thu, 5 Jun 2014, Wido den Hollander wrote:
> On 06/05/2014 08:59 AM, Stuart Longland wrote:
> > Hi all,
> >
> > I'm looking into other ways I can boost the performance of RBD devices
> > on the cluster here and I happened to see these settings:
> >
> > http://ceph.com/docs/next/rbd/rbd-config-
This usually happens on larger clusters when you hit the max fd limit.
Add
max open files = 131072
in the [global] section of ceph.conf to fix it (default is 16384).
sage
On Thu, 5 Jun 2014, Cao, Buddy wrote:
>
> Hi, several osds were down/out with similar logs as below, could you
Sage,
Yes, I already set max open files in ceph.conf, and I also checked in the OS.
Any suggestions?
# cat /proc/sys/fs/file-max
3274834
[global]
auth supported = cephx
keyring = /etc/ceph/keyring.admin
max open files = 131072
osd pool default pg num = 1600
osd pool default pgp num = 1600
Wei C
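One more thing worth checking (a suggestion, not from the thread): the limit
that matters is the per-process one of the running ceph-osd daemons, not the
system-wide file-max:

$ cat /proc/$(pidof -s ceph-osd)/limits | grep 'open files'
$ ls /proc/$(pidof -s ceph-osd)/fd | wc -l    # current number of open fds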
Hello,
On Thu, 5 Jun 2014 14:11:47 + Vadim Kimlaychuk wrote:
> Hello,
>
> Probably this is anti-pattern, but I have to get answer how
> this will work / not work. Input:
> I have single host for tests with ceph 0.80.1 and 2 OSD:
> OSD.0 – 1000 Gb
>
Just wanted to close this open loop:
I gave up attempting to recover pool 4 as it was just test data, and the PGs
with unfound objects were localized to that pool. After I destroyed and
recreated the pool, things were fine.
Thank you for your help, Florian.
./JRH
On Jun 3, 2014, at 6:30 PM, Jas
Hi folks,
I just upgraded from 0.72 to 0.80, and everything seems fine
with the exception of one mds, which refuses to start because
"one or more OSDs do not support TMAP2OMAP". Two other mdses are
fine. I've checked the osd processes, and they are all version 0.80.1,
and they were all u
Hi,
>>My low-budget setup consists of two gigabit switches, capable of LACP,
>>but not stackable. For redundancy, I'd like to have my links spread
>>evenly over both switches.
If you want to do LACP across both switches, they need to be stackable.
(or use active-backup bonding)
>>My question wh
This may happen if the mon wasn't upgraded to 0.80.x before all of the
OSDs were restarted. You should be able to find the OSD(s) that were
restarted before the mons with
ceph osd dump -f json-pretty | grep features
Look for features set to 0 instead of some big number.
sage
On Thu, 5 Jun
Hi Alexandre,
thanks for the reply. As said, my switches are not stackable, so using LACP
seems not to be my best option.
I'm seeking an explanation of how Ceph utilizes two (or more) independent
links on both the public and the cluster network.
If I configure two IPs for the public netwo
On Thu, 05 Jun 2014 16:20:04 +0200, Sven Budde
wrote:
> My question where I didn't find a conclusive answer in the documentation
> and mailing archives:
> Will the OSDs utilize both 'single' interfaces per network, if I assign
> two IPs per public and per cluster network? Or will all OSDs just
I have
osd pool default size = 2
in my ceph.conf. Shouldn't it tell ceph to use 2 OSDs? Or is it somewhere in the
CRUSH map?
Vadim
From: Christian Balzer [ch...@gol.com]
Sent: Thursday, June 05, 2014 18:26
To: Vadim Kimlaychuk
Cc: ceph-users@lists.ceph.co
On Thu, Jun 5, 2014 at 4:38 AM, Dennis Kramer wrote:
> Hi all,
>
> A couple of weeks ago i've upgraded from emperor to firefly.
> I'm using Cloudstack /w CEPH as the storage backend for VMs and templates.
Which versions exactly were you and are you running?
>
> Since the upgrade, ceph is in a HE
I don't believe that should cause any issues; the chunk sizes are in
the metadata.
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Jun 5, 2014 at 12:23 AM, Sylvain Munaut
wrote:
> Hello,
>
>> Huh. We took the 5MB limit from S3, but it definitely is unfortunate
>> in co
ceph osd dump | grep size
Check that all pools are size 2, min size 2 or 1.
If not you can change on the fly with:
ceph osd pool set #poolname size/min_size #size
See docs http://ceph.com/docs/master/rados/operations/pools/ for
alterations to pool attributes.
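For example, assuming the pool is called rbd:

$ ceph osd pool set rbd size 2
$ ceph osd pool set rbd min_size 1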
-Michael
On 05/06/2014 17:29, V
Right. It only matters for newly created objects.
On Thu, Jun 5, 2014 at 10:01 AM, Gregory Farnum wrote:
> I don't believe that should cause any issues; the chunk sizes are in
> the metadata.
> -Greg
> Software Engineer #42 @ http://inktank.com | http://ceph.com
>
>
> On Thu, Jun 5, 2014 at 12:2
On 05/06/2014 18:27, Sven Budde wrote:
> Hi Alexandre,
>
> thanks for the reply. As said, my switches are not stackable, so using LACP
> seems not to be my best option.
>
> I'm seeking an explanation of how Ceph utilizes two (or more)
> independent links on both the public and the cluster
Hello,
dmesg:
[ 690.181780] libceph: mon1 192.168.214.102:6789 feature set mismatch,
my 4a042a42 < server's 504a042a42, missing 50
[ 690.181907] libceph: mon1 192.168.214.102:6789 socket error on read
[ 700.190342] libceph: mon0 192.168.214.101:6789 feature set mismatch,
my 4a042a42 < s
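Not from the original report, but when an older kernel client shows missing
high feature bits against a Firefly cluster, it is usually down to newer
CRUSH/OSD features the kernel does not yet support. A sketch of what to check,
and a possible fallback if upgrading the client kernel is not an option:

$ ceph osd crush show-tunables
$ ceph osd crush tunables legacy    # or 'bobtail'; gives up newer placement behaviour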
You'll also want to change the crush weights of your OSDs to reflect
the different sizes so that the smaller disks don't get filled up
prematurely. See "weighting bucket items" here:
http://ceph.com/docs/master/rados/operations/crush-map/
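For example, matching the disk sizes mentioned earlier in this thread (the osd
names here are assumptions):

$ ceph osd crush reweight osd.0 1.0     # 1000 GB disk
$ ceph osd crush reweight osd.1 0.75    # 750 GB disk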
On Thu, Jun 5, 2014 at 10:14 AM, Michael wrote:
> ceph os
>>I'm seeking for an explanation how Ceph is utilizing two (or more)
>>independent links on both the public and the cluster network.
public network : client -> osd
cluster network : osd->osd (mainly replication)
>>If I configure two IPs for the public network on two NICs, will Ceph route
>>tra
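For completeness, the split itself is configured in ceph.conf; a minimal sketch
with example subnets:

[global]
public network = 192.168.1.0/24
cluster network = 192.168.2.0/24

Clients and monitors only need to reach the public network; the cluster network
carries OSD-to-OSD traffic.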
Doing bonding without LACP is probably going to end up being painful.
Sooner or later you're going to end up with one end thinking that bonding
is working while the other end thinks that it's not, and half of your
traffic is going to get black-holed.
I've had moderately decent luck running Ceph o
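For what it's worth, a sketch of an active-backup bond on Debian/Ubuntu that
works with two plain, non-stacked switches (interface names and addresses are
only examples):

auto bond0
iface bond0 inet static
    address 192.168.1.11
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode active-backup
    bond-miimon 100
    bond-primary eth0

You get failover but no aggregation; only one link carries traffic at a time.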
Yes, forgot to mention that; of course LACP and stackable switches are
the safest and easiest way, but sometimes when budget is a constraint you
have to deal with it. The price difference between simple Gb switches and
stackable ones is not negligible. You generally get what you pay for ;-)
But I think
Does rbd read/write a full block of data (4 MB) for every read/write, or can rbd
read/write part of a block?
For example, I have a 500 MB file (a database) and need random reads/writes in
blocks of about 1-10 KB.
Will rbd read 4 MB from the HDD for every 1 KB read?
And for writes?
On 05/06/14 17:01, yalla.gnan.ku...@accenture.com wrote:
Hi All,
I have a ceph storage cluster with four nodes. I have created block storage
using cinder in openstack and ceph as its storage backend.
So, I see a volume is created in ceph in one of the pools. But how do I get
information like o
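A few commands that usually answer that, assuming cinder writes into a pool
called volumes (adjust to your pool name):

$ rbd ls volumes
$ rbd info volumes/volume-<cinder-volume-uuid>
$ rbd snap ls volumes/volume-<cinder-volume-uuid>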
Hello Ceph Guru,
I rebooted the OSD server to fix “osd.33”. When the server came back online, I
noticed all the OSDs were down. While troubleshooting and restarting the
OSDs, I got the error below for authentication. I also noticed the “keyring” for
each OSD had shifted. For example, for osd.33 whic
I would think that rbd blocks are like stripes in RAID or blocks on hard
drives: even if you only need to read or write 1 KB, the full stripe has to be
read or written.
Cheers
--
Cédric Lemarchand
> On 5 June 2014 at 22:56, Timofey Koolin wrote:
>
> Do for every read/write rbd read/write ful
Hi all,
I did a bit of an experiment with multi-mds on firefly, and it worked fine
until one of the MDS crashed when rebalancing. It's not the end of the world,
and I could just start fresh with the cluster, but I'm keen to see if this can
be fixed as running multi-mds is something I would like
There's some prefetching and stuff, but the rbd library and RADOS storage
are capable of issuing reads and writes of any size (well, down to the
minimal size of the underlying physical disk).
There are some scenarios where you will see it writing a lot more if you
use layering -- promotion of data
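If you would rather measure it than reason about it, fio's rbd engine issues
I/O straight through librbd; a sketch, where the pool and image names are only
examples and the image must already exist:

$ fio --name=randwrite-4k --ioengine=rbd --clientname=admin --pool=rbd \
      --rbdname=fio-test --rw=randwrite --bs=4k --iodepth=32 \
      --runtime=60 --time_based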
On Fri, Jun 6, 2014 at 8:38 AM, David Jericho
wrote:
> Hi all,
>
>
>
> I did a bit of an experiment with multi-mds on firefly, and it worked fine
> until one of the MDS crashed when rebalancing. It's not the end of the
> world, and I could just start fresh with the cluster, but I'm keen to see if
The rados_stat() call in my FUSE module blocks forever. What's the problem?
Why?
#define FUSE_USE_VERSION 26
/* the original header names were stripped; these are assumed, being the
   ones a FUSE + librados program needs */
#include <fuse.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>
#include <rados/librados.h>
char *poold = "data";
char *p
On 11/28/13, 4:18 AM, Dan Van Der Ster wrote:
> Dear users/experts,
> Does anyone know how to use radosgw-admin log show? It seems to not properly
> read the --bucket parameter.
>
> # radosgw-admin log show --bucket=asdf --date=2013-11-28-09
> --bucket-id=default.7750582.1
> error reading log 20
On Thu, Jun 5, 2014 at 8:42 PM, Derek Yarnell wrote:
> On 11/28/13, 4:18 AM, Dan Van Der Ster wrote:
>> Dear users/experts,
>> Does anyone know how to use radosgw-admin log show? It seems to not properly
>> read the --bucket parameter.
>>
>> # radosgw-admin log show --bucket=asdf --date=2013-11-2
Sage,
When you talk about a "large cluster", do you mean the total number of OSDs or the
OSDs on each storage node? I set up the cluster on 4 storage nodes with 30
OSDs (800 GB HDD) on each node. Since I already set max open files in
ceph.conf, do you think I should also adjust the user/system level ma
On Fri, Jun 6, 2014 at 8:38 AM, David Jericho
wrote:
> Hi all,
>
>
>
> I did a bit of an experiment with multi-mds on firefly, and it worked fine
> until one of the MDS crashed when rebalancing. It's not the end of the
> world, and I could just start fresh with the cluster, but I'm keen to see if
> -Original Message-
> From: Yan, Zheng [mailto:uker...@gmail.com]
> looks like you removed mds.0 from the failed list. I don't think there is a
> command to add mds the failed back. maybe you can use 'ceph mds setmap
> ...' .
From memory, I probably did, misunderstanding how it worked.
Hi everybody,
we are going to hold our first French Proxmox meetup in Paris in September:
http://www.meetup.com/Proxmox-VE-French-Meetup/
And of course, we'll talk about Ceph integration in Proxmox.
So if you are interested, feel free to join us!
Regards,
Alexandre
Michael, indeed I had pool size = 3. I changed it to 2. After that I
recompiled the CRUSH map to reflect the different sizes of the hard drives and set
1.0 for the 1 TB drive and 0.75 for the 750 GB one.
Now all my PGs are at status "active". They should be "active+clean", shouldn't
they?
I put an object into the clu
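Worth double-checking (just a suggestion): that the recompiled rule really is
the one in use, and what the unclean PGs are actually waiting for:

$ ceph osd crush rule dump      # confirm the 'chooseleaf ... type osd' step is there
$ ceph pg dump_stuck unclean
$ ceph pg <pgid> query          # pgid taken from the dump above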
Hi Alexandre,
http://www.lataverneducroissant.fr/ turned out to be a nice place for the Ceph
meetups. It's free of charge also ... as long as people drink. You just have to
be careful to choose a night without a football event, otherwise the video projector
is not available. And it's too noisy to disc