Looks like you enabled directory fragmentation, which is buggy in ceph version 0.72.
Regards
Yan, Zheng
If it's enabled, that wasn't intentional. So how would I disable it?
Regards,
Georg
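In case it helps, a minimal sketch of what disabling it might look like, assuming it was turned on via the Emperor-era 'mds bal frag' option (that is an assumption on my part, not something confirmed in this thread): in ceph.conf set

[mds]
    # assumption: fragmentation was enabled via this option; it defaults to false
    mds bal frag = false

and then restart the MDS daemons.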
And that's exactly what it sounds like — the MDS isn't finding objects
that are supposed to be in the RADOS cluster.
I'm not sure what I should think about that. The MDS shouldn't access data
for RADOS, and vice versa, should it?
Anyway, glad it fixed itself, but it sounds like you've got some
infrastruc
Hi Karan,
thanks for your cooperation.
I got some help on #ceph, so now I have mon[123], mds[123], and osd[12345].
I also have one client on CentOS 6.5 with the elrepo LT kernel, which gives me a
non-FUSE mount of my CephFS.
I have some simple test cases, like bonnie++ and this:
for i in `seq 0 256`; do t
Hi, list:
I tried to sync between a master zone and a slave zone belonging to one region. I stored
a file "hello" in the bucket beast, and it returned an error when running
radosgw-agent to sync the objects.
But the metadata is OK!
radosgw-agent reported the following:
Thu, 24 Apr 2014 08:33:29 GMT
/admi
Hello,
Today I read the monitor troubleshooting doc
(https://ceph.com/docs/master/rados/troubleshooting/troubleshooting-mon/) with
this section:
Scrap the monitor and create a new one
You should only take this route if you are positive that you won’t
lose the information kept by th
- Message from Craig Lewis -
Date: Fri, 18 Apr 2014 14:59:25 -0700
From: Craig Lewis
Subject: Re: [ceph-users] OSD distribution unequally
To: ceph-users@lists.ceph.com
When you increase the number of PGs, don't just go to the max value.
Step into it.
You'll want to e
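As a rough sketch of what stepping looks like, assuming a pool named 'rbd' and placeholder numbers (not taken from this thread):

$ ceph osd pool set rbd pg_num 256
$ ceph osd pool set rbd pgp_num 256
# let the cluster settle, then repeat with the next step (512, 1024, ...)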
On 04/23/2014 09:35 PM, Craig Lewis wrote:
On 4/23/14 12:33 , Dyweni - Ceph-Users wrote:
Hi,
I'd like to know what happens to a cluster with one monitor while that
one monitor process is being restarted.
For example, if I have an RBD image mounted and in use (actively
reading/writing) when I
Hi,
I can't delete a pool with an empty name:
$ sudo rados rmpool "" "" --yes-i-really-really-mean-it
successfully deleted pool
but after a few seconds it is recreated automatically.
$ sudo ceph osd dump | grep '^pool'
pool 3 '.rgw' rep size 2 min_size 1 crush_ruleset 0 object_hash rjenkins pg_num 8
On 04/24/2014 12:05 PM, myk...@gmail.com wrote:
Hi,
I can't delete a pool with an empty name:
$ sudo rados rmpool "" "" --yes-i-really-really-mean-it
successfully deleted pool
but after a few seconds it is recreated automatically.
I have the same 'problem'. I think it's something which is done by
You need to create a pool named ".rgw.buckets.index"
2014-04-24 14:05 GMT+04:00 :
> Hi,
>
> I can't delete a pool with an empty name:
>
> $ sudo rados rmpool "" "" --yes-i-really-really-mean-it
> successfully deleted pool
>
> but after a few seconds it is recreated automatically.
>
> $ sudo ceph osd
> You need to create a pool named ".rgw.buckets.index"
I tried that before I sent this message to the list.
All of my buckets have "index_pool": ".rgw.buckets".
--
Regards,
Mikhail
On Thu, 24 Apr 2014 14:21:57 +0400
Irek Fasikhov wrote:
> You need to create a pool named ".rgw.buckets.index"
>
>
>
Hi,
I am new to the storage cluster concept. I have been tasked with testing Ceph.
I have two Dell PE R515 machines running Ubuntu 12.04 LTS. I understand that I
need to make one the admin node, where I will be able to run all the commands,
and the other node the server, where the OSD processes will
These pools have different purposes.
[root@ceph01 ~]# radosgw-admin zone list
{ "zones": [
"default"]}
[root@ceph01 ~]# radosgw-admin zone get default
{ "domain_root": ".rgw",
"control_pool": ".rgw.control",
"gc_pool": ".rgw.gc",
"log_pool": ".log",
"intent_log_pool": ".intent-log",
I don't use distributed replication across zones.
$ sudo radosgw-admin zone list
{ "zones": [
"default"]}
--
Regards,
Mikhail
On Thu, 24 Apr 2014 14:52:09 +0400
Irek Fasikhov wrote:
> These pools have different purposes.
>
>
> [root@ceph01 ~]# radosgw-admin zone list
> { "zones": [
>
I do not use distributed replication across zones. :)
2014-04-24 15:00 GMT+04:00 :
> I don't use distributed replication across zones.
> $ sudo radosgw-admin zone list
> { "zones": [
> "default"]}
>
> --
> Regards,
> Mikhail
>
>
> On Thu, 24 Apr 2014 14:52:09 +0400
> Irek Fasikhov wrote:
Hi,
We also get the '' pool from rgw, which is clearly a bug somewhere. But
we recently learned that you can prevent it from being recreated by
removing the 'x' capability on the mon from your client.radosgw.* users,
for example:
client.radosgw.cephrgw1
key: xxx
caps: [mon] allow r
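A sketch of applying that change with 'ceph auth caps'; note that it replaces all caps for the user, so the existing osd cap has to be restated (the 'allow rwx' osd cap below is a placeholder assumption):

$ ceph auth get client.radosgw.cephrgw1
$ ceph auth caps client.radosgw.cephrgw1 mon 'allow r' osd 'allow rwx'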
>>My recommendation would be to add that bdrv_invalidate() implementation,
>>then we can be sure for raw, and get the rest fixed as well.
There is a bug tracker issue about bdrv_invalidate(), closed 2 years ago:
http://tracker.ceph.com/issues/2467
Can we reopen it?
- Original Mail -
From
Hello,
On 24/04/2014 12:39, Sakhi Hadebe wrote:
>
> Hi,
>
>
> I am new to the storage cluster concept. I have been tasked with testing Ceph.
>
>
> I have two Dell PE R515 machines running Ubuntu 12.04 LTS. I
> understand that I need to make one the admin node, where I will be able
> to run all the c
Hi Cedric
Regards,
Sakhi Hadebe
SANReN Engineer - CSIR Meraka Institute
Tel: +27 12 841 2308
Fax: +27 12 841 4223
http://www.sanren.ac.za
>>> Cedric Lemarchand 4/24/2014 2:24 PM >>>
Hello,
On 24/04/2014 12:39, Sakhi Hadebe wrote:
Hi,
I am new to the storage cluster concept
Hi,
you can try with 1 monitor node and 2 OSD nodes minimum. Detailed
instructions are given below.
http://karan-mj.blogspot.in/2013/12/what-is-ceph-ceph-is-open-source.html
Srinivas.
On Thu, Apr 24, 2014 at 5:54 PM, Cedric Lemarchand wrote:
> Hello,
>
> On 24/04/2014 12:39, Sakhi Hadebe
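For a quick test setup along those lines, a hedged ceph-deploy sketch run from the admin node; the hostnames node1/node2 and the disk sdb are purely hypothetical:

$ ceph-deploy new node1
$ ceph-deploy install node1 node2
$ ceph-deploy mon create-initial
$ ceph-deploy osd create node1:sdb node2:sdb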
Hello,
I am testing radosgw-agent for federation. I have two fully working
clusters with master/secondary zones.
When I try to run radosgw-agent, I receive the following error:
root@us-master:/etc/ceph# radosgw-agent -c inter-sync.conf
ERROR:root:Could not retrieve region map from destination
Tr
I also have this issue and there is another thread on it. radosgw-agent
will sync metadata but not data.
Do you have different gateway system user keys on the master and slave zones?
On 04/24/2014 09:45 AM, lixuehui wrote:
Hi, list:
I tried to sync between a master zone and a slave zone belonging to one regi
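A hedged way to compare them, assuming the federation system user is called 'sync-user' (a placeholder) and exists in both zones:

$ radosgw-admin user info --uid=sync-user   # on the master zone
$ radosgw-admin user info --uid=sync-user   # on the slave zone; compare the access/secret keys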
Hi Yehuda,
I am getting the following from the radosgw logs :-
---
2014-04-22 09:36:00.024618 7ff16ccf6700 1 == starting new request
req=0x1ec7270 =
2014-04-22 09:36:00.024719 7ff16ccf6700 2 req 15:0.000100::GET
/admin/usage::initializing
2014-04-22 09:36:00.024731 7
Hi All,
What does osd_recovery_max_single_start do? I could not find a description
of it.
Thanks!
Chad.
It totally depends on what kind of hardware you are using in your Ceph
cluster. You should always choose the right hardware for your cluster, keeping
your performance needs in mind.
Can you tell us about the hardware you have or are planning to purchase, and also what
kind of workload your setup woul
I'm trying to configure a small ceph cluster with both public and
cluster networks.
This is my conf:
[global]
public_network = 192.168.0/24
cluster_network = 10.0.0.0/24
auth cluster required = cephx
auth service required = cephx
auth client required = cephx
fsid = 004baba0-74dc-4429-8
On Thursday, April 24, 2014, Georg Höllrigl
wrote:
>
> And that's exactly what it sounds like — the MDS isn't finding objects
>> that are supposed to be in the RADOS cluster.
>>
>
> I'm not sure what I should think about that. The MDS shouldn't access data
> for RADOS, and vice versa, should it?
The metadata
Do you have a typo? :
public_network = 192.168.0/24
should this read:
public_network = 192.168.0.0/24
On 04/24/2014 04:53 PM, Gandalf Corvotempesta wrote:
I'm trying to configure a small ceph cluster with both public and
cluster networks.
This is my conf:
[global]
public_network = 192.
Hello again. I'm planning to rent 2 servers with 22 HDDs each: 2 in RAID1 for the
system and 20 to test in two ways:
20 HDDs = 20 OSDs
or
20 HDDs = 1 OSD.
10-15k SAS and SATA drives; I don't know the exact specification, the servers are
still being prepared.
Workload: a lot of reads of small files, 1-20 MB, but 1,000,000,000+
On Thu, Apr 24, 2014 at 8:04 AM, Punit Dambiwal wrote:
>
>
> Hi Yehuda,
>
> I am getting the following from the radosgw logs :-
>
> ---
> 2014-04-22 09:36:00.024618 7ff16ccf6700 1 == starting new request
> req=0x1ec7270 =
> 2014-04-22 09:36:00.024719 7ff16ccf6700 2 r
During a recovery, I'm hitting the oom-killer for ceph-osd because it's
using more than 90% of available RAM (8 GB).
How can I decrease the memory footprint during a recovery?
2014-04-24 18:09 GMT+02:00 Peter :
> Do you have a typo? :
>
> public_network = 192.168.0/24
>
>
> should this read:
>
> public_network = 192.168.0.0/24
Sorry, it was a typo when posting to the list.
ceph.conf is correct.
The value of osd_recovery_max_single_start (default 5) is used in conjunction
with osd_recovery_max_active (default 15). This means that a given PG will
start up to 5 recovery operations at a time, out of a total of 15 operations active at
a time. This allows recovery to spread operations across mo
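In ceph.conf form, with the defaults quoted above (a sketch; tune to your cluster):

[osd]
    osd recovery max active = 15
    osd recovery max single start = 5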
Hi,
Per the docs, I see that cloning is only supported in format 2, and that
the kernel rbd module does not support format 2.
Is there another way to be able to mount a format 2 rbd image on a
physical host without using the kernel rbd module?
One idea I had (not tested) is to use rbd-fuse
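A minimal sketch of the rbd-fuse idea, assuming a pool named 'rbd' and the default ceph.conf path (untested here, as noted above):

$ mkdir -p /mnt/rbdfuse
$ rbd-fuse -c /etc/ceph/ceph.conf -p rbd /mnt/rbdfuse
# each image in the pool appears as a file under /mnt/rbdfuse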
Hi David,
Thanks for the reply.
I'm a little confused by OSD versus PGs in the description of the two
options osd_recovery_max_single_start and osd_recovery_max_active .
The ceph webpage describes osd_recovery_max_active as "The number of active
recovery requests per OSD at one time." It doe
Yehuda says he's fixed several of these bugs in recent code, but if
you're seeing it from a recent dev release, please file a bug!
Likewise if you're on a named release and would like to see a backport. :)
-Greg
Software Engineer #42 @ http://inktank.com | http://ceph.com
On Thu, Apr 24, 2014 at
On 04/24/2014 08:14 PM, Gandalf Corvotempesta wrote:
> During a recovery, I'm hitting the oom-killer for ceph-osd because it's
> using more than 90% of available RAM (8 GB).
>
> How can I decrease the memory footprint during a recovery?
You can reduce the PG count per OSD, for example; it scales down well.
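Besides the PG count, throttling recovery concurrency is another knob sometimes used for this (my suggestion, not something stated above); a sketch using runtime injection:

$ ceph tell osd.* injectargs '--osd-max-backfills 1 --osd-recovery-max-active 1'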
Do you have all of the cluster IPs defined in the hosts file on each OSD
server? As I understand it, the mons do not use a cluster network, only the
OSD servers do.
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Gandalf
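A hedged example of what that might look like on each OSD server, with purely hypothetical hostnames and addresses on the two subnets from the posted conf:

# /etc/hosts (hypothetical entries)
192.168.0.11   osd1           # public network
10.0.0.11      osd1-cluster   # cluster network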
I believe any kernel greater than 3.9 supports format 2 RBDs. I'm sure
someone will correct me if this is a misstatement.
Brad
-Original Message-
From: ceph-users-boun...@lists.ceph.com
[mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Dyweni - Ceph-Users
Sent: Thursday, April 2
Your OSDs shouldn't be crashing during a remap. Although... you might
try to get that one OSD below 96% first; it could have an effect. If
OSDs continue to crash after that, I'd start a new thread about the crashes.
If you have the option to delete some data and reload it later, I'd
do t
Great! I've just confirmed this with 3.12.
However, I also see 'rbd: image test1: WARNING: kernel layering is
EXPERIMENTAL!' FYI
On 2014-04-24 13:07, McNamara, Bradley wrote:
I believe any kernel greater than 3.9 supports format 2 RBDs. I'm
sure someone will correct me if this is a miss
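For completeness, a sketch of creating and mapping a format 2 image like the one above; the name and size are placeholders, and older rbd versions spell the flag '--format 2':

$ rbd create test1 --size 1024 --image-format 2
$ rbd map test1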
Hi,
sorry for the late response. I opened a ceph tracker issue for it (#8202).
Thanks,
Yehuda
On Wed, Apr 16, 2014 at 1:00 AM, wsnote wrote:
> OS: CentOS 6.5
> Version: Ceph 0.67.7 or 0.79
>
> Hello, everyone!
> I have configured federated gateways for several.
> Now I can sync files from mas
Is there a recommended way to copy an RBD image between two different
clusters?
My initial thought was 'rbd export - | ssh "rbd import -"', but I'm not
sure if there's a more efficient way.
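Spelled out, that pipe would look roughly like this (pool/image names and the remote host are placeholders):

$ rbd export rbd/myimage - | ssh remote-admin-node rbd import - rbd/myimage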
I'm going to note that I've seen this on Firefly. I don't think it
necessarily needs a named release, but I can confirm that it seems to be
related to radosgw.
All the best,
~ Christopher
On Thu, Apr 24, 2014 at 10:33 AM, Gregory Farnum wrote:
> Yehuda says he's fixed several of these bugs in
Hi, Yehuda.
It doesn't matter. We have fixed it.
The filename is encoded by url_encode and decoded by urlDecode. There
is a bug when decoding the filename.
At 2014-04-25 03:32:02,"Yehuda Sadeh" wrote:
>Hi,
>
> sorry for the late response. I opened a ceph tracker issue for it (#8202).
>
>
Hi, Yehuda.
It doesn't matter. We have fixed it.
The filename is encoded by url_encode and decoded by url_decode. There
is a bug when decoding the filename.
There is another bug when decoding the filename: when radosgw-agent fails to
decode a filename, the file sync gets stuck and other f
Dear Ceph-devel, ceph-users,
I am currently facing an issue with my Ceph MDS server. The ceph-mds daemon does not
come back up.
I tried running it manually with ceph-mds -i mon01 -d, but it shows that it
gets stuck at a failed assert(session) at line 1303 in mds/journal.cc and aborts.
Can someone shed
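One hedged step that may help narrow it down (my suggestion, not from the original post): re-run with verbose MDS logging and capture the output around the assert, e.g.:

$ ceph-mds -i mon01 -d --debug-mds 20 --debug-ms 1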