Hello,
We use a 3 node cluster with EC 8+2, running Kraken 11.2.0.
The cluster was installed with 11.1.1 and upgraded to 11.2.0.
After a couple of days one OSD stopped and failed to start.
This OSD was recreated from scratch on 11.2.0, but after some time it failed
again.
2017-01-27 11:21:35.333547 7fef0771194
Hello,
I'm quite new to ceph and radosgw. With the Python API, I found calls
for writing objects via the boto API. It's also possible to add metadata
to our objects. But now I have a question: is it possible to select or
search objects via metadata? A little more in detail: I want to store
obje
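For context, a minimal sketch of how such an object write with user metadata typically looks through the RGW S3 endpoint with boto; the endpoint, credentials, bucket and key names below are placeholders, not taken from the thread:

    import boto
    import boto.s3.connection

    # Placeholder endpoint and credentials for an RGW S3 gateway.
    conn = boto.connect_s3(
        aws_access_key_id='ACCESS_KEY',
        aws_secret_access_key='SECRET_KEY',
        host='rgw.example.com',
        is_secure=False,
        calling_format=boto.s3.connection.OrdinaryCallingFormat(),
    )

    bucket = conn.create_bucket('my-bucket')
    key = bucket.new_key('my-object')
    key.set_metadata('color', 'blue')        # sent as the x-amz-meta-color header
    key.set_contents_from_string('payload')  # metadata is stored with the PUT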
> On 30 January 2017 at 10:29, Johann Schwarzmeier wrote:
>
> Hello,
> I'm quite new to ceph and radosgw. With the Python API, I found calls
> for writing objects via the boto API. It's also possible to add metadata
> to our objects. But now I have a question: is it possible to select or
Hello Wido,
That is not good news, but it's what I expected. Thanks for your quick
answer.
Jonny
On 2017-01-30 11:57, Wido den Hollander wrote:
On 30 January 2017 at 10:29, Johann Schwarzmeier wrote:
Hello,
I'm quite new to ceph and radosgw. With the Python API, I found calls
for writing
Dear Marc,
On 28/01/17 23:43, Marc Roos wrote:
Is there a doc that describes all the parameters that are published by
collectd-ceph?
The best I've found is the Red Hat documentation of the performance
counters (which are what collectd-ceph is querying):
https://access.redhat.com/documentati
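If you want to see the raw counters that collectd-ceph reads, they can be pulled straight from a daemon's admin socket. A minimal sketch; the daemon name is a placeholder and the counter names vary by daemon type and Ceph version:

    import json
    import subprocess

    # Dump the performance counters from one daemon's admin socket
    # (osd.0 is a placeholder; run this on the host that owns the socket).
    raw = subprocess.check_output(["ceph", "daemon", "osd.0", "perf", "dump"])
    counters = json.loads(raw.decode("utf-8"))

    # Print each section and a few of its counter names.
    for section, values in sorted(counters.items()):
        print(section, sorted(values)[:5])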
On Mon, Jan 30, 2017 at 7:09 AM, Burkhard Linke
wrote:
> Hi,
>
> On 01/26/2017 03:34 PM, John Spray wrote:
>>
>> On Thu, Jan 26, 2017 at 8:18 AM, Burkhard Linke
>> wrote:
>>>
>>> Hi,
>>>
>>> we are running two MDS servers in an active/standby-replay setup. Recently
>>> we had to disconne
Matthew,
Very good documentation on performance counters.
Thank you for sharing with us.
Regards,
André
----- Original Message -----
> From: "Matthew Vernon"
> To: "Marc Roos" , "ceph-users"
>
> Sent: Monday, January 30, 2017 9:18:55
> Subject: Re: [ceph-users] Ceph monitor
On 01/30/2017 06:11 AM, Johann Schwarzmeier wrote:
Hello Wido,
That is not good news, but it's what I expected. Thanks for your quick
answer.
Jonny
On 2017-01-30 11:57, Wido den Hollander wrote:
On 30 January 2017 at 10:29, Johann Schwarzmeier wrote:
Hello,
I’m quite new to ceph and ra
I have been playing with the Python version of librados and am getting
startling answers from get_stats() on a pool. I am seeing 'num_objects'
as zero at a point where I am expecting one. But if I loop, waiting for
my expected one, I will get it in a second or so.
I think I created this object
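A minimal sketch of the pattern being described, assuming a pool named 'mypool' and a default conf path (both placeholders):

    import time

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')                   # placeholder pool

    ioctx.write_full('my-object', b'hello')

    # Pool stats lag behind: poll until num_objects catches up.
    while ioctx.get_stats()['num_objects'] < 1:
        time.sleep(0.1)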
On Mon, Jan 30, 2017 at 4:06 PM, Kent Borg wrote:
> I have been playing with the Python version of librados and am getting
> startling answers from get_stats() on a pool. I am seeing 'num_objects' as
> zero at a point where I am expecting one. But if I loop, waiting for my
> expected one, I will g
On 01/30/2017 11:20 AM, John Spray wrote:
Pool stats are not synchronous -- when you call get_stats it is not
querying every OSD in the system before giving you a response.
Ah!
Is an object's existence and value synchronous?
Thanks,
-kb
On Mon, Jan 30, 2017 at 4:22 PM, Kent Borg wrote:
> On 01/30/2017 11:20 AM, John Spray wrote:
>>
>> Pool stats are not synchronous -- when you call get_stats it is not
>> querying every OSD in the system before giving you a response.
>
> Ah!
>
> Is an object's existence and value synchronous?
Yep
On 01/30/2017 11:32 AM, John Spray wrote:
Is an object's existence and value synchronous?
Yep.
Makes sense, as hashing is at the core of this design.
Thanks for the super fast response!
-kb
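To make the distinction concrete, a minimal sketch with the same placeholder names as above: a read issued right after a write sees the new value, while the pool stats may still be stale.

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')  # placeholder path
    cluster.connect()
    ioctx = cluster.open_ioctx('mypool')                   # placeholder pool

    ioctx.write_full('obj', b'v1')
    assert ioctx.read('obj') == b'v1'  # object data is read-after-write consistent

    print(ioctx.get_stats()['num_objects'])  # may still lag behind the write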
Dear list,
I'm having some big problems with my setup.
I was trying to increase the global capacity by replacing some OSDs with
bigger ones. I swapped them without waiting for the rebalance process to
finish, thinking the replicas were stored in other buckets, but I found a
lot of PGs incomplete, so replicas
On Sun, Jan 29, 2017 at 6:40 AM, Muthusamy Muthiah
wrote:
> Hi All,
>
> Also tried EC profile 3+1 on a 5 node cluster with bluestore enabled. When
> an OSD is down the cluster goes to ERROR state even when the cluster is n+1.
> No recovery happening.
>
> health HEALTH_ERR
> 75 pgs are
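For anyone wanting to reproduce the setup being tested, a sketch of creating a k=3, m=1 profile and a pool on top of it; the profile and pool names and the PG counts are placeholders, wrapped in Python to match the other sketches:

    import subprocess

    # Create a k=3, m=1 erasure-code profile and a pool that uses it
    # (names and pg counts are placeholders; use a test cluster).
    subprocess.check_call([
        "ceph", "osd", "erasure-code-profile", "set", "ec-3-1",
        "k=3", "m=1"])
    subprocess.check_call([
        "ceph", "osd", "pool", "create", "ecpool",
        "32", "32", "erasure", "ec-3-1"])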
You might also check out "ceph osd tree" and crush dump and make sure
they look the way you expect.
On Mon, Jan 30, 2017 at 1:23 PM, Gregory Farnum wrote:
> On Sun, Jan 29, 2017 at 6:40 AM, Muthusamy Muthiah
> wrote:
>> Hi All,
>>
>> Also tried EC profile 3+1 on 5 node cluster with bluestore ena
First off, please send the following:
* ceph -s
* ceph osd tree
* ceph pg dump
and
* what you actually did, with the exact commands.
Regards,
On Tue, Jan 31, 2017 at 6:10 AM, José M. Martín wrote:
> Dear list,
>
> I'm having some big problems with my setup.
>
> I was trying to increase the global