Our Ceph cluster entered the HEALTH_ERR state last week.
We're running Infernalis, and this is the first time I've seen
it in that state. Even when OSD instances dropped off, we've
only seen HEALTH_WARN. The output of `ceph status` looks
like this:
[root@r01u02-b ~]# ceph status
cluster ed62b3b9
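In case it is useful to others, these are the stock ceph CLI commands I would run next to see which checks and PGs are actually driving the HEALTH_ERR (nothing below is specific to the cluster above):

# show which checks are failing and which PGs/OSDs they point at
ceph health detail

# same status as above, but machine-readable
ceph status --format json-pretty

# list PGs that are stuck and therefore likely behind the error
ceph pg dump_stuck inactive
ceph pg dump_stuck unclean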
Would be interested as well.
- Kevin
2018-02-04 19:00 GMT+01:00 Yoann Moulin :
> Hello,
>
> What is the best kernel for Luminous on Ubuntu 16.04?
>
> Is linux-image-virtual-lts-xenial still the best one? Or will
> linux-virtual-hwe-16.04 offer some improvement?
>
> Thanks,
>
> --
> Yoann Moulin
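Not an authoritative answer, but for comparison, checking the running kernel and switching between the two packages named above would look roughly like this:

# what is running now
uname -r

# stay on the stock LTS kernel
apt-get install linux-image-virtual-lts-xenial

# or move to the HWE kernel series and reboot into it
apt-get install linux-virtual-hwe-16.04
reboot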
Greetings ceph-users. I have been trying to build a test cluster in a KVM
environment - something I have done successfully before, but this time
I'm running into an issue I can't seem to get past. My Internet searches have
shown instances of this from other users that involved either ownership
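The message is cut off here, but assuming the issue is the common ownership one (OSD data dirs and journals owned by root instead of the ceph user), a first check would look something like this - the OSD id and paths are placeholders:

# hypothetical OSD id 0 - adjust path and device to your setup
ls -ln /var/lib/ceph/osd/ceph-0
ls -ln /var/lib/ceph/osd/ceph-0/journal

# if things are owned by root, hand them back to the ceph user and retry
chown -R ceph:ceph /var/lib/ceph/osd/ceph-0
systemctl restart ceph-osd@0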
Hello Everyone,
I have a Ceph test setup with 3 mons, 3 RGWs, 5 OSD nodes and 22 OSDs. RadosGW
instances run on the monitor nodes and they are behind a load balancer. I run
the RGW instances in full debug mode (20/20 for rgw and 20/20 for civetweb).
I can easily access RGW via the S3 API with any
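For context, the debug settings and a quick smoke test against the load balancer look roughly like this; the section name, port and hostname are placeholders, not values from the setup above:

# ceph.conf on the RGW nodes
[client.rgw.gateway]
    debug rgw = 20
    debug civetweb = 20

# simple check through the load balancer with the AWS CLI
aws s3 ls --endpoint-url http://rgw-lb.example.com:7480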
Hi,
On 14.12.2017 at 15:02, Sage Weil wrote:
> On Thu, 14 Dec 2017, Stefan Priebe - Profihost AG wrote:
>>
>> On 14.12.2017 at 13:22, Sage Weil wrote:
>>> On Thu, 14 Dec 2017, Stefan Priebe - Profihost AG wrote:
Hello,
On 21.11.2017 at 11:06, Stefan Priebe - Profihost AG wrote:
>
The smallest scope is per-bucket.
Daniel
On 02/06/2018 02:24 PM, Robert Stanford wrote:
Hello Ceph users. Is object lifecycle (currently expiration) for rgw
implementable on a per-object basis, or is the smallest scope the bucket?
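For what it's worth, the closest you can get to a narrower scope is limiting a bucket rule by prefix, assuming your RGW build honours the Prefix field in lifecycle rules - something like this (endpoint and bucket name are made up):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "expire-tmp-objects",
      "Status": "Enabled",
      "Prefix": "tmp/",
      "Expiration": { "Days": 7 }
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration \
    --endpoint-url http://rgw.example.com:7480 \
    --bucket mybucket \
    --lifecycle-configuration file://lifecycle.json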
Hi,
On Tue, Feb 6, 2018 at 10:04 AM, Ingo Reimann wrote:
> Just to add -
>
> We wrote a little wrapper that reads the output of "radosgw-admin usage
> show" and stops when the loop starts. When we add up all the entries
> ourselves, the result is correct. Moreover - the duplicate timestamp, that
>
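Out of interest, a rough sketch of such a wrapper (not Ingo's actual script; the JSON field names below are from memory and may differ between releases, so treat it as a starting point only):

#!/bin/bash
# Sum the per-category counters ourselves from the JSON output of
# "radosgw-admin usage show" instead of relying on the summary section.
uid="$1"

radosgw-admin usage show --uid="$uid" --format json |
jq '[.entries[].buckets[].categories[]]
    | { ops:            (map(.ops)            | add),
        successful_ops: (map(.successful_ops) | add),
        bytes_sent:     (map(.bytes_sent)     | add),
        bytes_received: (map(.bytes_received) | add) }'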
Hey Cephers,
This is just a friendly reminder that the next Ceph Developer Monthly
meeting is coming up:
http://wiki.ceph.com/Planning
If you have work that you're doing that is feature work, significant
backports, or anything you would like to discuss with the core team,
please add it to the
Hello Kai,
we have been using RBDs as part of Pacemaker resource groups for 2 years on
Hammer with no problems.
The resource is always configured in active/passive mode because the
filesystem is not cluster aware. Therefore, during switchover the RBDs are
unmapped cleanly on the active node before
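For the archives, the switchover boils down to roughly the following sequence (the exact agents and ordering depend on the resource group; pool, image and mount point names here are placeholders):

# node handing over the resource group (currently active)
umount /mnt/data
rbd unmap /dev/rbd/rbd/vol01

# node taking over
rbd map rbd/vol01 --id admin
mount /dev/rbd/rbd/vol01 /mnt/data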