Hi there, we have been using Ceph for a few years now, and only now have I
noticed that we have been using the same name for all RGW hosts. As a
result, when you run ceph -s you see:
rgw: 1 daemon active (..)
despite having more than 10 RGW hosts.
* What are the side effects of doing this? Is this a no-no?
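For illustration, a minimal ceph.conf sketch (the host names gw1/gw2, the
frontend and the port are placeholders) in which every gateway gets its own
client name, so each instance should show up as a separate daemon in ceph -s:

[client.rgw.gw1]
    host = gw1
    rgw frontends = beast port=7480

[client.rgw.gw2]
    host = gw2
    rgw frontends = beast port=7480

# each instance is then started under its own name, e.g.
# systemctl enable --now ceph-radosgw@rgw.gw1
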
Dear Cephalopodians,
I have two questions about RBD mirroring.
1) I cannot get it to work - my setup is:
- One cluster holding the live RBD volumes and snapshots, in pool "rbd",
  cluster name "ceph", running latest Mimic.
I ran "rbd mirror pool enable rbd pool" on that cluster and [...]
Hello Mike,
as described, I set all the settings.
Unfortunately it also crashed with these settings :-(
Regards
Marc
[Tue Sep 10 12:25:56 2019] Btrfs loaded, crc32c=crc32c-intel
[Tue Sep 10 12:25:57 2019] EXT4-fs (dm-0): mounted filesystem with ordered data mode. Opts: (null)
[Tue Sep 10 12:25: [...]
Hello Mike,
On 03.09.19 at 04:41, Mike Christie wrote:
> On 09/02/2019 06:20 AM, Marc Schöchlin wrote:
>> Hello Mike,
>>
>> I am having a quick look at this on vacation because my coworker
>> reports daily and continuous crashes ;-)
>> Any updates here (I am aware that this is not very easy to f[...]
Hi Paul, all,
Thanks! But I can't seem to find out how to debug the purge queue. When I
check the purge queue, I get these numbers:
[root@mds02 ~]# ceph daemon mds.mds02 perf dump | grep -E 'purge|pq'
    "purge_queue": {
        "pq_executing_ops": 0,
        "pq_executing": 0,
        "pq_execut[...]
Hi!
After an MDS scrub I got this error:
1 MDSs report damaged metadata
# ceph tell mds.0 damage ls
[
    {
        "damage_type": "backtrace",
        "id": 712325338,
        "ino": 1099526730308,
        "path": "/erant/smb/public/docs/3. Zvity/1. Prodazhi/~$Data-center 2019.08.xlsx"
    },
    { [...]
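In case it is useful, a sketch of the kind of commands usually suggested for
"backtrace" damage (the path and damage id are taken from the output above,
the flags are examples; the exact command form differs between releases, so
check the documentation for your version before running a repair):

# Nautilus and later: ask the MDS to scrub and repair the affected subtree:
ceph tell mds.0 scrub start '/erant/smb/public/docs' recursive,repair

# Older releases use the admin-socket variant instead:
ceph daemon mds.<name> scrub_path '/erant/smb/public/docs' recursive repair

# Once the backtrace has been rewritten, clear the damage entry:
ceph tell mds.0 damage rm 712325338
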
I run Ceph on both a home server and a personal offsite backup server
(both single-host setups). It's definitely feasible and comes with a lot
of advantages over traditional RAID, ZFS and the like. The main
disadvantages are performance overhead and resource consumption.
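As a concrete illustration, the one CRUSH-related adjustment a single-host
cluster typically needs, since the default rule wants to place replicas on
distinct hosts (a sketch; the rule and pool names are placeholders):

# In ceph.conf, [global], before the cluster is bootstrapped:
#   osd crush chooseleaf type = 0     # replicate across OSDs, not hosts

# Or, on an existing cluster, create a rule that chooses OSDs and assign it:
ceph osd crush rule create-replicated single-host default osd
ceph osd pool set rbd crush_rule single-host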