way to get cluster running or at least get data from OSDs?
I would appreciate any help.
Thank you
--
Best regards,
Petr
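Assuming the OSD data itself is still intact on disk, a rough sketch of the
usual starting points for getting data back (the OSD id, PG id and paths below
are only placeholders):

  # Re-activate OSDs whose LVM volumes exist but whose daemons did not start
  ceph-volume lvm activate --all

  # With the OSD daemon stopped, individual PGs can be exported from its store
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
      --pgid 1.0 --op export --file /backup/pg-1.0.export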
Hello Konstantin,
Wednesday, June 16, 2021, 1:50:55 PM, you wrote:
> Hi,
>> On 16 Jun 2021, at 01:33, Petr wrote:
>>
>> I've upgraded my Ubuntu server from 18.04.5 LTS to Ubuntu 20.04.2 LTS via
>> 'do-release-upgrade',
>> during that proc
I created a CephFS using the mgr dashboard, which created two pools:
cephfs.fs.meta and cephfs.fs.data.
We are using custom provisioning for user-defined volumes (users provide YAML
manifests with a definition of what they want), which creates dedicated data
pools for them, so cephfs.fs.data is never used.
flags are identical.
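For context, the per-volume data pool provisioning mentioned above typically
boils down to something like this (pool, filesystem and path names are made up):

  # Create a dedicated data pool and attach it to the filesystem
  ceph osd pool create volume-foo-data
  ceph fs add_data_pool fs volume-foo-data

  # Pin a directory to the new pool via its file layout
  setfattr -n ceph.dir.layout.pool -v volume-foo-data /mnt/cephfs/volumes/foo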
Could someone please advise me why the dockerized MDS is stuck as a standby?
Maybe some config values are missing or something?
Best regards,
Petr
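For comparison between the dockerized and non-dockerized setups, the state can
be inspected with something like the following (a diagnostic sketch, not a fix):

  # Which MDS daemons the cluster sees and which state they are in
  ceph fs status
  ceph fs dump | grep -E 'standby|up:'

  # What cephadm thinks is deployed and running
  ceph orch ps --daemon-type mds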
, problems started during the migration to cephadm
(which was done after migrating everything to Pacific).
It only occurs when using a dockerized MDS; on non-dockerized MDS nodes, also
running Pacific, everything runs fine.
Petr
> On 4 Oct 2021, at 12:43, 胡 玮文 wrote:
>
> Hi Petr,
>
> Pl
hat you are facing the same issue.
>
> [1]:
> https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/KQ5A5OWRIUEOJBC7VILBGDIKPQGJQIWN/
>
>
>> On 4 Oct 2021, at 19:0
Hello,
I wanted to try out (in a lab Ceph setup) what exactly is going to happen
when part of the data on an OSD disk gets corrupted. I created a simple test
where I was going through the block device data until I found something
that resembled user data (using dd and hexdump) (/dev/sdd is a block
device
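Roughly, the approach was along these lines (a sketch only; the offset is
arbitrary and the write is destructive, so only for a throwaway lab OSD):

  # Dump a region of the block device and look for recognizable user data
  dd if=/dev/sdd bs=1M skip=4096 count=1 2>/dev/null | hexdump -C | less

  # Overwrite a few bytes at that offset to simulate corruption
  printf 'XXXX' | dd of=/dev/sdd bs=1 seek=$((4096*1024*1024)) conv=notrunc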
Hello,
No, I don't have osd_scrub_auto_repair enabled; interestingly, about a week
after forgetting about this, an error manifested:
[ERR] OSD_SCRUB_ERRORS: 1 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent
pg 4.1d is active+clean+inconsistent, acting [4,2]
which could be
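For completeness, the usual way to inspect and then repair such a PG (using pg
4.1d as reported above):

  # List which object(s) in the PG failed the deep scrub
  rados list-inconsistent-obj 4.1d --format=json-pretty

  # Ask Ceph to repair the PG from the healthy replica(s)
  ceph pg repair 4.1d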
Most likely it wasn't; the ceph help or documentation is not very clear about
this:

  osd deep-scrub <who>    initiate deep scrub on osd <who>,
                          or use <all|any> to deep scrub all

It doesn't say anything like "initiate dee
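For what it's worth, the OSD-level and PG-level commands operate at different
scopes, e.g.:

  # Deep-scrub every PG whose primary is on osd.4
  ceph osd deep-scrub 4

  # Deep-scrub a single placement group
  ceph pg deep-scrub 4.1d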
Hello
In https://docs.ceph.com/en/latest/cephfs/client-auth/ we can find that

  ceph fs authorize cephfs_a client.foo / r /bar rw

results in

  client.foo
    key: *key*
    caps: [mds] allow r, allow rw path=/bar
    caps: [mon] allow r
    caps: [osd] allow rw tag cephfs data=cephfs_a
Wha
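For reference, the documented example can be reproduced and the resulting caps
inspected like this (client name, filesystem and paths as in the docs snippet):

  ceph fs authorize cephfs_a client.foo / r /bar rw
  ceph auth get client.foo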
Hello,
My goal is to set up multisite RGW with two separate Ceph clusters in separate
datacenters, where the RGW data is replicated. I created a lab for this
purpose in both locations (with the latest Reef Ceph installed using cephadm)
and tried to follow this guide: https://docs.ceph.com/en/reef/r
.
Has anybody seen similar issues before?
Best regards,
Petr Belyaev
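For anyone comparing notes, the core of that multisite guide boils down to
roughly the following (realm/zonegroup/zone names, endpoints and keys are
placeholders, and a system user providing the keys is assumed to exist):

  # On the primary cluster: realm, zonegroup and master zone
  radosgw-admin realm create --rgw-realm=myrealm --default
  radosgw-admin zonegroup create --rgw-zonegroup=mygroup \
      --endpoints=http://rgw1.dc1.example:80 --master --default
  radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=dc1 \
      --endpoints=http://rgw1.dc1.example:80 --master --default \
      --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
  radosgw-admin period update --commit

  # On the secondary cluster: pull the realm and create the secondary zone
  radosgw-admin realm pull --url=http://rgw1.dc1.example:80 \
      --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
  radosgw-admin zone create --rgw-zonegroup=mygroup --rgw-zone=dc2 \
      --endpoints=http://rgw1.dc2.example:80 \
      --access-key=SYSTEM_ACCESS_KEY --secret=SYSTEM_SECRET_KEY
  radosgw-admin period update --commit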
We are evaluating the pros and cons of running PostgreSQL backed by Ceph. We
know that running PostgreSQL on dedicated physical hardware is highly
recommended, but we've got our reasons.
So to the question: what could happen if we switch fsync to off on PostgreSQL
backed by Ceph?
The performance increase is huge, w
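For concreteness, the setting being discussed (only a sketch of how it is
toggled; whether doing so is safe on Ceph is exactly the open question here):

  # Current values (fsync is on by default)
  psql -c "SHOW fsync;"
  psql -c "SHOW synchronous_commit;"

  # Turning fsync off cluster-wide; takes effect on reload
  psql -c "ALTER SYSTEM SET fsync = off;"
  psql -c "SELECT pg_reload_conf();"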