Hello,
maybe I missed the announcement, but why is the documentation for older
Ceph versions no longer accessible on docs.ceph.com?
Best,
Martin
en/luminous/" will not work any more.
Is it planned to make the documentation for older versions available
again through docs.ceph.com?
Best,
Martin
On Sat, Nov 21, 2020 at 2:11 AM Dan Mick wrote:
>
> On 11/14/2020 10:56 AM, Martin Palma wrote:
> > Hello,
> >
> > maybe I missed the announcement, but why is the documentation for older
> > Ceph versions no longer accessible on docs.ceph.com?
Hello,
what is the currently preferred method, in terms of stability and
performance, for exporting a CephFS directory with Samba?
- mounting the CephFS directory locally and exporting it via Samba, or
- using Samba's "vfs_ceph" module?
Best,
Martin
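A minimal smb.conf sketch for the vfs_ceph approach might look like this (the share name, path, and CephX user are hypothetical):

[cephfs]                                 ; hypothetical share name
    path = /shares/data                  ; path inside CephFS, not a locally mounted path
    vfs objects = ceph
    ceph:config_file = /etc/ceph/ceph.conf
    ceph:user_id = samba                 ; hypothetical CephX user with access to the filesystem
    read only = no
    kernel share modes = no              ; needed because the files are not on a local kernel filesystem

The locally mounted variant instead exports a normal local path and needs no "vfs objects" line.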
Hi, what is the maximum number of files per directory? I could not find
the answer in the docs.
Best,
Martin
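If the limit in question is the MDS per-directory-fragment cap (mds_bal_fragment_size_max, 100000 entries by default), a sketch of checking it via the admin socket, with a hypothetical daemon name:

# Run on the host where the MDS daemon lives; the daemon name is hypothetical
ceph daemon mds.mds01 config get mds_bal_fragment_size_max
# With directory fragmentation enabled, large directories are split into
# multiple fragments, so they can hold more entries than this per-fragment cap.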
Hello,
after an unexpected power outage, our production cluster has 5 PGs that are
inactive and incomplete. The OSDs on which these 5 PGs are located all
show "stuck requests are blocked":
Reduced data availability: 5 pgs inactive, 5 pgs incomplete
98 stuck requests are blocked > 4096 sec. Implicated os
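A sketch of the usual first diagnostic steps for inactive/incomplete PGs (the PG id below is hypothetical):

# Show the affected PGs and the implicated OSDs
ceph health detail
# List the PGs stuck inactive
ceph pg dump_stuck inactive
# Query one incomplete PG for its peering state and any blocking/down OSDs
ceph pg 1.2f query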
> Are the OSDs online? Or do they refuse to boot?
Yes. They are up and running and not marked as down or out of the cluster.
> Can you list the data with ceph-objectstore-tool on these OSDs?
If you mean the "list" operation on the PG: yes, it works and gives output, for example:
$ ceph-objectstore-tool --data-p
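For completeness, a sketch of the full invocation (OSD id, path, and PG id are hypothetical; the OSD daemon must be stopped while the tool runs):

systemctl stop ceph-osd@81
# List the objects stored for that PG on this OSD
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-81 --pgid 1.2f --op list
systemctl start ceph-osd@81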
Yes, but that didn't help. After some time they show blocked requests again
and the PGs remain inactive and incomplete.
On Sat, 15 Aug 2020 at 16:58, wrote:
> Did you try to restart the said OSDs?
>
>
>
> Hth
>
> Mehmet
>
>
>
> On 12 August 2020 at 21:07:55 CEST, ... wrote:
> clude tunables (because greatly raising choose_total_tries, e.g. to 200, may be
> the solution to your problem):
> ceph osd crush dump | jq '[.rules, .tunables]'
>
> Peter
>
> On 8/16/20 1:18 AM, Martin Palma wrote:
> > Yes, but that didn’t help. After some time they
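A sketch of how the choose_total_tries tunable suggested above could be raised, assuming the usual export/decompile/edit/recompile cycle for the CRUSH map:

# Export and decompile the current CRUSH map
ceph osd getcrushmap -o crush.bin
crushtool -d crush.bin -o crush.txt
# Edit crush.txt and set:  tunable choose_total_tries 200
# Recompile the edited map and inject it back into the cluster
crushtool -c crush.txt -o crush-new.bin
ceph osd setcrushmap -i crush-new.bin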
Here is the output with all OSDs up and running.
ceph -s: https://pastebin.com/5tMf12Lm
ceph health detail: https://pastebin.com/avDhcJt0
ceph osd tree: https://pastebin.com/XEB0eUbk
ceph osd pool ls detail: https://pastebin.com/ShSdmM5a
On Mon, Aug 17, 2020 at 9:38 AM Martin Palma wrote:
>
story_les_bound". We tried to set
"osd_find_best_info_ignore_history_les = true", but without success;
the OSDs remain stuck in a peering loop.
On Mon, Aug 17, 2020 at 9:53 AM Martin Palma wrote:
>
> Here is the output with all OSD up and running.
>
> ceph -s: https://pastebi
If any Ceph consultants are reading this, please feel free to contact me
off list. We are looking for someone who can help us; of course we will
pay.
On Mon, Aug 17, 2020 at 12:50 PM Martin Palma wrote:
>
> After doing some research I suspect the problem is that during the
> cluster was ba
> > e PGs (on a size 1 pool). I haven't tried any of
> > that, but it could be worth a try. Apparently it would only work if the
> > affected PGs have 0 objects, but that seems to be the case, right?
> >
> > Regards,
> > Eugen
> >
Dan van der Ster wrote:
>
> Did you already mark osd.81 as lost?
>
> AFAIU you need to `ceph osd lost 81`, and *then* you can try the
> osd_find_best_info_ignore_history_les option.
>
> -- dan
>
>
> On Thu, Aug 20, 2020 at 11:31 AM Martin Palma wrote:
> >
>
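A hedged sketch of the sequence Dan describes, assuming osd.81 really is unrecoverable and the override is applied only temporarily to the surviving OSDs that host the incomplete PGs:

# Declare the unrecoverable OSD lost so peering can stop waiting for it
ceph osd lost 81 --yes-i-really-mean-it
# On the surviving OSDs holding the incomplete PGs, temporarily set
#   osd_find_best_info_ignore_history_les = true
# (e.g. in the [osd] section of ceph.conf), restart those OSDs, and revert
# the setting once the PGs have peered; osd.42 below is hypothetical.
systemctl restart ceph-osd@42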
output. So we marked that PG on that OSD as complete. This
solved the inactive/incomplete PG for that pool.
The other PGs are from our main CephFS pool, and we fear that by doing
the above we could lose access to the whole pool and its data.
On Thu, Aug 20, 2020 at 11:49 AM Martin Palma wrote
pool and the whole pool.
On Thu, Aug 20, 2020 at 11:55 AM Martin Palma wrote:
>
> On one pool, which was only a test pool, we investigated both OSDs
> which host the inactive and incomplete PG with the following command:
>
> % ceph-objectstore-tool --data-path /var/lib/ceph/osd
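A sketch of the mark-complete step mentioned above (OSD path and PG id are hypothetical; the OSD must be stopped, and this can discard data that only exists on other copies, so it is a last resort):

systemctl stop ceph-osd@81
# Declare this OSD's copy of the PG authoritative and complete
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-81 --pgid 1.2f --op mark-complete
systemctl start ceph-osd@81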
To set these settings at runtime I use the following commands from
my admin node:
ceph tell osd.* injectargs '--osd-max-backfills 1'
ceph tell osd.* injectargs '--osd-recovery-max-active 1'
ceph tell osd.* injectargs '--osd-op-queue-cut-off high'
Right?
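On releases with the centralized configuration store (Mimic and later), roughly the same can be persisted with ceph config set; a sketch:

# Store the values in the monitors' config database instead of injecting them per daemon
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1
# osd_op_queue_cut_off may only take effect after the OSDs are restarted
ceph config set osd osd_op_queue_cut_off high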
On Wed, Aug 19, 2020 at 11:08 AM Stefan
Thanks for the clarification. And when raising the number of PGs, should
we also set "noscrub" and "nodeep-scrub" during the data movement? What
is the recommendation here?
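For reference, those are cluster-wide OSD flags; a sketch of setting and clearing them:

# Pause scrubbing while the data movement runs
ceph osd set noscrub
ceph osd set nodeep-scrub
# Re-enable scrubbing afterwards
ceph osd unset noscrub
ceph osd unset nodeep-scrub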
On Fri, Aug 28, 2020 at 10:25 AM Stefan Kooman wrote:
>
> On 2020-08-28 09:36, Martin Palma wrote:
> > To
Hi all,
today we observed that, all of a sudden, our standby-replay metadata
server started continuously writing the following log messages:
2020-02-13 11:56:50.216102 7fd2ad229700 1 heartbeat_map is_healthy
'MDSRank' had timed out after 15
2020-02-13 11:56:50.287699 7fd2ad229700 0 mds.beacon.dcucmds401
Skipping
Hi,
is it possible to run the MDS on a newer version than the monitor nodes?
We run the monitor nodes on 12.2.10 and would like to upgrade the MDS to
12.2.13; is this possible?
Best,
Martin
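A quick way to check which release each daemon is actually running (available since Luminous); a sketch:

# Summarize the running versions of all mon, mgr, osd and mds daemons
ceph versions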
Hi Patrick,
we performed a minor upgrade to 12.2.13, which resolved the issue.
We think it was the following bug:
https://tracker.ceph.com/issues/37723
Best,
Martin
On Thu, Feb 20, 2020 at 5:16 AM Patrick Donnelly wrote:
>
> Hi Martin,
>
> On Thu, Feb 13, 2020 at 4:10 AM
Yes, in the end we are in the process of doing it, but we first
upgraded the MDSs, which worked fine and solved the problem we had
with CephFS.
Best,
Martin
On Wed, Feb 26, 2020 at 9:34 AM Konstantin Shalygin wrote:
>
> On 2/26/20 12:49 AM, Martin Palma wrote:
> > is it possible t