Hi,
I keep getting scrub errors in my index pool and log pool, and I have to
repair them every time.
HEALTH_ERR 2 scrub errors; Possible data damage: 1 pg inconsistent
[ERR] OSD_SCRUB_ERRORS: 2 scrub errors
[ERR] PG_DAMAGED: Possible data damage: 1 pg inconsistent
pg 20.19 is active+clean+inconsistent
These are the only log entries I get after the scrub is done:
2021-04-01T11:37:43.559539+0700 osd.39 (osd.39) 50 : cluster [DBG] 20.19 repair starts
2021-04-01T11:37:43.889909+0700 osd.39 (osd.39) 51 : cluster [ERR] 20.19 soid 20:990258ea:::.dir.9213182a-14ba-48ad-bde9-289a1c0c0de8.17263260.1.237:head :
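A minimal sketch of the inspect-and-repair sequence for an inconsistent PG (the
PG id just mirrors the health output above; repair only clears the errors until
the underlying cause is found):
# show which PGs are inconsistent and why
ceph health detail
# list the objects the deep-scrub flagged in this PG
rados list-inconsistent-obj 20.19 --format=json-pretty
# ask the primary OSD to repair the PG; progress shows up in its log
ceph pg repair 20.19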
We're glad to announce the first release of the Pacific v16.2.0 stable
series. There have been a lot of changes across components from the
previous Ceph releases, and we advise everyone to go through the release
and upgrade notes carefully.
Major Changes from Octopus
--
Ge
Hi,
Is there any way to log the x-amz-request-id along with the request in
the rgw logs? We're using beast and don't see an option in the
configuration documentation to add headers to the request lines. We
use centralized logging and would like to be able to search all layers
of the request path (
Hi David,
I don't have a good option for Octopus (other than the ops log), but you can do
that (and more) in Pacific using Lua scripting on the RGW:
https://docs.ceph.com/en/pacific/radosgw/lua-scripting/
Yuval
On Thu, Apr 1, 2021 at 7:11 PM David Orman wrote:
> Hi,
>
> Is there any way to log th
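A rough sketch of that Lua approach on Pacific (RGWDebugLog and the Request
fields are taken from the lua-scripting docs linked above; verify the exact
names against your release):
# write a tiny script that logs the request/transaction ids
cat > /tmp/log_reqid.lua <<'EOF'
-- goes to the RGW debug log, so debug_rgw may need to be raised to see it
RGWDebugLog("request id: " .. Request.Id .. " txid: " .. Request.TransactionId)
EOF
# install it for the postRequest context
radosgw-admin script put --infile=/tmp/log_reqid.lua --context=postRequest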
On 3/31/21 9:44 AM, David Galloway wrote:
>
> On 3/31/21 5:24 AM, Stefan Kooman wrote:
>> On 3/30/21 10:28 PM, David Galloway wrote:
>>> This is the 19th update to the Ceph Nautilus release series. This is a
>>> hotfix release to prevent daemons from binding to loopback network
>>> interfaces.
Hi! I have a single-machine Ceph installation, and after trying to upgrade to
Pacific the upgrade is stuck with:
ceph -s
cluster:
id: d9f4c810-8270-11eb-97a7-faa3b09dcf67
health: HEALTH_WARN
Upgrade: Need standby mgr daemon
services:
mon: 1 daemons, quorum sev.spacescience.ro (age 3w)
mgr: sev
I think what it’s saying is that it wants more than one mgr daemon to be
provisioned, so that it can fail over when the primary is restarted. I suspect
you would then run into the same thing with the mon. All sorts of things tend
to crop up on a cluster this minimal.
> On Apr 1, 2021, at
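If it helps, the usual way to satisfy that check under cephadm is to ask the
orchestrator for a second mgr, which in practice needs a second host to place
it on (a sketch; the placement hostnames are only illustrative):
# ask cephadm to run two mgr daemons
ceph orch apply mgr 2
# or pin the placement explicitly
ceph orch apply mgr --placement="host1 host2"
# check what got deployed
ceph orch ps --daemon-type mgr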
Hi Folks,
A Red Hat SA (Mustafa Aydin) suggested, a while back, a concise recipe for
relaying the ops log to syslog, basically a script executing
socat unix-connect:/var/run/ceph/opslog,reuseaddr UNIX-CLIENT:/dev/log &
I haven't experimented with it.
Matt
On Thu, Apr 1, 2021 at 12:22 PM Yuval
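For anyone who wants to try it, a rough sketch of the pieces involved (these
are the standard RGW ops-log options; adjust the config target to match your
RGW daemons, and the socket path simply follows the command above):
# enable the ops log and point it at a unix socket, then restart the RGWs
ceph config set client.rgw rgw_enable_ops_log true
ceph config set client.rgw rgw_ops_log_socket_path /var/run/ceph/opslog
# relay whatever the RGW writes to that socket into syslog
socat unix-connect:/var/run/ceph/opslog,reuseaddr UNIX-CLIENT:/dev/log &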
On 4/1/21 8:19 PM, Anthony D'Atri wrote:
> I think what it’s saying is that it wants more than one mgr daemon to be
> provisioned, so that it can fail over
Unfortunately that is not allowed, as the port usage clashes ...
I found out the name of the daemon by grepping the ps output (it would be
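For what it's worth, a couple of ways to get the daemon name without grepping
ps (assuming a cephadm deployment like the one shown earlier in the thread):
# orchestrator view of the deployed daemons and their names
ceph orch ps
# or list what cephadm has deployed on the local host
cephadm ls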
Hello,
thanks for a very interesting new Ceph release.
Are there any plans to build for Debian bullseye as well? It has been in
"hard freeze" since 2021-03-12, and at the moment it ships a Nautilus
release that will be EOL by the time Debian bullseye becomes the official
stable release. That will be a pain for Debian
On 4/1/21 6:56 PM, David Galloway wrote:
> They will be built and pushed hopefully today. We had a bug in our CI
> after updating our builders to Ubuntu Focal.
> Just pushed.
Great, thanks for the heads up!
Gr. Stefan
On 3/30/21 12:48 PM, Mike Perez wrote:
Hi everyone,
I didn't get enough responses on the previous Doodle to schedule a
meeting. I'm wondering if people are OK with the previous PDF I
released or if there's interest in the community to develop better
survey results?
https://ceph.io/community/cep
Hi,
This is awesome news! =).
I heard Crimson mentioned before in connection with Pacific - does anybody
know what the current state of things is?
I see there's a doc page for it here -
https://docs.ceph.com/en/latest/dev/crimson/crimson/
Are we able to use Crimson yet in Pacific? (As in, do we need to