Hi,
it's been a while since I last looked into this, but as far as I know,
you'd have to iterate over each object in the pool to restore it from
your snapshot. There's no option to restore all of them with one
command.
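For reference, a minimal sketch of what that per-object loop could look like, with hypothetical pool and pool-snapshot names (`mypool`, `mysnap`); `rados rollback` reverts one object at a time to its state in the pool snapshot:
```
# Roll every object in the pool back to the pool snapshot, one by one.
# POOL and SNAP are placeholders; test this on a non-production pool first.
POOL=mypool
SNAP=mysnap
rados -p "$POOL" ls | while read -r obj; do
    rados -p "$POOL" rollback "$obj" "$SNAP"
done
```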
Regards,
Eugen
Zitat von Pavel Kaygorodov :
Hi!
Maybe a dumb ques
Hi George,
Looks like you hit this one [1]. Can't find the fix [2] in the Reef release notes
[3]. You'll have to cherry-pick it and build from source, or wait for it to land in
the next build.
Regards,
Frédéric.
[1] https://tracker.ceph.com/issues/58878
[2] https://github.com/ceph/ceph/pull/55265
[3] https
Hi,
So [2] is the fix for [1] and should be backported? Currently the backport fields on
the tracker are not filled in, so no one knows that backports are needed.
k
> On 27 Sep 2024, at 11:01, Frédéric Nass
> wrote:
>
> Hi George,
>
> Looks like you hit this one [1]. Can't find the fix [2] in Reef release notes
> [3].
Hi Eugen,
Thanks again for taking the time to help us with this.
Here are answers to your questions:
Nothing stands out from the mgr logs. Even when `ceph orch device ls` stops
reporting, it still shows a claim on the osd in the logs when I run it:
Sep 27 09:39:24 ceph-mon3 bash[476409]: debug
Adding --zap to the orch command cleans up the WAL logical volume:
ceph orch osd rm 37 --replace --zap
After the replacement, the new OSD is correctly created. Tested a few times with
18.2.4.
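For completeness, here is roughly the sequence we ran, as a hedged sketch (OSD id 37 is just the example from above; your devices and drive group spec will differ):
```
# Mark the OSD for replacement and zap its backing devices, including the WAL LV
ceph orch osd rm 37 --replace --zap

# Watch the drain/removal progress
ceph orch osd rm status

# After swapping the drive, check that it shows up as available again;
# with a matching drive group spec the orchestrator recreates the OSD on its own
ceph orch device ls
```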
Thanks.
On Fri, 27 Sept 2024 at 19:31, Igor Fedotov wrote:
> Hi!
>
> I'm not an expert in the Ceph orchestrator but
On Tue, Aug 27, 2024 at 6:49 AM Eugen Block wrote:
>
> Hi,
>
> I just looked into one customer cluster that we upgraded some time ago
> from Octopus to Quincy (17.2.6) and I'm wondering why there are still
> both pools, "device_health_metrics" and ".mgr".
>
> According to the docs [0], it's suppos
Oh interesting, I just got into the same situation (I believe) on a
test cluster:
host1:~ # ceph orch ps | grep unknown
osd.1     host6    stopped    72s ago    36m    -    4096M
osd.13    host6
Hi!
I'm not an expert in the Ceph orchestrator, but it looks to me like the WAL
volume hasn't been properly cleaned up during osd.1 removal.
Please compare LVM tags for osd.0 and .1:
osd.0:
"devices": [
"/dev/sdc"
],
...
"lv_tags":
"...,ceph.osd_fsid=d47
I am running 18.2.2, which apparently is the latest one available for Proxmox
at this time (9/2024).
I’d rather not mess around with backporting and testing fixes at this point,
since this is our “production” cluster. If it were not a production one, then
I could possibly play around with this
This is not a unique issue, and I suspect it affects lots of people who don’t know it
yet.
It might be that you should remove the old LVM volume first, or specify it with an
explicit create command.
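A rough sketch of both options, with placeholder VG/LV/device names you'd need to double-check before zapping anything:
```
# Option 1: wipe the stale logical volume left over from the removed OSD
# (run on the OSD host, e.g. inside "cephadm shell")
ceph-volume lvm zap --destroy /dev/<vg-name>/<lv-name>

# Option 2: create the OSD explicitly instead of waiting for the spec to pick it up
ceph orch daemon add osd <host>:<device-path>
```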
> On Sep 27, 2024, at 8:55 AM, mailing-lists wrote:
>
> Dear Ceph-users,
> I have a problem that I'd like to ha
Dear Ceph-users,
I have a problem that I'd like to get your input on.
Preface:
I have got a test cluster and a production cluster. Both are set up the
same and both have the same "issue". I am running Ubuntu 22.04 and
deployed Ceph 17.2.3 via cephadm. Upgraded to 17.2.7 later on, which i
By increasing the debug level I found the following, but I have no idea how to
fix this issue.
```
src/osd/OSDMap.cc: 3242: FAILED ceph_assert(pg_upmap_primaries.empty())
```
There is only one topic on Google, and it has no answer
yes, this is a bug, indeed.
https://www.spinics.net/lists/ceph-users/msg82468.html
> Remove the mappings by running:
> $ `ceph osd dump`
> For each pg_upmap_primary entry in the above output:
> $ `ceph osd rm-pg-upmap-primary <pgid>`
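Spelled out as a loop, assuming `ceph osd dump` prints those entries as `pg_upmap_primary <pgid> <osd>` (worth verifying the field position on your version before running it):
```
# Remove every pg_upmap_primary mapping listed in the osdmap
ceph osd dump | awk '/pg_upmap_primary/ {print $2}' | while read -r pgid; do
    ceph osd rm-pg-upmap-primary "$pgid"
done
```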
fixed by https://www.spinics.net/lists/ceph-users/msg82468.html
CLOSED.
Here are the contents from the same directory on our osd node:
ceph-osd31.prod.os:/var/lib/ceph/9b3b3539-59a9-4338-8bab-3badfab6e855# ls -l
total 412
-rw-r--r-- 1 root root 366903 Sep 14 14:53
cephadm.8b92cafd937eb89681ee011f9e70f85937fd09c4bd61ed4a59981d275a1f255b
drwx------ 3 167 167 4096
WARNING: if you're using cephadm and NFS, please don't upgrade to this
release for the time being. There are compatibility issues between cephadm's
deployment of the NFS daemon and ganesha v6, which made its way into the
release container.
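A quick, hedged way to check whether a cluster is affected before upgrading, i.e. whether cephadm manages any NFS service at all:
```
# cephadm-managed NFS services, if any
ceph orch ls nfs

# NFS clusters defined through the nfs mgr module
ceph nfs cluster ls
```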
On Thu, Sep 26, 2024 at 6:20 PM Laura Flores wrote:
> We're v
Hello everybody,
I found an interesting thing: for some reason ALL the monitors crash when I try to
run rbd map on a client host.
here is my pool:
root@ceph1:~# ceph osd pool ls
iotest
Here is my rbd in this pool:
root@ceph1:~# rbd ls -p iotest
test1
These are the client credentials for connecting to this pool:
[cli
Hi,
> On 27 Sep 2024, at 14:59, Alex from North wrote:
>
> By increasing the debug level I found the following but have no idea how to
> fix this issue.
>
> ```
> src/osd/OSDMap.cc: 3242: FAILED ceph_assert(pg_upmap_primaries.empty())
> ```
>
> There is only one topic in google and with no a
Hi Alex,
Maybe this one [1], which leads to OSD/MON asserts. Have a look at Laura's post
here [2] for more information.
Updating clients to Reef+ (not sure which kernel added the upmap read feature)
or removing any pg_upmap_primaries entries may help in your situation.
Regards,
Frédéric.
[1]
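Two quick checks that may help narrow this down, assuming the upmap-read feature is the culprit: whether any pg_upmap_primary mappings exist at all, and what releases/feature bits the connected clients report:
```
# Count pg_upmap_primary entries in the osdmap
ceph osd dump | grep -c pg_upmap_primary

# Feature bits and release names reported by connected clients and daemons
ceph features
```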
We have pushed a new 19.2.0 container image that uses ganesha v5.5 rather
than v6. For those who hit this issue, rerunning the `ceph orch upgrade`
command used to upgrade to the original 19.2.0 image (ceph orch
upgrade start quay.io/ceph/ceph:v19.2.0) was tested and confirmed to get
the nfs
Hi,
Do we know roughly when Quincy 17.2.8 is going to be released?
Thank you
Thanks for noting this. I just imported our last cluster and couldn't get
ceph-exporter to start. I noticed that the images it was using for
node-exporter and ceph-exporter were not the same as on the other clusters!
I wish this were in the adoption documentation. I have a running list of all
the thing