[ceph-users] Ceph tracker broken?

2024-07-01 Thread Frank Schilder
Hi all, hopefully someone on this list can help me out. I recently started to 
receive unsolicited e-mail from the ceph tracker and also certain merge/pull 
requests. The latest one is:

[CephFS - Bug #66763] (New) qa: revert commit to unblock snap-schedule 
testing

I have nothing to do with that and I have not subscribed to this tracker item 
(https://tracker.ceph.com/issues/66763) either. Yet, I receive unrequested 
updates.

Could someone please take a look and try to find out what the problem is?

Thanks a lot!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [EXTERN] Urgent help with degraded filesystem needed

2024-07-01 Thread Stefan Kooman

Hi Dietmar,

On 29-06-2024 10:50, Dietmar Rieder wrote:

Hi all,

finally we were able to repair the filesystem and it seems that we did 
not lose any data. Thanks for all suggestions and comments.


Here is a short summary of our journey:


Thanks for writing this up. This might be useful for someone in the future.

--- snip ---


X. Conclusion:

If we had been aware of the bug and its mitigation, we would have 
saved a lot of downtime and some nerves.


Is there an obvious place that I missed where such known issues are 
prominently made public? (The bug tracker maybe, but I think it is easy 
to miss the important ones among all the others.)



Not that I know of. But changes in behavior of Ceph (daemons) and/or 
Ceph kernels would be good to know about indeed. I follow the 
ceph-kernel mailing list to see what is going on with the development of 
kernel CephFS. And there is a thread about reverting the PR that Enrico 
linked to [1], here the last mail in that thread from Venky to Ilya [2]:


"Hi Ilya,

After some digging and talking to Jeff, I figured that it's possible
to disable async dirops from the mds side by setting
`mds_client_delegate_inos_pct` config to 0:

- name: mds_client_delegate_inos_pct
  type: uint
  level: advanced
  desc: percentage of preallocated inos to delegate to client
  default: 50
  services:
  - mds

So, I guess this patch is really not required. We can suggest this
config update to users and document it for now. We lack tests with
this config disabled, so I'll be adding the same before recommending
it out. Will keep you posted."

However, I have not seen any update after this. So apparently it is 
possible to disable this preallocate behavior globally by disabling it 
on the MDS. But there are (were) no MDS tests with this option disabled 
(I guess a percentage of "0" would disable it). So I'm not sure it's 
safe to disable it, or what would happen if you disable this on the MDS 
when there are clients actually using preallocated inodes. I have added 
Venky in the CC, so I hope he can give us an update about the recommended 
way(s) of disabling preallocated inodes.
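
(If it does turn out to be safe, the global switch would presumably be 
just the following; an untested sketch based on the option definition 
quoted above.)

# set the delegation percentage to 0 for all MDS daemons at runtime
ceph config set mds mds_client_delegate_inos_pct 0
# verify the value the MDS daemons now see
ceph config get mds mds_client_delegate_inos_pct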


Gr. Stefan

[1]: 
https://github.com/gregkh/linux/commit/f7a67b463fb83a4b9b11ceaa8ec4950b8fb7f902


[2]: 
https://lore.kernel.org/all/20231003110556.140317-1-vshan...@redhat.com/T/



___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: How can I increase or decrease the number of osd backfilling instantly

2024-07-01 Thread Jaemin Joo
It is working!! Thank you :)
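
(For the archives: the steps from the mClock documentation linked below 
boil down to something like the following sketch; option names assume 
Reef, and the values are only examples.)

# allow the mClock scheduler's recovery/backfill limits to be overridden
ceph config set osd osd_mclock_override_recovery_settings true
# these changes now take effect immediately, no OSD restart required
ceph config set osd osd_max_backfills 8
ceph config set osd osd_recovery_max_active 10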

On Fri, Jun 28, 2024 at 5:02 PM, Malte Stroem wrote:

> Hello Jaemin,
>
> it's mclock now.
>
> Read this:
>
> https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/
>
> and apply that:
>
>
> https://docs.ceph.com/en/reef/rados/configuration/mclock-config-ref/#steps-to-modify-mclock-max-backfills-recovery-limits
>
> Best,
> Malte
>
> On 28.06.24 08:54, Jaemin Joo wrote:
> > Hi All,
> >
> > I'd like to speed up or slow down OSD recovery in Ceph v18.2.1.
> > According to the page (
> > https://www.suse.com/ko-kr/support/kb/doc/?id=19693 ), I understand
> > that osd_max_backfills and osd_recovery_max_active have to be increased
> > or decreased, but changing them seems not to affect the number of OSDs
> > backfilling instantly, so I tried restarting all of the OSDs even
> > though I shouldn't have needed to.
> > I wonder how long it takes for the number of backfilling OSDs to
> > increase or decrease if the OSDs are not restarted, and whether there
> > is a way to change the number of backfills instantly after the
> > configuration is changed.
> > ___
> > ceph-users mailing list -- ceph-users@ceph.io
> > To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [EXTERN] Re: Urgent help with degraded filesystem needed

2024-07-01 Thread Xiubo Li


On 6/26/24 14:08, Dietmar Rieder wrote:
...sending this also to the list and Xiubo (they were accidentally removed 
from the recipients)...


On 6/25/24 21:28, Dietmar Rieder wrote:

Hi Patrick,  Xiubo and List,

finally we managed to get the filesystem repaired and running again! 
YEAH, I'm so happy!!


Big thanks for your support, Patrick and Xiubo! (Would love to invite you 
for a beer!)



Please see some comments and (important?) questions below:

On 6/25/24 03:14, Patrick Donnelly wrote:

On Mon, Jun 24, 2024 at 5:22 PM Dietmar Rieder
 wrote:


(resending this; the original message seems not to have made it 
through amid all the spam recently sent to the list, my 
apologies if it doubles at some point)


Hi List,

we are still struggling to get our CephFS back online again. This 
is an update to inform you what we did so far, and we kindly ask 
for any input on this to get an idea of how to proceed:


After resetting the journals, Xiubo suggested (in a PM) going on 
with the disaster recovery procedure:


cephfs-data-scan init skipped creating the inodes 0x0x1 and 0x0x100

[root@ceph01-b ~]# cephfs-data-scan init
Inode 0x0x1 already exists, skipping create.  Use --force-init to 
overwrite the existing object.
Inode 0x0x100 already exists, skipping create.  Use --force-init to 
overwrite the existing object.


We did not use --force-init and proceeded with scan_extents using a 
single worker, which was indeed very slow.


After ~24h we interrupted the scan_extents run and restarted it with 32 
workers, which went through in about 2h15min w/o any issue.
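
(For reference, the multi-worker runs follow the pattern from the 
cephfs-data-scan disaster-recovery documentation, roughly as sketched 
below; the data pool name is a placeholder.)

# run 32 scan_extents workers in parallel; worker i handles slice i of 32
for i in $(seq 0 31); do
  cephfs-data-scan scan_extents --worker_n $i --worker_m 32 cephfs_data &
done
wait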


Then I started scan_inodes with 32 workers; this also finished 
after ~50min with no output on stderr or stdout.


I went on with scan_links, which after ~45 minutes threw the 
following error:


# cephfs-data-scan scan_links
Error ((2) No such file or directory)


Not sure what this indicates necessarily. You can try to get more
debug information using:

[client]
   debug mds = 20
   debug ms = 1
   debug client = 20

in the local ceph.conf for the node running cephfs-data-scan.


I did that and restarted "cephfs-data-scan scan_links".

It didn't produce any additional debug output; however, this time it 
just went through without error (~50 min).


We then reran "cephfs-data-scan cleanup" and it also finished without 
error after about 10h.


We then set the fs as repaired and all seems to work fine again:

[root@ceph01-b ~]# ceph mds repaired 0
repaired: restoring rank 1:0

[root@ceph01-b ~]# ceph -s
   cluster:
 id: aae23c5c-a98b-11ee-b44d-00620b05cac4
 health: HEALTH_OK

   services:
 mon: 3 daemons, quorum cephmon-01,cephmon-03,cephmon-02 (age 6d)
 mgr: cephmon-01.dsxcho(active, since 6d), standbys: 
cephmon-02.nssigg, cephmon-03.rgefle

 mds: 1/1 daemons up, 5 standby
 osd: 336 osds: 336 up (since 2M), 336 in (since 4M)

   data:
 volumes: 1/1 healthy
 pools:   4 pools, 6401 pgs
 objects: 284.68M objects, 623 TiB
 usage:   890 TiB used, 3.1 PiB / 3.9 PiB avail
 pgs: 6206 active+clean
  140  active+clean+scrubbing
  55   active+clean+scrubbing+deep

   io:
 client:   3.9 MiB/s rd, 84 B/s wr, 482 op/s rd, 1.11k op/s wr


[root@ceph01-b ~]# ceph fs status
cephfs - 0 clients
==
RANK  STATE            MDS              ACTIVITY     DNS    INOS  DIRS  CAPS
 0    active  default.cephmon-03.xcujhz  Reqs:    0 /s   124k  60.3k  1993     0

  POOL    TYPE USED  AVAIL
ssd-rep-metadata-pool  metadata   298G  63.4T
   sdd-rep-data-pool  data    10.2T  84.5T
    hdd-ec-data-pool  data 808T  1929T
    STANDBY MDS
default.cephmon-01.cepqjp
default.cephmon-01.pvnqad
default.cephmon-02.duujba
default.cephmon-02.nyfook
default.cephmon-03.chjusj
MDS version: ceph version 18.2.2 
(531c0d11a1c5d39fbfe6aa8a521f023abf3bf3e2) reef (stable)



The MDS log however shows some "bad backtrace on directory inode" 
messages:


2024-06-25T18:45:36.575+0000 7f8594659700  1 mds.default.cephmon-03.xcujhz Updating MDS map to version 8082 from mon.1
2024-06-25T18:45:36.575+0000 7f8594659700  1 mds.0.8082 handle_mds_map i am now mds.0.8082
2024-06-25T18:45:36.575+0000 7f8594659700  1 mds.0.8082 handle_mds_map state change up:standby --> up:replay
2024-06-25T18:45:36.575+0000 7f8594659700  1 mds.0.8082 replay_start
2024-06-25T18:45:36.575+0000 7f8594659700  1 mds.0.8082  waiting for osdmap 34331 (which blocklists prior instance)
2024-06-25T18:45:36.581+0000 7f858de4c700  0 mds.0.cache creating system inode with ino:0x100
2024-06-25T18:45:36.581+0000 7f858de4c700  0 mds.0.cache creating system inode with ino:0x1
2024-06-25T18:45:36.589+0000 7f858ce4a700  1 mds.0.journal EResetJournal
2024-06-25T18:45:36.589+0000 7f858ce4a700  1 mds.0.sessionmap wipe start
2024-06-25T18:45:36.589+0000 7f858ce4a700  1 mds.0.sessionmap wipe result
2024-06-25T18:45:36.589+0000 7f858ce4a700  1 mds.0.sessionmap wipe done
2024-06-25T18:45:36.589+0000 7f858ce4a700  1 mds.0.8082 Finished repl

[ceph-users] Re: Viability of NVMeOF/TCP for VMWare

2024-07-01 Thread Maged Mokhtar

On 28/06/2024 17:59, Frédéric Nass wrote:


We came to the same conclusions as Alexander when we studied replacing Ceph's 
iSCSI implementation with Ceph's NFS-Ganesha implementation: HA was not working.
During failovers, vmkernel would fail with messages like this:
2023-01-14T09:39:27.200Z Wa(180) vmkwarning: cpu18:2098740)WARNING: NFS41: 
NFS41ProcessExidResult:2499: 'Cluster Mismatch due to different server scope. 
Probable server bug. Remount data store to access.'

We replaced Ceph's iSCSI implementation with PetaSAN's iSCSI GWs plugged to our 
external Ceph cluster (unsupported setup) and never looked back.
It's HA, active/active, highly scalable, robust whatever the situation (network 
issues, slow requests, ceph osd pause), and it rocks performance-wise.



Good to hear. Interfacing PetaSAN with external clusters can be set up 
manually, but it is not something we currently support; we may add this 
in future releases. Our NFS setup also works well with VMware, with HA 
and active/active giving throughput on par with iSCSI but with slightly 
lower IOPS.





We're using a PSP of type RR with below SATP rule:
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -P VMW_PSP_RR -V PETASAN -M RBD -c tpgs_on -o enable_action_OnRetryErrors -O "iops=1" -e "Ceph iSCSI ALUA RR iops=1 PETASAN"

And these adaptor settings:
esxcli system settings advanced set -o /ISCSI/MaxIoSizeKB -i 512
esxcli iscsi adapter param set -A $(esxcli iscsi adapter list | grep iscsi_vmk | cut -d ' ' -f1) --key FirstBurstLength --value 524288
esxcli iscsi adapter param set -A $(esxcli iscsi adapter list | grep iscsi_vmk | cut -d ' ' -f1) --key MaxBurstLength --value 524288
esxcli iscsi adapter param set -A $(esxcli iscsi adapter list | grep iscsi_vmk | cut -d ' ' -f1) --key MaxRecvDataSegment --value 524288

Note that it's important to **not** use the object-map feature on RBD images, 
to avoid the issue mentioned in [1].



With version 3.2 and earlier, our iSCSI will not connect if 
fast-diff/object-map is enabled. With 3.3 we do support 
fast-diff/object-map, but in that case we recommend setting clients to an 
active/failover configuration. It will still work OK with active/active, 
but with degraded performance due to the exclusive-lock acquisition among 
the nodes (required to support fast-diff/object-map): the lock will 
ping-pong among the nodes, hence the active/passive recommendation. But 
to get top scale-out performance, active/active is recommended, and hence 
no fast-diff/object-map.
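
(For completeness, dropping these features from an existing image for an 
active/active setup would look roughly like the sketch below; the 
pool/image name is made up.)

# fast-diff depends on object-map, so disable both together
rbd feature disable rbd/vmware-lun01 fast-diff object-map
# confirm which features remain on the image
rbd info rbd/vmware-lun01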





Regarding NVMe-oF, there's been an incredible amount of work done over a year 
and a half but we'll likely need to wait a bit longer to have something fully 
production-ready (hopefully active/active).



We will definitely be supporting this. Note that active/active will not be 
possible with fast-diff/object-map/exclusive-lock, for the same reasons 
noted above for iSCSI.


/maged


Regards,
Frédéric.

[1] https://croit.io/blog/fixing-data-corruption

- On Jun 28, 2024, at 7:12 AM, Alexander Patrakov patra...@gmail.com wrote:


For NFS (e.g., as implemented by NFS-ganesha), the situation is also
quite stupid.

Without high availability (HA), it works (that is, until you update
NFS-Ganesha version), but corporate architects won't let you deploy
any system without HA, because, in their view, non-HA systems are not
production-ready by definition. (And BTW, the current NVMe-oF gateway
also has no multipath and thus no viable HA)

With an attempt to set up HA for NFS, you'll get at least the
following showstoppers:

For NFS v4.1:

* VMware refuses to work until the manual admin intervention if it
sees any change in the "owner" and "scope" fields of the EXCHANGE_ID
message between the previous and the current NFS connection.
* NFS-Ganesha sets both fields from the hostname by default, and the
patch that makes these fields configurable is "quite recent" (in
version 4.3). This is important, as otherwise, every NFS server
fail-over would trip off VMware, thus defeating the point of a
high-availability setup.
* There is a regression in NFS-Ganesha that manifests as a deadlock
(easily triggerable even without Ceph by running xfstests), which is
critical, because systemd cannot restart deadlocked services.
Unfortunately, the last NFS-Ganesha version before the regression
(4.0.8) does not contain the patch that allows manipulating the
"owner" and "scope" fields.
* Cephadm-based deployments do not set these configuration options anyway.
* If you would like to use the "rados_cluster" NFSv4 recovery backend
(used for grace periods), you need to be extra careful with various
"server names" also because they are used to decide whether to end the
grace period. If the recovery backend has seen two server names
(corresponding to two NFS-Ganesha instances, for scale-out), then both
must be up for the grace period to end. If there is only one server
name, you are allowed to run only one instance. If you want high
availability together with scale-out, you need to be able to schedule

[ceph-users] squid 19.1.0 RC QE validation status

2024-07-01 Thread Yuri Weinstein
Details of this release are summarized here:

https://tracker.ceph.com/issues/66756#note-1

Release Notes - TBD
LRC upgrade - TBD

(Reruns were not done yet.)

Seeking approvals/reviews for:

smoke
rados - Radek, Laura
rgw- Casey
fs - Venky
orch - Adam King
rbd, krbd - Ilya
quincy-x, reef-x - Laura, Neha
powercycle - Brad
perf-basic - Yaarit, Laura
crimson-rados - Samuel
ceph-volume - Guillaume

Pls let me know if any tests were missed from this list.
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: [EXTERN] Urgent help with degraded filesystem needed

2024-07-01 Thread Dietmar Rieder

Hi Stefan,

On 7/1/24 10:34, Stefan Kooman wrote:

Hi Dietmar,

On 29-06-2024 10:50, Dietmar Rieder wrote:

Hi all,

finally we were able to repair the filesystem and it seems that we did 
not lose any data. Thanks for all suggestions and comments.


Here is a short summary of our journey:


Thanks for writing this up. This might be useful for someone in the future.


Yeah, you're welcome, I thought so too.



--- snip ---


X. Conclusion:

If we had been aware of the bug and its mitigation, we would have 
saved a lot of downtime and some nerves.


Is there an obvious place that I missed where such known issues are 
prominently made public? (The bug tracker maybe, but I think it is 
easy to miss the important ones among all the others.)



Not that I know of. But changes in behavior of Ceph (daemons) and/or 
Ceph kernels would be good to know about indeed. I follow the 
ceph-kernel mailing list to see what is going on with the development of 
kernel CephFS. And there is a thread about reverting the PR that Enrico 
linked to [1], here the last mail in that thread from Venky to Ilya [2]:


"Hi Ilya,

After some digging and talking to Jeff, I figured that it's possible
to disable async dirops from the mds side by setting
`mds_client_delegate_inos_pct` config to 0:

- name: mds_client_delegate_inos_pct
  type: uint
  level: advanced
  desc: percentage of preallocated inos to delegate to client
  default: 50
  services:
  - mds

So, I guess this patch is really not required. We can suggest this
config update to users and document it for now. We lack tests with
this config disabled, so I'll be adding the same before recommending
it out. Will keep you posted."

However, I have not seen any update after this. So apparently it is 
possible to disable this preallocate behavior globally by disabling it 
on the MDS. But there are (were) no MDS tests with this option disabled 
(I guess a percentage of "0" would disable it). So I'm not sure it's 
safe to disable it, or what would happen if you disable this on the MDS 
when there are clients actually using preallocated inodes. I have added 
Venky in the CC, so I hope he can give us an update about the recommended 
way(s) of disabling preallocated inodes.


Gr. Stefan

[1]: 
https://github.com/gregkh/linux/commit/f7a67b463fb83a4b9b11ceaa8ec4950b8fb7f902


[2]: 
https://lore.kernel.org/all/20231003110556.140317-1-vshan...@redhat.com/T/


I'm curious about any updates as well. I hope that not too many CephFS 
users will end up in this situation until the bug is fixed. Furthermore, 
I hope that by posting our experiences here some will be alerted and 
apply the proposed mitigation settings.
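
(For readers who find this thread later: one client-side way to disable 
async dirops is the kernel client's wsync mount option, roughly as below; 
the monitor address and credentials are placeholders.)

# mount CephFS with synchronous namespace operations (async dirops disabled)
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
  -o name=admin,secretfile=/etc/ceph/admin.secret,wsync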


Best
  Dietmar




___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-01 Thread Adam King
For the orch run from https://tracker.ceph.com/issues/66756, most of the
failed jobs will be fixed by https://github.com/ceph/ceph/pull/58341 being
merged (and included in the test build). Guillaume has scheduled a
teuthology run to test the PR already.

On Mon, Jul 1, 2024 at 10:23 AM Yuri Weinstein  wrote:

> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/66756#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> (Reruns were not done yet.)
>
> Seeking approvals/reviews for:
>
> smoke
> rados - Radek, Laura
> rgw- Casey
> fs - Venky
> orch - Adam King
> rbd, krbd - Ilya
> quincy-x, reef-x - Laura, Neha
> powercycle - Brad
> perf-basic - Yaarit, Laura
> crimson-rados - Samuel
> ceph-volume - Guillaume
>
> Pls let me know if any tests were missed from this list.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph tracker broken?

2024-07-01 Thread Gregory Farnum
You currently have "Email notifications" set to "For any event on all
my projects". I believe that's the firehose setting, so I've gone
ahead and changed it to "Only for things I watch or I'm involved in".
I'm unaware of any reason that would have been changed on the back
end, though there were some upgrades recently. It's also possible you
got assigned to a new group or somehow joined some of the projects
(I'm not well-versed in all the terminology there).
-Greg

On Sun, Jun 30, 2024 at 10:35 PM Frank Schilder  wrote:
>
> Hi all, hopefully someone on this list can help me out. I recently started to 
> receive unsolicited e-mail from the ceph tracker and also certain merge/pull 
> requests. The latest one is:
>
> [CephFS - Bug #66763] (New) qa: revert commit to unblock snap-schedule 
> testing
>
> I have nothing to do with that and I have not subscribed to this tracker item 
> (https://tracker.ceph.com/issues/66763) either. Yet, I receive unrequested 
> updates.
>
> Could someone please take a look and try to find out what the problem is?
>
> Thanks a lot!
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-01 Thread Ilya Dryomov
On Mon, Jul 1, 2024 at 4:24 PM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/66756#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> (Reruns were not done yet.)
>
> Seeking approvals/reviews for:
>
> smoke
> rados - Radek, Laura
> rgw- Casey
> fs - Venky
> orch - Adam King
> rbd, krbd - Ilya

Hi Yuri,

Need reruns for rbd and krbd.

After infrastructure failures are cleared in reruns, I'm prepared to
approve as is, but here is a list of no-brainer PRs that would fix some
of the failing jobs in case you end up rebuilding the branch:

https://github.com/ceph/ceph/pull/57031 (qa-only)
https://github.com/ceph/ceph/pull/57465 (qa-only)
https://github.com/ceph/ceph/pull/57556 (qa-only)
https://github.com/ceph/ceph/pull/57571

Thanks,

Ilya
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: squid 19.1.0 RC QE validation status

2024-07-01 Thread Casey Bodley
On Mon, Jul 1, 2024 at 10:23 AM Yuri Weinstein  wrote:
>
> Details of this release are summarized here:
>
> https://tracker.ceph.com/issues/66756#note-1
>
> Release Notes - TBD
> LRC upgrade - TBD
>
> (Reruns were not done yet.)
>
> Seeking approvals/reviews for:
>
> smoke
> rados - Radek, Laura
> rgw- Casey

rgw approved, thanks

One rgw/notifications job crashed due to
https://tracker.ceph.com/issues/65337. The fix was already backported
to squid, but merged after we forked the RC. I would not consider it a
blocker for this RC.

> fs - Venky
> orch - Adam King
> rbd, krbd - Ilya
> quincy-x, reef-x - Laura, Neha
> powercycle - Brad
> perf-basic - Yaarit, Laura
> crimson-rados - Samuel
> ceph-volume - Guillaume
>
> Pls let me know if any tests were missed from this list.
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Ceph tracker broken?

2024-07-01 Thread Frank Schilder
Hi Gregory,

thanks a lot! I hope that sets things back to normal.

It seems possible that my account got assigned to a project by accident. Not 
sure if you (or I myself) can find out about that. I'm not a dev and should not 
be on projects, but some of my tickets were picked up as "low-hanging fruit" 
and that's when it started. I got added to some related PRs and maybe on this 
occasion to a lot more by accident.

Thanks for your help!
=
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14


From: Gregory Farnum 
Sent: Monday, July 1, 2024 8:38 PM
To: Frank Schilder
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Ceph tracker broken?

You currently have "Email notifications" set to "For any event on all
my projects". I believe that's the firehose setting, so I've gone
ahead and changed it to "Only for things I watch or I'm involved in".
I'm unaware of any reason that would have been changed on the back
end, though there were some upgrades recently. It's also possible you
got assigned to a new group or somehow joined some of the projects
(I'm not well-versed in all the terminology there).
-Greg

On Sun, Jun 30, 2024 at 10:35 PM Frank Schilder  wrote:
>
> Hi all, hopefully someone on this list can help me out. I recently started to 
> receive unsolicited e-mail from the ceph tracker and also certain merge/pull 
> requests. The latest one is:
>
> [CephFS - Bug #66763] (New) qa: revert commit to unblock snap-schedule 
> testing
>
> I have nothing to do with that and I have not subscribed to this tracker item 
> (https://tracker.ceph.com/issues/66763) either. Yet, I receive unrequested 
> updates.
>
> Could someone please take a look and try to find out what the problem is?
>
> Thanks a lot!
> =
> Frank Schilder
> AIT Risø Campus
> Bygning 109, rum S14
> ___
> ceph-users mailing list -- ceph-users@ceph.io
> To unsubscribe send an email to ceph-users-le...@ceph.io
>

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Debian 12 + Ceph 18.2 + ARM64

2024-07-01 Thread filip Mutterer
Trying to install Ceph 18.2 on Debian 12, but the version of the "ceph" 
package looks wrong when doing apt search ceph.



 My ceph.list:

deb https://download.ceph.com/debian-reef/ bookworm main


 Apt search result:

apt search ceph|grep 18\.2

WARNING: apt does not have a stable CLI interface. Use with caution in 
scripts.


ceph-grafana-dashboards/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-cephadm/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-dashboard/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-diskprediction-local/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-k8sevents/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-modules-core/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-rook/stable,stable 18.2.2-1~bpo12+1 all
ceph-prometheus-alerts/stable,stable 18.2.2-1~bpo12+1 all
ceph-volume/stable,stable 18.2.2-1~bpo12+1 all
cephfs-shell/stable,stable 18.2.2-1~bpo12+1 all
cephfs-top/stable,stable 18.2.2-1~bpo12+1 all
libcephfs-java/stable,stable 18.2.2-1~bpo12+1 all
python3-ceph-common/stable,stable 18.2.2-1~bpo12+1 all

So there is no "ceph" package in version 18.2, only version 16. Is using 
an ARM-based system a problem here, since the packages above end in "all"?


Is it possible to disable the installation of Ceph's older 16.2 packages?
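
(One way to do that is apt pinning; a hedged sketch follows. The 
preferences file name is arbitrary, and note this only controls which 
repository wins; it cannot provide an arm64 "ceph" package if 
download.ceph.com does not build one.)

# check which repository currently wins for the "ceph" package
apt policy ceph

# prefer download.ceph.com over Debian's own (older) build
cat > /etc/apt/preferences.d/99-ceph <<'EOF'
Package: *
Pin: origin download.ceph.com
Pin-Priority: 1001
EOF
apt update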
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Debian 12 + Ceph 18.2 + ARM64

2024-07-01 Thread filip Mutterer
I tried commenting out all other sources, but in the end the "ceph" 
package didn't show up.


On 02.07.2024 00:43, Anthony D'Atri wrote:

Did you disable the Debian OS repositories, so you don’t get their bundled 
build?


On Jul 1, 2024, at 6:21 PM, filip Mutterer  wrote:

Trying to install Ceph 18.2 on Debian 12, but the version of the "ceph" package 
looks wrong when doing apt search ceph.


 My ceph.list:

deb https://download.ceph.com/debian-reef/ bookworm main


 Apt search result:

apt search ceph|grep 18\.2

WARNING: apt does not have a stable CLI interface. Use with caution in scripts.

ceph-grafana-dashboards/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-cephadm/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-dashboard/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-diskprediction-local/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-k8sevents/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-modules-core/stable,stable 18.2.2-1~bpo12+1 all
ceph-mgr-rook/stable,stable 18.2.2-1~bpo12+1 all
ceph-prometheus-alerts/stable,stable 18.2.2-1~bpo12+1 all
ceph-volume/stable,stable 18.2.2-1~bpo12+1 all
cephfs-shell/stable,stable 18.2.2-1~bpo12+1 all
cephfs-top/stable,stable 18.2.2-1~bpo12+1 all
libcephfs-java/stable,stable 18.2.2-1~bpo12+1 all
python3-ceph-common/stable,stable 18.2.2-1~bpo12+1 all

So there is no "ceph" package in version 18.2, only version 16. Is using an 
ARM-based system a problem here, since the packages above end in "all"?

Is it possible to disable the installation of Ceph's older 16.2 packages?
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io

___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


[ceph-users] Re: Debian 12 + Ceph 18.2 + ARM64

2024-07-01 Thread filip Mutterer
In the official guide for manual installation, the command ceph -s shows 
the cluster status. But having it in a different version gives me doubts 
about whether I am installing Ceph the right way.


On 02.07.2024 02:03, Anthony D'Atri wrote:

I’m not familiar with a “Ceph” package. Might be a Debian metapackage?




On Jul 1, 2024, at 7:09 PM, filip Mutterer  wrote:


___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io