On 08/11/2023 at 19:29:19+0100, David C. wrote:
Hi David.
>
> What would be the number of replicas (in total and on each row) and their
> distribution on the tree ?
Well, “inside” a row that would be 3 in replica mode.
Between rows... well, two ;-)
Besides understanding how to write a rule a l
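For reference, a minimal sketch of a rule for that layout (pool size 6: pick two
rows, three hosts per row); the rule name, id and bucket names are illustrative,
not taken from the thread:
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt
# append to crushmap.txt:
#   rule replicated_two_rows {
#       id 10
#       type replicated
#       step take default
#       step choose firstn 2 type row
#       step chooseleaf firstn 3 type host
#       step emit
#   }
crushtool -c crushmap.txt -o crushmap.new
ceph osd setcrushmap -i crushmap.new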
Hi,
In my opinion... please don't. In the worst case, maybe only messages concerning
critical updates (security, stability issues).
For two reasons:
1) however low the impact may be, server resources are precious...
2) my time is also precious. If I log in to the GUI, it's with the intention to
do some
My vote would be "no":
* This is a high-criticality operational system. Not the right place
for distracting extras or dashboard bloat.
* Our ceph systems deliberately don't have direct internet connectivity.
* There is plenty of useful operational information that could fil
Hi,
you mean you forgot your password? You can remove the service with
'ceph orch rm grafana', then re-apply your grafana.yaml containing the
initial password. Note that this would remove all of the Grafana
configs, custom dashboards, etc.; you would have to reconfigure them.
So before do
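For reference, a minimal sketch of that re-apply flow, assuming a cephadm service
spec for Grafana (the password value and placement are placeholders):
cat > grafana.yaml <<EOF
service_type: grafana
placement:
  count: 1
spec:
  initial_admin_password: "changeme"
EOF
ceph orch rm grafana
ceph orch apply -i grafana.yaml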
On 09.11.2023 at 10:35, Nizamudeen A wrote:
Hello,
We wanted to get some feedback on one of the features that we are planning
to bring in for upcoming releases.
On the Ceph GUI, we thought it could be interesting to show information
regarding the community events, ceph release information (Release no
Hello,
We wanted to get some feedback on one of the features that we are planning
to bring in for upcoming releases.
On the Ceph GUI, we thought it could be interesting to show information
regarding the community events, ceph release information (Release notes and
changelogs) and maybe even notif
On Thu, Nov 9, 2023 at 3:53 AM Laura Flores wrote:
> @Venky Shankar and @Patrick Donnelly
> , I reviewed the smoke suite results and identified
> a new bug:
>
> https://tracker.ceph.com/issues/63488 - smoke test fails from "NameError:
> name 'DEBUGFS_META_DIR' is not defined"
>
> Can you take a
@Venky Shankar and @Patrick Donnelly
, I reviewed the smoke suite results and identified a
new bug:
https://tracker.ceph.com/issues/63488 - smoke test fails from "NameError:
name 'DEBUGFS_META_DIR' is not defined"
Can you take a look?
On Wed, Nov 8, 2023 at 12:32 PM Adam King wrote:
> >
> > h
This server is a Dell R730 configured with an HBA 330 card; the HDDs are configured
in write-through mode.
From: David C.
Sent: Wednesday, November 8, 2023 10:14
To: Peter
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] HDD cache
Without a (RAID/JBOD) controller?
On Wed, Nov 8
>
> https://tracker.ceph.com/issues/63151 - Adam King do we need anything for
> this?
>
Yes, but not an actual code change in the main ceph repo. I'm looking into
a ceph-container change to alter the ganesha version in the container as a
solution.
On Wed, Nov 8, 2023 at 11:10 AM Yuri Weinstein w
Hi Albert,
What would be the number of replicas (in total and on each row) and their
distribution on the tree ?
On Wed, Nov 8, 2023 at 18:45, Albert Shih wrote:
> Hi everyone,
>
> I'm a total newbie with Ceph, so sorry if I'm asking a stupid question.
>
> I'm trying to understand how the
Without a (RAID/JBOD) controller?
On Wed, Nov 8, 2023 at 18:36, Peter wrote:
> Hi All,
>
> I note that the HDD cluster commit delay improves after I turn off the HDD cache.
> However, I also note that not all HDDs are able to turn off the cache. In
> particular, I found that two HDDs with the same model number, on
Hi everyone,
I'm a total newbie with Ceph, so sorry if I'm asking a stupid question.
I'm trying to understand how the crush map & rules work; my goal is to have
two groups of 3 servers, so I'm using the “row” bucket
ID  CLASS  WEIGHT  TYPE NAME  STATUS  REWEIGHT  PRI-AFF
-1
Hi All,
I note that the HDD cluster commit delay improves after I turn off the HDD cache.
However, I also note that not all HDDs are able to turn off the cache. In particular,
I found that two HDDs with the same model number behave differently: one can turn
the cache off, the other cannot. I guess my system config or something is different
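A minimal sketch of how one might inspect and toggle the volatile write cache per
drive (device names are placeholders); whether the setting sticks depends on the
drive firmware and the controller, which could explain the difference described above:
sdparm --get WCE /dev/sdX            # SAS/SCSI: 1 = write cache enabled
sdparm --set WCE=0 --save /dev/sdX   # try to disable it persistently
hdparm -W /dev/sdY                   # SATA: show write-caching state
hdparm -W 0 /dev/sdY                 # SATA: disable the write cache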
Sorry for not making it clear, we are using upmap. I just saw this in the
code and was wondering about its usage.
For the OSDs, we do not have any OSD weight < 1.00 until one OSD reaches
the 85% near full ratio. Before I reweight the
OSD, our mgr/balancer/upmap_max_deviation is set to 5 and the PG
dis
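A minimal sketch of the knob mentioned above, in case a tighter PG spread is wanted
(the value 1 is only an example; the default is 5):
ceph config set mgr mgr/balancer/upmap_max_deviation 1
ceph balancer status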
Hello all,
Here are the minutes from today's meeting.
- New time for CDM APAC to increase participation
- 9.30 - 11.30 pm PT seems like the most popular based on
https://doodle.com/meeting/participate/id/aM9XGZ3a/vote
- One more week for more feedback; please ask more APAC folk
Hi,
On 11/7/23 12:35, necoe0...@gmail.com wrote:
Three Ceph clusters are running and the 3rd cluster gave an error; it is currently
offline. I want to get all the remaining data from the 2 clusters. Instead of fixing
Ceph, I just want to save the data. How can I access this data and connect to
the pool
dashboard approved, the test failure is a known Cypress issue which is not a
blocker.
Regards,
Nizam
On Wed, Nov 8, 2023, 21:41 Yuri Weinstein wrote:
> We merged 3 PRs and rebuilt "reef-release" (Build 2)
>
> Seeking approvals/reviews for:
>
> smoke - Laura, Radek 2 jobs failed in "objectstore/bl
Dear cephers,
we have a cephfs volume that will be mounted by many clients with
concurrent read/write capability. From time to time, perhaps when concurrency
goes as high as 100 clients, access becomes too slow
to be useful at all.
The cluster has multiple active MDS daemons. All disks a
We merged 3 PRs and rebuilt "reef-release" (Build 2)
Seeking approvals/reviews for:
smoke - Laura, Radek 2 jobs failed in "objectstore/bluestore" tests
(see Build 2)
rados - Neha, Radek, Travis, Ernesto, Adam King
rgw - Casey reapprove on Build 2
fs - Venky, approve on Build 2
orch - Adam King
up
Take hints from this: "544 pgs not deep-scrubbed in time". Your OSDs are
unable to scrub their data in time, likely because they cannot cope with
the client + scrubbing I/O. I.e. there's too much data on too few and too
slow spindles.
You can play with osd_deep_scrub_interval and increase the scru
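A minimal sketch of that kind of tuning (the values are examples only, not
recommendations for this particular cluster):
ceph config set osd osd_deep_scrub_interval 1209600   # two weeks instead of one
ceph config set osd osd_max_scrubs 2                  # more concurrent scrubs per OSD
ceph osd unset nodeep-scrub                           # once the flag is meant to be cleared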
Hi,
this directory is very busy:
ceph tell mds.* dirfrag ls
/volumes/csi/csi-vol-3a69d51a-f3cd-11ed-b738-964ec15fdba7/
while running it, all mds output:
[
{
"value": 0,
"bits": 0,
"str": "0/0"
}
]
Thank you,
Ben
On Wed, Nov 8, 2023 at 21:58, Patrick Donnelly wrote:
>
> On Mo
I configured a password for Grafana because I want to use Loki. I used the spec
parameter initial_admin_password and this works fine for a staging environment,
where I never tried to use Grafana with a password for Loki.
Using the username admin with the configured password gives a credential
Hello Casey,
Thank you so much, the steps you provided worked. I'll follow up on the
tracker to provide further information.
Regards,
Jayanth
On Wed, Nov 8, 2023 at 8:41 PM Jayanth Reddy
wrote:
> Hello Casey,
>
> Thank you so much for the response. I'm applying these right now and let
> you kn
I configured a password for Grafana because I want to use Loki. I used the spec parameter initial_admin_password and this works fine for a staging environment, where I never tried to use Grafana with a password for Loki.
Using the username admin with the configur
Hello Casey,
Our production buckets are impacted by this issue. We have downgraded the Ceph
version from 17.2.7 to 17.2.6 but we are still getting the "bucket policy parsing"
error while accessing the buckets. rgw_policy_reject_invalid_principals is not
present in 17.2.6 as a configurable parameter.
I'd like to discuss the questions I should ask to understand the values under
the 'attrs' of an object in the following JSON data structure and evaluate the
health of these objects:
I have a sample JSON output; can you comment on the object state here?
{ "name": "$image.name", "size": 0, "tag":
Three Ceph clusters are running and the 3rd cluster gave an error; it is currently
offline. I want to get all the remaining data from the 2 clusters. Instead of fixing
Ceph, I just want to save the data. How can I access this data and connect to
the pool? Can you help me? Clusters 1 and 2 are working. I wa
Dear Ceph user,
I'm wondering how much an increase in PG number would impact the memory
occupancy of the OSD daemons. In my cluster I currently have 512 PGs and I would
like to increase that to 1024 to mitigate some disk occupancy issues, but having
machines with a low amount of memory (down to 24 G
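A minimal sketch of the memory-side check that goes with this (the 3 GiB value is
only an example): the OSD cache autotuner aims at osd_memory_target per daemon, so
lowering it bounds per-OSD usage regardless of PG count, at the cost of cache hits:
ceph config get osd osd_memory_target
ceph config set osd osd_memory_target 3221225472   # ~3 GiB per OSD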
Hello,
We are using a Ceph Pacific (16.2.10) cluster and have enabled the balancer module,
but the usage of some OSDs keeps growing and has reached
mon_osd_nearfull_ratio, which we keep at the default of 85%, and we think the
balancer module should be doing some balancing work.
So I checked our balancer configu
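A minimal sketch of the read-only checks implied here (nothing is changed by these):
ceph balancer status
ceph balancer eval                                     # score of the current distribution
ceph config get mgr mgr/balancer/upmap_max_deviation   # default is 5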
Hi Eugen
Please find the details below
root@meghdootctr1:/var/log/ceph# ceph -s
  cluster:
    id:     c59da971-57d1-43bd-b2b7-865d392412a5
    health: HEALTH_WARN
            nodeep-scrub flag(s) set
            544 pgs not deep-scrubbed in time
  services:
    mon: 3 daemons, quorum meghdootctr1,meghdootctr2,meghdootctr3 (age 5d)
    mgr: m
Hello Casey,
Thank you so much for the response. I'm applying these right now and let
you know the results.
Regards,
Jayanth
On Wed, Nov 8, 2023 at 8:15 PM Casey Bodley wrote:
> i've opened https://tracker.ceph.com/issues/63485 to allow
> admin/system users to override policy parsing errors li
Hi,
On Tue, 7 Nov 2023, Harry G Coin wrote:
These repeat for every host, only after upgrading from the previous Quincy release
to 17.2.7. As a result, the cluster is always in a warning state and never indicates
healthy.
I'm hitting this error, too.
"/usr/lib/python3.6/site-packages/ceph_volume/util/device.py",
We've had some issues with Exos drives dropping out of our sas controllers (LSI
SAS3008 PCI-Express Fusion-MPT SAS-3) intermittently which we believe is due to
this. Upgrading the drive firmware largely solved it for us so we never ended
up messing about with the power settings.
I've opened https://tracker.ceph.com/issues/63485 to allow
admin/system users to override policy parsing errors like this. I'm
not sure yet where this parsing regression was introduced. In reef,
https://github.com/ceph/ceph/pull/49395 added better error messages
here, along with a rgw_policy_reject
Yuri, we need to add this issue as a blocker for 18.2.1. We discovered this
issue after the release of 17.2.7, and don't want to hit the same blocker
in 18.2.1 where some types of OSDs are failing to be created in new
clusters, or failing to start in upgraded clusters.
https://tracker.ceph.com/issu
Hello Wesley,
Thank you for the response. I tried the same but ended up with 403.
Regards,
Jayanth
On Wed, Nov 8, 2023 at 7:34 PM Wesley Dillingham
wrote:
> Jaynath:
>
> Just to be clear with the "--admin" user's key's you have attempted to
> delete the bucket policy using the following method:
Jaynath:
Just to be clear: with the "--admin" user's keys, you have attempted to
delete the bucket policy using the following method:
https://docs.aws.amazon.com/cli/latest/reference/s3api/delete-bucket-policy.html
This is what worked for me (on a 16.2.14 cluster). I didn't attempt to
interact wit
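A minimal sketch of that call against RGW (the endpoint and bucket name are
placeholders), using the admin user's access and secret keys:
aws --endpoint-url https://rgw.example.com \
    s3api delete-bucket-policy --bucket problem-bucket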
On Mon, Nov 6, 2023 at 4:56 AM Ben wrote:
> Hi,
> I used this but they all return "directory inode not in cache"
> ceph tell mds.* dirfrag ls path
>
> I would like to pin some subdirs to a rank after dynamic subtree
> partitioning. Before that, I need to know exactly where they are
>
If the dirfrag
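For reference, a minimal sketch of explicit subtree pinning as described (the mount
point, path and rank are placeholders):
setfattr -n ceph.dir.pin -v 1 /mnt/cephfs/some/subdir   # pin the subtree to MDS rank 1
getfattr -n ceph.dir.pin /mnt/cephfs/some/subdir        # verify the pin
ceph tell mds.0 get subtrees                            # list subtrees known to rank 0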
Hello Casey,
We're totally stuck at this point and none of the options seem to work.
Please let us know if there is something in metadata or index to remove
those applied bucket policies. We downgraded to v17.2.6 and encountering
the same.
Regards,
Jayanth
On Wed, Nov 8, 2023 at 7:14 AM Jayanth
So the next step is to place the pools on the right rule:
ceph osd pool set db-pool crush_rule fc-r02-ssd
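In case the fc-r02-ssd rule does not yet exist as a device-class rule, a minimal
sketch of creating one first (the failure domain "host" is an assumption):
ceph osd crush rule create-replicated fc-r02-ssd default host ssd
ceph osd pool set db-pool crush_rule fc-r02-ssd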
On Wed, Nov 8, 2023 at 12:04, Denny Fuchs wrote:
> hi,
>
> I forgot to write the command; I used:
>
> =
> ceph osd crush move fc-r02-ceph-osd-01 root=default
> ceph osd crush
Hi,
I also overlooked this:
==
root@fc-r02-ceph-osd-01:[~]: ceph -s
  cluster:
    id:     cfca8c93-f3be-4b86-b9cb-8da095ca2c26
    health: HEALTH_OK
  services:
    mon: 5 daemons, quorum
    fc-r02-ceph-osd-01,fc-r02-ceph-osd-02,fc-r02-ceph-osd-03,fc-r02-ceph-osd-05
hi,
I forgot to write the command; I used:
=
ceph osd crush move fc-r02-ceph-osd-01 root=default
ceph osd crush move fc-r02-ceph-osd-01 root=default
...
=
and I've also found this param:
===
root@fc-r02-ceph-osd-01:[~]: ceph osd crush tree --show-shadow
ID CLASS WEIGHT
I probably answered too quickly, if the migration is complete and there
are no incidents.
Are the PGs active+clean?
Regards,
*David CASIER*
On Wed, Nov 8, 2023 at 11:50, Dav
Hi,
It seems to me that before removing buckets from the crushmap, it is
necessary to do the migration first.
I think you should restore the initial crushmap by adding the default root
next to it and only then do the migration.
There should be some backfill (probably a lot).
Hi Yuri,
On Wed, Nov 8, 2023 at 2:32 AM Yuri Weinstein wrote:
>
> 3 PRs above mentioned were merged and I am returning some tests:
> https://pulpito.ceph.com/?sha1=55e3239498650453ff76a9b06a37f1a6f488c8fd
>
> Still seeing approvals.
> smoke - Laura, Radek, Prashant, Venky in progress
> rados - Ne
Hello,
we upgraded to Quincy and tried to remove an obsolete part:
in the beginning of Ceph there were no device classes, so we created
rules to split the OSDs into HDD and SSD in one of our datacenters.
https://www.sebastien-han.fr/blog/2014/08/25/ceph-mix-sata-and-ssd-within-the-same-box/
S