Run reshard instances rm, and reshard your bucket by hand or leave the dynamic
resharding process to do this work.
k
Sent from my iPhone
> On 13 Apr 2021, at 19:33, dhils...@performair.com wrote:
>
> All;
>
> We run 2 Nautilus clusters, with RADOSGW replication (14.2.11 --> 14.2.16).
>
> Initial
Hi,
Actually vfs_ceph should perform better, but this method will not work with
other VFS modules, like recycle (recycle bin) or audit, in the same stack.
k
Sent from my iPhone
> On 14 Apr 2021, at 09:56, Martin Palma wrote:
>
> Hello,
>
> what is the currently preferred method, in terms of stability and
>
Hello Konstantin,
In my experience the CephFS kernel driver (Ubuntu 20.04) was always
faster and the CPU load was much lower compared to vfs_ceph.
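For reference, the two setups differ roughly like this in smb.conf (just a
sketch; share names, paths and the cephx user are placeholders):

  # export of a locally (kernel) mounted CephFS path
  [tank-kernel]
      path = /mnt/cephfs/tank
      read only = no

  # export through the vfs_ceph module, no local mount needed
  [tank-vfs]
      vfs objects = ceph
      path = /tank
      kernel share modes = no
      ceph:config_file = /etc/ceph/ceph.conf
      ceph:user_id = samba
      read only = no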
Alex
Am Mittwoch, dem 14.04.2021 um 10:19 +0300 schrieb Konstantin Shalygin:
> Hi,
>
> Actually vfs_ceph should perform better, but this method wil
On Wed, 2021-04-14 at 08:55 +0200, Martin Palma wrote:
> Hello,
>
> what is the currently preferred method, in terms of stability and
> performance, for exporting a CephFS directory with Samba?
>
> - locally mount the CephFS directory and export it via Samba?
> - using the "vfs_ceph" module of Samb
This week's meeting will focus on the ongoing rewrite of the cephadm
documentation and the upcoming Google Season of Docs project.
Meeting: https://bluejeans.com/908675367
Etherpad: https://pad.ceph.com/p/Ceph_Documentation
Just working this through, how does one identify the OIDs within a PG,
without list_unfound?
I've been poking around, but can't seem to find a command that outputs
the necessary OIDs. I tried a handful of cephfs commands, but they of
course become stuck, and ceph pg commands haven't revealed the O
In addition to my last note, I should have mentioned that I am exploring
options to delete the damaged data, in hopes of preserving what I can before
moving to simply deleting all data on that pool.
When trying to simply empty the PGs, it seems like the PGs don't exist.
In attempting to follow:
https
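For what it's worth, one offline way to enumerate the objects in a PG (assuming
a non-containerized OSD that can be stopped) is ceph-objectstore-tool, e.g.:

  systemctl stop ceph-osd@<id>
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-<id> --pgid <pgid> --op list

This lists the object IDs stored in that PG on that OSD.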
cephadm bootstrap --skip-monitoring-stack
should do the trick. See man cephadm.
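A full invocation would look something like this (the monitor IP is a
placeholder):

  cephadm bootstrap --mon-ip 192.0.2.10 --skip-monitoring-stack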
On Tue, Apr 13, 2021 at 6:05 PM mabi wrote:
> Hello,
>
> When bootstrapping a new ceph Octopus cluster with "cephadm bootstrap",
> how can I tell the cephadm bootstrap NOT to install the ceph-grafana
> container?
>
>
Hi,
I'm currently testing some disaster scenarios.
When removing one osd/monitor host, I see that a new quorum is built
without the missing host. The missing host is listed in the dashboard
under Not In Quorum, so probably everything is as expected.
After restarting the host, I see that the osd
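For reference, quorum membership can also be checked from the CLI, e.g.:

  ceph quorum_status --format json-pretty
  ceph mon stat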
Konstantin;
Dynamic resharding is disabled in multisite environments.
I believe you mean radosgw-admin reshard stale-instances rm.
Documentation suggests this shouldn't be run in a multisite environment. Does
anyone know the reason for this?
Is it, in fact, safe, even in a multisite environme
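For reference, the stale-instance commands under discussion are roughly the
following (double-check against radosgw-admin help before running anything):

  radosgw-admin reshard stale-instances list
  radosgw-admin reshard stale-instances rm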
On Wed, Apr 14, 2021 at 11:44 AM wrote:
>
> Konstantin;
>
> Dynamic resharding is disabled in multisite environments.
>
> I believe you mean radosgw-admin reshard stale-instances rm.
>
> Documentation suggests this shouldn't be run in a multisite environment.
> Does anyone know the reason for th
Hello,
The cluster is 3 Debian 10 nodes. I started a cephadm upgrade on a healthy
15.2.10 cluster. The managers were upgraded fine, then the first monitor went
down for its upgrade and never came back. Looking at the unit files, the
container fails to run because of an error:
root@host1:/var/lib/ceph/97d9f40e-9d33-
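For a failure like this, the container's startup error can usually be pulled
out with something along these lines (daemon name and fsid are placeholders):

  cephadm logs --name mon.host1
  journalctl -u ceph-<fsid>@mon.host1.service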
Casey;
That makes sense, and I appreciate the explanation.
If I were to shut down all uses of RGW, and wait for replication to catch up,
would this then address most known issues with running this command in a
multi-site environment? Could I take the RADOSGW daemons offline as an added precaution?
Thank
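For reference, whether replication has caught up can be checked on each zone
with something like:

  radosgw-admin sync status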
Hi Igor,
After updating to 14.2.19 and then moving some PGs around we have a
few warnings related to the new efficient PG removal code, e.g. [1].
Is that something to worry about?
Best Regards,
Dan
[1]
/var/log/ceph/ceph-osd.792.log:2021-04-14 20:34:34.353 7fb2439d4700 0
osd.792 pg_epoch: 409
Hi everyone,
In June 2021, we're hosting a month of Ceph presentations, lightning
talks, and unconference sessions such as BOFs. There is no
registration or cost to attend this event.
The CFP is now open until May 12th.
https://ceph.io/events/ceph-month-june-2021/cfp
Speakers will receive confi
Thanks for the pointer Dave,
in my case, though, the problem proved to be the old Docker version (18)
provided by the OS repos. Installing the latest docker-ce from docker.com
resolved the problem. It would be nice, though, if the host were checked for
compatibility before starting an upgrade.
On 14.4.2021 г. 13:1
Hi, every OSD on an SSD that I have upgraded from 15.2.9 -> 15.2.10 logs
errors like the ones below. The OSDs on HDD or NVMe don't. But they
restart OK and a deep-scrub of the entire pool finishes OK. Could it be the
same bug?
2021-04-14T00:29:27.740+0200 7f364750d700 3 rocksdb:
[table/block_based
Radoslav,
I ran into the same. For Debian 10 - recent updates - you have to add
'cgroup_enable=memory swapaccount=1' to the kernel command line
(/etc/default/grub). The reference I found said that Debian decided to
disable this by default and make us turn it on if we want to run containers.
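Concretely, the change amounts to roughly this (verify the exact GRUB_CMDLINE
variable in use on your systems):

  # /etc/default/grub
  GRUB_CMDLINE_LINUX_DEFAULT="quiet cgroup_enable=memory swapaccount=1"

  # then regenerate the grub config and reboot
  update-grub
  reboot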
-Da
Hi Dan,
I've seen that once before and haven't thoroughly investigated yet, but I
think the new PG removal code just revealed this "issue". In fact it
had been in the code before the patch.
The warning means that new object(s) (given the object names these are
apparently system objects, don't rem
Hello everyone!
I'm running Nautilus 14.2.16 and I'm using RGW with the Beast frontend.
I see this error log on every SSD OSD which is used for the RGW index.
Can you please tell me what the problem is?
OSD LOG:
cls_rgw.cc:1102: ERROR: read_key_entry()
idx=�1000_matches/xdir/05/21/27260.jpg ret=-2
cls_rg
We saw this warning once in testing
(https://tracker.ceph.com/issues/49900#note-1), but there, the problem
was different, which also led to a crash. That issue has been fixed
but if you can provide osd logs with verbose logging, we might be able
to investigate further.
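Something like this would turn logging up temporarily, using osd.792 from the
log above as the example daemon (revert afterwards):

  ceph config set osd.792 debug_osd 20/20
  # ... reproduce the warning, collect the log, then:
  ceph config rm osd.792 debug_osd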
Neha
On Wed, Apr 14, 2021 a
More information:
I have an over-limit bucket and the error belongs to this bucket.
fill_status=OVER 100%
objects_per_shard: 363472 (I use default 100K per shard)
num_shards: 750
I'm deleting objects from this bucket by absolute path, and I don't use
dynamic bucket resharding due to multisite.
I
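For reference, the shard fill numbers quoted above are the kind of output you
get from the bucket limit check, e.g.:

  radosgw-admin bucket limit check
  radosgw-admin bucket stats --bucket=<name>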
I have the same issue and have joined the club.
Almost every deleted bucket is still there due to multisite. I've also
removed the secondary zone and stopped sync, but these stale instances are
still there.
Before adding a new secondary zone I want to remove them. If you are going to
run anything, please let me know.
a
Thank you for the hint regarding the --skip-monitoring-stack parameter.
Actually I already bootstrapped my cluster without this option, so is there a
way to disable and remove the ceph-grafana part now, or do I need to bootstrap
my cluster again?
‐‐‐ Original Message ‐‐‐
On Wednesday, A
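For what it's worth, the monitoring services that cephadm deploys by default
can usually be removed afterwards via the orchestrator instead of
re-bootstrapping, e.g. (service names assume the cephadm defaults):

  ceph orch rm grafana
  ceph orch rm prometheus
  ceph orch rm alertmanager
  ceph orch rm node-exporter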