Hmm, doesn't seem smooth :(
How about copying the bucket with s3 browser :D?
So the actual migration steps are the steps that went smoothly, right?
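If you do go the client-side copy route, here is a hedged sketch with rclone (the remote names and bucket are made up; the two remotes are assumed to be configured via `rclone config` to point at the source and destination RGW endpoints):

```shell
# Copy all objects of a bucket from one RGW endpoint to another.
# "oldceph" and "newceph" are example rclone remotes; "mybucket" is an
# example bucket name.
rclone sync oldceph:mybucket newceph:mybucket --progress
```

Any S3-capable client (s3cmd, the AWS CLI, the "S3 Browser" GUI) can do the equivalent copy.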
> On 2020. Dec 22., at 20:00, Kalle Happonen wrote:
>
In other words, I want to figure out when the measurement of "total_time"
starts and when it ends
opengers wrote on Thu, Dec 24, 2020 at 11:14 AM:
> Hello everyone, I enabled the rgw ops log by setting "rgw_enable_ops_log =
> true". There is a "total_time" field in the rgw ops log
>
> But I want to figure out whether "total_
Hello everyone, I enabled the rgw ops log by setting "rgw_enable_ops_log =
true". There is a "total_time" field in the rgw ops log,
but I want to figure out whether "total_time" includes the time it takes
rgw to return the response to the client.
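Assuming the ops log is written as JSON records (it can also go to a socket), a minimal sketch of pulling "total_time" out of one record; the field names follow the ops log output, but the sample values here are invented for illustration:

```python
import json

# Hypothetical ops-log record; values are made up for illustration.
sample = '''{
  "bucket": "mybucket",
  "time": "2020-12-24 03:14:07.000000Z",
  "remote_addr": "10.0.0.5",
  "operation": "get_obj",
  "http_status": "200",
  "total_time": 12
}'''

entry = json.loads(sample)
# Whether total_time stops before or after the response is sent back to
# the client is exactly the question raised in this thread.
print(entry["operation"], entry["total_time"])
```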
Hi Patrick,
Any updates? Looking forward to your reply :D
On Thu, Dec 17, 2020 at 11:39 AM Patrick Donnelly wrote:
>
> On Wed, Dec 16, 2020 at 5:46 PM Alex Taylor wrote:
> >
> > Hi Cephers,
> >
> > I'm using VSCode remote development with a docker server. It worked OK
> > but fails to start th
On 12/19/20 7:37 PM, Patrick Donnelly wrote:
Well, that's interesting. Unfortunately, I don't have an explanation.
You upgraded the MDS too, right? The only scenario I can think of that
could cause this is that the MDSes were never restarted/upgraded to
nautilus.
Yes, the MDSes were upgraded and rest
I have enabled bluefs_buffered_io on some of my OSD nodes and disabled it on
others, based on each server node's situation, and I'm experiencing this issue
on both!
How can manual RocksDB compaction help?
Can you please share with me the topic names for this issue on the mailing
list?
On Wed, D
Thank you for the update; this is excellent news. Hopefully we'll see the
fixed package in the next point-release container build for Ceph; this has
been a big stumbling block for our deployments and for countless others
we've seen reporting it. We greatly appreciate your diligent work and urgency
su
Not entirely sure about this...
but after a bunch of cluster teardowns and rebuilds, I got rbds mapped.
It seems to me like the biggest difference is that until recently I was
sticking to the web GUI to create the pools
(and I did tick the enable-application = rbd checkbox!!!),
but this last time, I went back t
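For comparison, a sketch of doing the same from the CLI instead of the web GUI (pool and image names are examples; the PG count is illustrative):

```shell
ceph osd pool create rbdpool 32
ceph osd pool application enable rbdpool rbd   # what the GUI checkbox does
rbd create rbdpool/testimg --size 1G
sudo rbd map rbdpool/testimg
```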
On Tue, Dec 22, 2020 at 12:03 PM Ken Dreyer wrote:
> There are a few more small cleanups I need to land in order to
> reconcile the epel8 and master branches.
The maintainers merged the cleanups. Here's the next PR to sync the
remaining epel8 diff into master:
https://src.fedoraproject.org/rpms/p
I fixed it by starting again, deleting everything, and using the
"--skip-mon-network" option to cephadm bootstrap; I think the config was not
finished before.
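For reference, a sketch of that bootstrap invocation (the IP and subnet are examples; `--skip-mon-network` tells cephadm not to derive `public_network` from the mon IP, so it can be set explicitly afterwards):

```shell
cephadm bootstrap --mon-ip 192.168.0.1 --skip-mon-network
# Set the mon network explicitly once the cluster is up:
ceph config set mon public_network 192.168.0.0/24
```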
The logging is very verbose by default. I have reduced most of it but can’t reduce
cluster 2020-12-23T12:26:48.142993+ mgr.host1.xsqlhs
Hi Seena,
one frequent cause of such a timeout is slow RocksDB
operation, which in turn might be caused by bluefs_buffered_io set to
false and/or DB "fragmentation" after massive data removal.
Hence the potential workarounds are adjusting bluefs_buffered_io and
manual RocksDB comp
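A hedged sketch of those two workarounds from the CLI (osd.0 is an example id; `ceph tell osd.N compact` may not exist on older releases, in which case the admin-socket form applies):

```shell
# Turn buffered BlueFS reads back on (picked up at OSD restart):
ceph config set osd bluefs_buffered_io true

# Trigger a manual RocksDB compaction on one OSD via the admin socket...
ceph daemon osd.0 compact
# ...or remotely:
ceph tell osd.0 compact
```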
Hi Mika,
Could you see if making the `test` user a system user works?
The user that the Dashboard uses to communicate with RGW needs to be a system
user.
The document suggests a `--system` flag should be provided when creating the
user:
https://docs.ceph.com/en/latest/mgr/dashboard/#enabling-th
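A hedged sketch of the commands involved (the uid `test` is from this thread; the key file names are examples, and on some releases the dashboard commands take the key directly instead of `-i`):

```shell
# Create the dashboard's RGW user as a system user...
radosgw-admin user create --uid=test --display-name="test" --system
# ...or promote an existing user:
radosgw-admin user modify --uid=test --system

# Point the dashboard at that user's keys:
ceph dashboard set-rgw-api-access-key -i access_key.txt
ceph dashboard set-rgw-api-secret-key -i secret_key.txt
```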
Hi,
I am trying to set up a new cluster with cephadm using a Docker backend.
The initial bootstrap did not finish cleanly; it errored out waiting for
the mon IP. I used the command:
cephadm bootstrap --mon-ip 192.168.0.1
With 192.168.0.1 being the ip address of this first host.
I tried the
Hello,
TL;DR How can I recreate the device_health_metrics pool?
I'm experimenting with Ceph Octopus v15.2.8 in a 3-node cluster under
Proxmox 6.3. After initializing Ceph the usual way, a
"device_health_metrics" pool is created as soon as I create the first
manager. That pool has just 1 PG but
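One hedged way to recreate the pool manually (the pool name is fixed by the devicehealth module; the application tag `mgr_devicehealth` is what I would expect Octopus to use, so verify with `ceph osd pool ls detail`):

```shell
ceph osd pool create device_health_metrics 1
ceph osd pool application enable device_health_metrics mgr_devicehealth
```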
Hi,
All my OSD nodes in the SSD tier randomly hit heartbeat_map timeouts,
and I can't find out why!
7ff2ed3f2700 1 heartbeat_map is_healthy 'OSD::osd_op_tp thread
0x7ff2c8943700' had timed out after 15
It occurs many times in a day and causes my cluster to be down.
Is there any way to find
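Some starting points for digging into it (osd.0 is an example id; `dump_historic_slow_ops` should be available on Nautilus and later):

```shell
# Recent operations that exceeded the complaint threshold:
ceph daemon osd.0 dump_historic_slow_ops
# BlueFS / RocksDB performance counters:
ceph daemon osd.0 perf dump | grep -A3 bluefs
# When exactly the thread timed out:
grep heartbeat_map /var/log/ceph/ceph-osd.0.log
```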
krbd or /ceph_fs_mount_point already use one of the best caches: the kernel
page cache.
k
On 23.12.2020 14:08, huxia...@horebdata.cn wrote:
Dear ceph folks,
rbd_cache can be set up as a read/write cache for librbd, widely used with
openstack cinder. Does krbd have a similar cache control mecha
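A rough way to see the page cache at work on a mapped krbd device (/dev/rbd0 and the sizes are examples): the second read should come largely from cache, while `iflag=direct` bypasses it.

```shell
sudo dd if=/dev/rbd0 of=/dev/null bs=4M count=256                # cold read
sudo dd if=/dev/rbd0 of=/dev/null bs=4M count=256                # cached, much faster
sudo dd if=/dev/rbd0 of=/dev/null bs=4M count=256 iflag=direct   # bypasses page cache
```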
Dear ceph folks,
rbd_cache can be set up as a read/write cache for librbd, widely used with
OpenStack Cinder. Does krbd have a similar cache control mechanism? I
am using krbd for iSCSI and NFS backend storage, and wonder whether a cache
setting exists for krbd.
thanks in advance,
Sa
Hi,
And that fixed the problem :)
Huge thanks,
-Mika
On Wed, Dec 23, 2020 at 12:05 PM Kiefer Chang wrote:
> Hi Mika,
>
> Could you see if making the `test` user a system user works?
>
> The user that the Dashboard uses to communicate with RGW needs to be a
> system user.
> The documen