On 10/29/19 10:56 PM, Frank R wrote:
oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s
Maybe the zone period is not the same on both sides?
k
Hi Kári,
what about this:
health: HEALTH_WARN
854 pgs not deep-scrubbed in time
maybe you should run
$ ceph --cluster first pg scrub XX.YY
or
$ ceph --cluster first pg deep-scrub XX.YY
on all the PGs.
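For example, a minimal sketch for looping over every PG (this assumes the plain-text layout of `ceph pg dump pgs_brief`, where the first column of each PG row is the PG id):
$ ceph --cluster first pg dump pgs_brief | awk '$1 ~ /^[0-9]+\.[0-9a-f]+$/ {print $1}' | \
    while read pg; do ceph --cluster first pg deep-scrub "$pg"; done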
Tue, 29 Oct 2019 22:43:28 +
Kári Bertilsson ==> Nathan Fish :
> I am encounter
On 10/29/19 3:45 PM, tuan dung wrote:
I have a cluster running Ceph object storage with version 14.2.1. I want to
create two pools for bucket data, for security purposes:
+ one bucket-data pool for public client access from internet (name
/zone1.rgw.buckets.data-pub) /
+ one bucket-data pool for private cl
On 10/29/19 3:50 PM, tuan dung wrote:
I want to log the client IP in the rados gateway log, to check information
about load balancing and other things. I am using a LB in front of the rados
gateway nodes; what needs to be configured in the rados gateway?
You need to set up the `rgw_log_http_headers` option.
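A rough sketch of what that looks like in ceph.conf (the section name is hypothetical; adjust it to your own rgw instance):

[client.rgw.gateway-node1]
# comma-separated list of HTTP headers to record with each rgw ops-log entry,
# e.g. the client IP forwarded by the load balancer
rgw_log_http_headers = http_x_forwarded_for

Have the load balancer set X-Forwarded-For; the header value should then show up in the rgw ops log for each request.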
On 10/29/19 1:40 AM, Mac Wynkoop wrote:
So, I'm in the process of trying to migrate our rgw.buckets.data pool
from a replicated rule pool to an erasure coded pool. I've gotten the
EC pool set up, good EC profile and crush ruleset, pool created
successfully, but when I go to "rados cppool xxx.rg
I have 104 PGs staying in an unknown state for a long time.
[root@node-1 /]# ceph -s
cluster:
id: 653c6c1a-607e-4a62-bb92-dfe2f0d7afb6
health: HEALTH_ERR
1 osds down
Reduced data availability: 104 pgs inactive
24 slow requests are blocked > 32 sec. Implic
Hi,
Maybe the mon service has a problem; please check your mon service.
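For example (assuming systemd-managed daemons; the host name is a placeholder):
$ systemctl status ceph-mon@node-1
$ ceph mon stat
$ ceph quorum_status --format json-pretty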
Br,
--
Dương Tuấn Dũng
Email: dungdt.aicgr...@gmail.com
Phone: 0986153686
On Tue, Oct 29, 2019 at 10:45 PM Thomas Schneider <74cmo...@gmail.com>
wrote:
> Hi,
>
> in my unhealthy cluster I
I am encountering the dirlist hanging issue on multiple clients and none of
them are Ubuntu.
Debian buster running kernel 4.19.0-2-amd64. This one was working fine
until after Ceph was upgraded to Nautilus.
Proxmox running kernels 5.0.21-1-pve and 5.0.18-1-pve
On Tue, Oct 29, 2019 at 9:04 PM Nath
Ubuntu's 4.15.0-66 has this bug, yes. -65 is safe and -67 will have the fix.
On Tue, Oct 29, 2019 at 4:54 PM Patrick Donnelly wrote:
>
> On Mon, Oct 28, 2019 at 11:33 PM Lars Täuber wrote:
> >
> > Hi!
> >
> > What kind of client (kernel vs. FUSE) do you use?
> > I experience a lot of the followi
On Mon, Oct 28, 2019 at 11:33 PM Lars Täuber wrote:
>
> Hi!
>
> What kind of client (kernel vs. FUSE) do you use?
> I experience a lot of the following problems with the most recent ubuntu
> 18.04.3 kernel 4.15.0-66-generic :
> kernel: [260144.644232] cache_from_obj: Wrong slab cache. inode_cache
Bucket deletion is somewhat slow; several days to delete a large
bucket is not unusual.
BTW: you can speed it up with --max-concurrent-ios=XXX (the default is 32;
the Nautilus documentation for the switch is wrong, it does work for
deletions; the docs are fixed in master).
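For example, a sketch (the bucket name and the value are placeholders):
$ radosgw-admin bucket rm --bucket=big-bucket --purge-objects --max-concurrent-ios=128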
Paul
--
Paul Emmerich
Lookin
On Tue, Oct 29, 2019 at 7:26 PM Bryan Stillwell wrote:
>
> Thanks Casey,
>
> If I'm understanding this correctly the only way to turn on RGW compression
> is to do it essentially cluster wide in Luminous since all our existing
> buckets use the same placement rule? That's not going to work for
As an update,
it continues...
2019-10-29 19:36:48.787 7fc5ae22c700 0 abort_bucket_multiparts WARNING :
aborted 2437000 incomplete multipart uploads
How can I get debug output for the uploads?
Regards
From: EDH - Manuel Rios Fernandez
Sent: Monday, October 28, 2019 14:18
To: ceph-u
Thanks Casey,
If I'm understanding this correctly the only way to turn on RGW compression is
to do it essentially cluster wide in Luminous since all our existing buckets
use the same placement rule? That's not going to work for what I want to do
since it's a shared cluster and other buckets ne
Hi Bryan,
Luminous docs about pool placement and compression can be found at
https://docs.ceph.com/docs/luminous/radosgw/placement/. You're correct
that a bucket's placement target is set on creation and can't be
changed. But the placement target itself can be modified to enable
compression a
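Roughly, and hedged (the zone and placement-id shown are the defaults; adjust for your setup, and restart the radosgw instances afterwards):
$ radosgw-admin zone placement modify --rgw-zone=default \
      --placement-id=default-placement --compression=zlib
$ radosgw-admin zone placement list --rgw-zone=default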
Florian,
Thank you for your detailed reply. I was right in thinking that the 223k+
usage log entries were causing my large omap object warning. You've also
confirmed my suspicions that osd_deep_scrub_large_omap_object_key_threshold
was changed between Ceph versions. I ended up trimming all of the
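For reference, a sketch of the kind of trim command involved (the date is a placeholder; `radosgw-admin usage show` can confirm what remains afterwards):
$ radosgw-admin usage trim --end-date=2019-10-01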
I have checked the network already.
There is no indication of a problem with the network: there are no
dropped packets, and a load test with iperf shows good performance.
Am 29.10.2019 um 17:44 schrieb Bryan Stillwell:
> I would look into a potential network problem. Check for errors on bot
I would look into a potential network problem. Check for errors on both the
server side and on the switch side.
Otherwise I'm not really sure what's going on. Someone else will have to jump
into the conversation.
Bryan
On Oct 29, 2019, at 10:38 AM, Thomas Schneider <74cmo...@gmail.com> wrote
Thanks.
2 of 4 MGR nodes are sick.
I have stopped MGR services on both nodes.
When I start the service again on node A, I get this in its log:
root@ld5508:~# tail -f /var/log/ceph/ceph-mgr.ld5508.log
2019-10-29 17:32:02.024 7fe20e881700 0 --1- 10.97.206.96:0/201758478 >>
v1:10.97.206.96:7055/179
On Oct 29, 2019, at 9:44 AM, Thomas Schneider <74cmo...@gmail.com> wrote:
> in my unhealthy cluster I cannot run several ceph osd command because
> they hang, e.g.
> ceph osd df
> ceph osd pg dump
>
> Also, ceph balancer status hangs.
>
> How can I fix this issue?
Check the status of your ceph-m
I'm wondering if it's possible to enable compression on existing RGW buckets?
The cluster is running Luminous 12.2.12 with FileStore as the backend (no
BlueStore compression then).
We have a cluster that recently started to rapidly fill up with compressible
content (qcow2 images) and I would l
Hi Konstantin,
Thanks very much for your help. Things seem to be running smoothly now.
One remaining issue on the secondary side is that I see:
-
oldest incremental change not applied: 2019-10-22 00:24:09.0.720448s
-
Replication appears to be working fine when I upload files or create
b
Hi,
in my unhealthy cluster I cannot run several ceph osd command because
they hang, e.g.
ceph osd df
ceph osd pg dump
Also, ceph balancer status hangs.
How can I fix this issue?
THX
I jumped the gun too quickly; dirlisting is still hanging with no entries
in `ceph osd blacklist ls`.
But when I restart the active MDS and the standby goes active, dirlisting
finishes and I get 2 entries in the blacklist with the IP address of the
previously active MDS.
On Tue, Oct 29, 2019 at 1:03 P
I just set this up as well and had the same issue with s3cmd ws-create not
working. Adding "rgw_enable_static_website = true" to the s3 api gateways
solved it. This does appear to be the correct solution. The s3website api
gateways are serving their error messages in html and the s3 api gateways
ar
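For reference, a minimal sketch of the ceph.conf snippet on the S3 API gateways (the section name is hypothetical):

[client.rgw.s3-gateway1]
rgw_enable_static_website = true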
I am noticing I have many entries in `ceph osd blacklist ls`, and dirlisting
works again after I removed all entries.
What can cause this, and is there any way to disable blacklisting?
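For reference, a sketch of listing and clearing entries (the address is a placeholder taken from the ls output):
$ ceph osd blacklist ls
$ ceph osd blacklist rm 10.0.0.15:0/3586311234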
On Tue, Oct 29, 2019 at 11:56 AM Kári Bertilsson
wrote:
> The file system was created on luminous and the proble
The file system was created on Luminous and the problems started after
upgrading from Luminous to Nautilus.
All CephFS configuration should be pretty much default, except that I enabled
snapshots, which were disabled by default on Luminous.
On Tue, Oct 29, 2019 at 11:48 AM Kári Bertilsson
wrote:
> All c
All clients are using the kernel client on Proxmox kernel
version 5.0.21-3-pve.
The MDS logs are not showing anything interesting and have very little in
them except for the restarts; maybe I need to increase the debug level?
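A hedged sketch of raising the MDS debug level (the MDS name is a placeholder; level 20 is very verbose, so turn it back down afterwards):
$ ceph tell mds.mds1 injectargs '--debug_mds 20 --debug_ms 1'
$ ceph tell mds.mds1 injectargs '--debug_mds 1 --debug_ms 0'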
On Tue, Oct 29, 2019 at 6:33 AM Lars Täuber wrote:
> Hi!
>
> What kind o
Hi all,
I want to log the client IP in the rados gateway log, to check information about
load balancing and other things. I am using a LB in front of the rados gateway
nodes; what needs to be configured in the rados gateway?
thank you very much.
Br,
--
Dương Tuấn Dũng
hi ceph-users,
I have a cluster running Ceph object storage with version 14.2.1. I want to create
two pools for bucket data, for security purposes:
+ one bucket-data pool for public client access from internet (name
*zone1.rgw.buckets.data-pub) *
+ one bucket-data pool for private client access from local net
Hi David,
On 28/10/2019 20:44, David Monschein wrote:
> Hi All,
>
> Running an object storage cluster, originally deployed with Nautilus
> 14.2.1 and now running 14.2.4.
>
> Last week I was alerted to a new warning from my object storage cluster:
>
> [root@ceph1 ~]# ceph health detail
> HEALTH_