Hi,
We've let our Ceph pool (Octopus) get into a bad state, at around 90%
full:
# ceph health
> HEALTH_ERR 1/4 mons down, quorum
> angussyd-kvm01,angussyd-kvm02,angussyd-kvm03; 3 backfillfull osd(s); 1 full
> osd(s); 14 nearfull osd(s); Low space hindering backfill (add storage if
> this doesn'
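If backfill is wedged purely on the full thresholds, one temporary escape
hatch (a sketch, assuming the default ratios; raising full-ratio is risky,
so add capacity or delete data right afterwards) is to lift the ratios and
rebalance:

# ceph osd set-nearfull-ratio 0.90
# ceph osd set-backfillfull-ratio 0.92
# ceph osd set-full-ratio 0.97
# ceph osd reweight-by-utilization

The last command nudges data off the most-utilized OSDs so backfill has
somewhere to go.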
On Mon, Jul 27, 2020 at 08:02:23PM +0200, Mariusz Gronczewski wrote:
> Hi,
>
> I've got a problem on an Octopus (15.2.3, Debian packages) install: the
> S3 bucket index shows a file:
>
> s3cmd ls s3://upvid/255/38355 --recursive
> 2020-07-27 17:48 50584342
>
> s3://upvid/255/38355/juz_nie_
Well, port 6800 is not a monitor port (I just looked it up), so I wouldn't
look there.
Can you use ceph command from another mon ?
Also, maybe the user you run the command as can't access the admin keyring -
as far as I remember that led to infinitely hanging commands on my test cluster
(but was Nautilus, don
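To rule out the keyring theory quickly, the CLI can be pointed at the admin
keyring explicitly and given a timeout so it fails instead of hanging (a
sketch, assuming default paths):

# ceph --connect-timeout 10 -n client.admin \
      --keyring /etc/ceph/ceph.client.admin.keyring -s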
On Mon, Jul 27, 2020 at 3:08 PM Herbert Alexander Faleiros
wrote:
>
> Hi,
>
> On Fri, Jul 24, 2020 at 12:37:38PM -0400, Jason Dillaman wrote:
> > On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros
> > wrote:
> > >
> > > On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote:
Hi,
On Fri, Jul 24, 2020 at 12:37:38PM -0400, Jason Dillaman wrote:
> On Fri, Jul 24, 2020 at 10:45 AM Herbert Alexander Faleiros
> wrote:
> >
> > On Fri, Jul 24, 2020 at 07:28:07PM +0500, Alexander E. Patrakov wrote:
> > > On Fri, Jul 24, 2020 at 6:01 PM Herbert Alexander Faleiros
> > > wrote:
Hi all,
When running containerized Ceph (Nautilus), is anyone else seeing a
constant memory leak in the ceph-mgr pod, with constant ms_handle_reset
errors in the logs for the backup mgr instance?
---
0 client.0 ms_handle_reset on v2:172.29.1.13:6848/1
0 client.0 ms_handle_reset on v2:172.29.1.13:68
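As a stopgap while debugging (a sketch; <mgr-name> is a placeholder for the
leaking instance as shown by the first command), the daemon can be restarted
via a failover, which at least releases the leaked memory:

# ceph mgr stat
# ceph mgr fail <mgr-name>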
Hi,
I've got a problem on an Octopus (15.2.3, Debian packages) install: the
S3 bucket index shows a file:
s3cmd ls s3://upvid/255/38355 --recursive
2020-07-27 17:48  50584342   s3://upvid/255/38355/juz_nie_zyjesz_sezon_2___oficjalny_zwiastun___netflix_mp4
radosgw-admin bi list also shows it
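If the index entry turns out to be stale (hedged, since the rest of the
report is cut off here), the usual first step is a bucket index consistency
check, dry-run first:

# radosgw-admin bucket check --bucket=upvid
# radosgw-admin bucket check --bucket=upvid --check-objects --fix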
Hi Igor,
thanks for your answer. I was thinking about that but, as far as I understood,
hitting this bug actually requires a partial rewrite to happen. However, these
are disk images on storage servers with basically static files, many of which
are very large (15GB). Therefore, I believe, the vast m
Here are all the active ports on mon1 (with the exception of sshd and ntpd):
# netstat -npl
Proto Recv-Q Send-Q Local Address    Foreign Address    State    PID/Program name
tcp        0      0 :3300            0.0.0.0:*          LISTEN   1582/ceph-mon
tcp        0      0 :6789
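On systems where netstat is no longer installed, ss shows the same view:

# ss -tlnp | grep ceph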
Hi Alexei,
just left a comment in the ticket...
Thanks,
Igor
On 7/25/2020 3:31 PM, Aleksei Zakharov wrote:
Hi all,
I wonder if someone else has faced the issue described in the tracker:
https://tracker.ceph.com/issues/45519
We thought that this problem was caused by high OSD fragmentation,
unti
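For anyone wanting to measure fragmentation on their own OSDs, recent
Nautilus and later expose an allocator score via the admin socket (a sketch;
osd.0 is a placeholder, and availability depends on the minor release):

# ceph daemon osd.0 bluestore allocator score block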
Hi,
have you tried to connect to the ports locally with netcat (or telnet)?
Is the process listening? (Something like netstat -4ln, or the current
equivalent thereof.)
Is the old (or new) firewall maybe still running?
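For example (a sketch; <mon-ip> is a placeholder for the real monitor
address, and 3300/6789 are the default msgr v2/v1 monitor ports):

# nc -vz <mon-ip> 3300
# nc -vz <mon-ip> 6789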
On 27.07.20 16:00, Илья Борисович Волошин wrote:
Hello,
I've created an Oc
Hello,
I've created an Octopus 15.2.4 cluster with 3 monitors and 3 OSDs (6 hosts
in total, all ESXi VMs). It lived through a couple of reboots without
problem, then I've reconfigured the main host a bit:
set iptables-legacy as the current option in update-alternatives (this is a
Debian 10 system), app
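For reference, the switch described above is done like this on Debian 10
(assuming the standard alternative names):

# update-alternatives --set iptables /usr/sbin/iptables-legacy
# update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy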
Hi,
for some days now I have been trying to debug a problem with snaptrimming
under Nautilus.
I have a cluster on Nautilus (v14.2.10), 44 nodes with 24 OSDs each, 14 TB
per OSD.
I create a snapshot every day and keep it for 7 days.
Every time an old snapshot is deleted I get bad IO performance and blocked
requests for several sec
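Two knobs commonly used to soften snaptrim impact (hedged; suitable values
depend on the hardware, and injected values only last until the OSDs
restart) are the trim sleep and the trim priority:

# ceph tell 'osd.*' injectargs '--osd_snap_trim_sleep 2'
# ceph tell 'osd.*' injectargs '--osd_snap_trim_priority 1'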
Hi Cem,
Since https://github.com/ceph/ceph/pull/35576 you can tell cephadm
to keep your `/etc/ceph/ceph.conf` updated on all hosts by running:
# ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true
But this feature has not been released yet, so you will have to wait for v15.2.5.
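Once on v15.2.5 or later, the flag can be verified with:

# ceph config get mgr mgr/cephadm/manage_etc_ceph_ceph_conf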
Frank,
I suggest starting with perf counter analysis, as per the second part of my
previous email...
Thanks,
Igor
On 7/27/2020 2:30 PM, Frank Schilder wrote:
Hi Igor,
thanks for your answer. I was thinking about that, but as far as I understood,
to hit this bug actually requires a partial r
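For those following along, the counters in question can be dumped per daemon
via the admin socket (osd.0 here is just a placeholder):

# ceph daemon osd.0 perf dump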
Hi David, which ceph version are you using?
From: David Thuong
Sent: Wednesday, July 22, 2020 10:45 AM
To: ceph-users@ceph.io
Subject: [ceph-users] please help me fix iSCSI Targets not available
iSCSI Targets not available
Please consult the documentation on how
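A quick way to answer that across a whole cluster, by the way (assuming the
cluster is reachable):

# ceph versions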
Hi Frank,
you might be being hit by https://tracker.ceph.com/issues/44213
In short, the root causes are significant space overhead due to the high
bluestore allocation unit (64K) and the EC overwrite design.
This is fixed for the upcoming Pacific release by using a 4K alloc unit, but
it is unlikely to be b
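The alloc unit an OSD was created with can be inspected like this (a sketch;
note it is baked in at mkfs time, so changing the config only affects OSDs
deployed afterwards; osd.0 is a placeholder):

# ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
# ceph daemon osd.0 config get bluestore_min_alloc_size_ssd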
Hello all,
is there a way to interrogate a cache tier pool about the number of dirty
objects/bytes that it contains?
Thank you,
Laszlo
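If I remember correctly, ceph df detail prints a DIRTY column for cache
pools, which should be exactly this number:

# ceph df detail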
Hi all,
I have a question about the garbage collector within RGW. We run Nautilus
14.2.8 and we have 32 garbage objects in the gc pool, with a total of 39 GB
of garbage that needs to be processed.
When we run
radosgw-admin gc process --include-all
objects are processed, but most of them won't
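To see how much is still queued, the gc queue can be listed (hedged;
--include-all also includes entries whose deferral time has not expired yet):

# radosgw-admin gc list --include-all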