Looks as if your cluster is still running 15.2.1.
Have a look at https://docs.ceph.com/docs/master/cephadm/upgrade/
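(For reference, with cephadm the upgrade described there is kicked off and
monitored with commands along these lines; the target version is only an
example:)
  ceph orch upgrade start --ceph-version 15.2.4
  ceph orch upgrade status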
On 28.07.20 at 09:57, Ml Ml wrote:
> Hello,
>
> i get:
>
> [WRN] CEPHADM_HOST_CHECK_FAILED: 6 hosts fail cephadm check
> host ceph01 failed check: Failed to connect to ceph01
No, they are stored locally on ESXi data storage on top of hardware RAID5
built with SAS/SATA (different hardware on hosts).
Also, I've tried going back to the snapshot taken just after all monitors
and OSDs were added to the cluster. The host boots fine and is working as it
should, however, after the
Hi,
did you try following the documentation on that one?
https://docs.ceph.com/docs/master/rados/troubleshooting/troubleshooting-pg/#pgs-inconsistent
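(For reference, the procedure there usually boils down to locating the
inconsistent PG and asking Ceph to repair it, roughly like this; the PG id
below is a placeholder:)
  ceph health detail
  rados list-inconsistent-obj <pg-id> --format=json-pretty
  ceph pg repair <pg-id>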
Best regards
Philipp
On 7/24/20 12:47 PM, Fabrizio Cuseo wrote:
> Hello, I use ceph with proxmox, release 14.2.9, with bluestore OSD.
>
> I had
I use Ceph Octopus v15.
Hi All,
radosgw is configured via ceph-deploy, and I created a few buckets from the
Ceph dashboard, but when accessing through Java AWS S3 code to create a new
bucket I am facing the below issue:
Exception in thread "main" com.amazonaws.SdkClientException: Unable to
execute HTTP request: firstbuc
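(An SdkClientException that names "<bucket>.<host>" usually means the
virtual-hosted-style bucket name does not resolve in DNS. A quick check,
assuming the RGW endpoint listens on port 7480; the names below are
placeholders:)
  dig <bucket-name>.<rgw-host>
  curl -v http://<rgw-host>:7480
If wildcard DNS for the RGW hostname isn't set up, enabling path-style
access in the S3 client is the usual workaround.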
Hi,
we observe crashes in librbd1 on specific workloads in virtual machines
on Ubuntu 20.04 hosts with librbd1=15.2.4-1focal.
The changes in
https://github.com/ceph/ceph/commit/50694f790245ca90a3b8a644da7b128a7a148cc6
could be related, but do not easily apply against v15.2.4.
We have collected s
On 2020-07-27, at 21:31:33, "Robin H. Johnson" wrote:
> On Mon, Jul 27, 2020 at 08:02:23PM +0200, Mariusz Gronczewski wrote:
> > Hi,
> >
> > I've got a problem on Octopus (15.2.3, debian packages) install,
> > bucket S3 index shows a file:
> >
> > s3cmd ls s3://upvid/255/38355 --
Hello,
I get:
[WRN] CEPHADM_HOST_CHECK_FAILED: 6 hosts fail cephadm check
host ceph01 failed check: Failed to connect to ceph01 (ceph01).
Check that the host is reachable and accepts connections using the
cephadm SSH key
you may want to run:
> ssh -F =(ceph cephadm get-ssh-config) -i =(ceph c
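(The suggested check can also be run step by step; a rough sketch, assuming
the cephadm SSH identity key sits in its default config-key location and
root is the SSH user:)
  ceph cephadm get-ssh-config > /tmp/cephadm-ssh-config
  ceph config-key get mgr/cephadm/ssh_identity_key > /tmp/cephadm-ssh-key
  chmod 600 /tmp/cephadm-ssh-key
  ssh -F /tmp/cephadm-ssh-config -i /tmp/cephadm-ssh-key root@ceph01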
Thanks, Ricardo, for the clarification.
Regards.
On Mon, Jul 27, 2020 at 2:50 PM Ricardo Marques wrote:
> Hi Cem,
>
> Since https://github.com/ceph/ceph/pull/35576 you will be able to tell
> cephadm to keep your `/etc/ceph/ceph.conf` updated on all hosts by running:
>
> # ceph config set mgr mgr/cephad
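(The quoted command is cut off above; if I recall the option added by that
PR correctly, the full form is along the lines of:)
  ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true
(option name from memory; `ceph config ls | grep etc_ceph` should confirm
it on a running cluster)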
Hi
I often get "Server error - An error occurred while processing your request."
when trying to view this list in Firefox, is it a known issue?
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/message/new
On Tue, Jul 28, 2020 at 7:19 AM Johannes Naab
wrote:
>
> Hi,
>
> we observe crashes in librbd1 on specific workloads in virtual machines
> on Ubuntu 20.04 hosts with librbd1=15.2.4-1focal.
>
> The changes in
> https://github.com/ceph/ceph/commit/50694f790245ca90a3b8a644da7b128a7a148cc6
> could be
Hi,
My Harbor registry uses Ceph object storage to store its images, but
I couldn't pull/push images from Harbor a few moments ago. Ceph was
in warning health status at the same time.
The cluster had a warning message saying that osd.24 has slow ops.
I checked the ceph-osd.24.log, and it showed as b
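(For slow-ops warnings like this, the stuck requests can be inspected via
the admin socket on the node hosting osd.24; a sketch, assuming local access
to that host:)
  ceph daemon osd.24 dump_ops_in_flight
  ceph daemon osd.24 dump_historic_ops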
On 2020-07-28 14:49, Jason Dillaman wrote:
>> VM in libvirt with the following disk I/O throttle values (the XML
>> elements were stripped by the list archive): 209715200, 209715200, 5000,
>> 5000, 314572800, 314572800, 7500, 7500, 60
On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
wrote:
>
> On 2020-07-28 14:49, Jason Dillaman wrote:
> >> VM in libvirt with the same disk I/O throttle values as above (XML
> >> elements stripped by the list archive): 209715200, 209715200, 5000, 5000,
We have the same mgr memory leak problem. I suspect it's related to the PID,
which is used to identify the peer address.
Maybe you could try setting 'PidMode' to 'host' in your deployment.
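(A minimal sketch of that suggestion, assuming the mgr is started as a plain
Docker container; image name, volumes and entrypoint are placeholders:)
  docker run -d --pid=host --net=host \
      -v /etc/ceph:/etc/ceph -v /var/lib/ceph:/var/lib/ceph \
      <your-ceph-mgr-image> <your-mgr-entrypoint>
In Kubernetes/Rook the equivalent knob would be hostPID on the pod spec.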
> On Jul 28, 2020, at 2:44 AM, Frank Ritchie wrote:
>
> Hi all,
>
> When running containerized Ceph (Nautilus) is anyone el
Hi all,
I'm currently experiencing some strange behavior in our cluster: the dashboard's
object gateway "buckets" submenu is broken and I'm getting 503 errors (however,
"Users" and "Daemons" work flawlessly). Looking into the mgr log gives me the
following error:
2020-07-24T12:38:12.695+0200 7f42150f
On 2020-07-28 15:52, Jason Dillaman wrote:
> On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> wrote:
>>
>> On 2020-07-28 14:49, Jason Dillaman wrote:
VM in libvirt with disk I/O throttle values (XML elements stripped by the
list archive): 209715200, 209715200
On Tue, Jul 28, 2020 at 11:19 AM Johannes Naab
wrote:
>
> On 2020-07-28 15:52, Jason Dillaman wrote:
> > On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> > wrote:
> >>
> >> On 2020-07-28 14:49, Jason Dillaman wrote:
> VM in libvirt with: (disk I/O throttle XML stripped by the list archive)
On Tue, Jul 28, 2020 at 11:39 AM Jason Dillaman wrote:
>
> On Tue, Jul 28, 2020 at 11:19 AM Johannes Naab
> wrote:
> >
> > On 2020-07-28 15:52, Jason Dillaman wrote:
> > > On Tue, Jul 28, 2020 at 9:44 AM Johannes Naab
> > > wrote:
> > >>
> > >> On 2020-07-28 14:49, Jason Dillaman wrote:
> >
And here's the recording:
https://youtu.be/m-ogTC8J7Y4
On 7/17/20 5:55 AM, Kevin Hrpcek wrote:
Hey all,
We will be having a Ceph science/research/big cluster call on
Wednesday July 22nd. If anyone wants to discuss something specific
they can add it to the pad linked below. If you have questi
Hi,
As we expand our cluster (adding nodes), we'd like to take advantage of
better EC profiles enabled by higher server/rack counts. I understand, as
Ceph currently exists (15.2.4), there is no way to live-migrate from one EC
profile to another on an existing pool, for example, from 4+2 to 17+3 wh
Hello,
I have a problem that old versions of S3 objects are not being deleted. Can
anyone advise as to why? I'm using Ceph 14.2.9.
I expect old versions of S3 objects to be deleted after 3 days as per my
lifecycle config on the bucket:
{
"Rules": [
{
"Status": "Enabled"
On Tue, 28 Jul 2020 at 18:50, David Orman wrote:
> Hi,
>
> As we expand our cluster (adding nodes), we'd like to take advantage of
> better EC profiles enabled by higher server/rack counts. I understand, as
> Ceph currently exists (15.2.4), there is no way to live-migrate from one EC
> profile to
I'm having a hard time understanding the EC usable space vs. raw.
https://ceph.io/geen-categorie/ceph-erasure-coding-overhead-in-a-nutshell/
indicates "nOSD * k / (k+m) * OSD Size" is how you calculate usable space,
but that's not lining up with what i'd expect just from k data chunks + m
parity c
It would be 4/(4+2) = 4/6 = 2/3, or k/(k+m)?
-Original Message-
From: David Orman
Sent: Tuesday, July 28, 2020 9:32 PM
To: ceph-users
Subject: [ceph-users] Usable space vs. Overhead
I'm having a hard time understanding the EC usable space vs. raw.
https://urld
I'm going to resurrect this thread to throw my hat in the ring as I am having
this issue as well.
I just moved to 15.2.4 on Ubuntu 18.04/bionic, and Zabbix is 5.0.2.
> $ ceph zabbix config-show
> Error EINVAL: Traceback (most recent call last):
> File "/usr/share/ceph/mgr/mgr_module.py", line 1
Jason,
Using partitions won't get you into trouble, but depending on the
version of Ceph you are using, you may want to leverage LVM instead of
partitions. For our Filestore cluster, we had two partitions on NVMe
to get more performance and it worked fine. I'm using LVM to carve out
NVMe drives fo
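(For what it's worth, a minimal sketch of carving an NVMe device into LVs
for DB use; device, VG/LV names and sizes are placeholders:)
  pvcreate /dev/nvme0n1
  vgcreate ceph-db /dev/nvme0n1
  lvcreate -L 60G -n db-osd0 ceph-db
  ceph-volume lvm create --data /dev/sda --block.db ceph-db/db-osd0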
That's what the formula on the ceph link arrives at, a 2/3 or 66.66%
overhead. But if a 4-byte object is split into 4 x 1-byte data chunks (4
bytes total) + 2 x 1-byte parity chunks (2 bytes total), you arrive at 6
bytes, which is 50% more than 4 bytes. So 50% overhead, vs. 33.33% overhead
as the othe
A k=3, m=3 scheme would be 3:3 = 50% , you get to use 4 bytes out of 6 bytes =
4:6 = 2:3 = 66.6%?
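(The two numbers describe the same layout from different directions; a quick
worked example for k=4, m=2:)
  usable fraction = k / (k + m) = 4 / 6 ~ 66.7% of raw capacity
  raw per logical = (k + m) / k = 6 / 4 = 1.5x the data, i.e. 50% extra
  parity share    = m / (k + m) = 2 / 6 ~ 33.3% of raw capacity
So the "overhead" is 50% when measured against the data and 33.3% when
measured against the raw capacity; both agree with usable space =
nOSD * k / (k+m) * OSD size.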
From: David Orman
Sent: Wednesday, July 29, 2020 2:17 AM
To: Alan Johnson (System)
Cc: ceph-users
Subject: Re: [ceph-users] Usable space vs. Overhead
That's what the form
Hi Robert!
Thanks for answering my question. I take it you're working a lot with Ceph
these days! On my pre-octopus clusters I did use LVM backed by partitions, but
I always kind of wondered if it was a good practice or not as it added an
additional layer and obscures the underlying disk topolo
On Tue, Jul 28, 2020 at 01:28:14PM +, Alex Hussein-Kershaw wrote:
> Hello,
>
> I have a problem that old versions of S3 objects are not being deleted. Can
> anyone advise as to why? I'm using Ceph 14.2.9.
How many objects are in the bucket? If it's a lot, then you may run into
RGW's lifecycle
I'm facing the same issue. My cluster will have an expansion and I want to
modify the EC profile too. What I can think of is to create a new profile
and a new pool, and then migrate the data from the old pool to the new one.
Finally, rename the pools so I can use the new pool just like nothing
happene
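(A rough sketch of that approach for a plain RADOS pool, assuming a short
write freeze during the copy is acceptable; profile, pool and failure-domain
names are placeholders, and note that rados cppool has known limitations,
e.g. it does not copy snapshots:)
  ceph osd erasure-code-profile set ec-17-3 k=17 m=3 crush-failure-domain=host
  ceph osd pool create mypool.new 1024 1024 erasure ec-17-3
  rados cppool mypool mypool.new
  ceph osd pool rename mypool mypool.old
  ceph osd pool rename mypool.new mypool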
Hi,
I deployed Rook v0.8.3 with Ceph 12.2.7. This is a production system that has
been deployed for a long time.
For an unknown reason, the mons couldn't form quorum anymore, and I tried to
restore the mons from the OSDs by following the document below,
https://github.com/ceph/ceph/blob/v12.2.7/doc/rados/troubleshooting/troublesho
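(The core of that document is rebuilding the mon store from the OSDs; a
condensed sketch, assuming the OSD data paths are accessible on the host and
that $ID and all paths are placeholders:)
  # run against every OSD to accumulate its copy of the cluster maps
  ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-$ID \
      --op update-mon-db --mon-store-path /tmp/mon-store
  # then rebuild the store and install it on a monitor
  ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /path/to/admin.keyring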