Hi Ceph users!
We are currently configuring our new production Ceph cluster and I have
some questions regarding Ubuntu and NVMe SSDs.
Basic setup:
- Ubuntu 18.04 with HWE Kernel 5.3
- Deployment via ceph-ansible (Ceph stable "Nautilus")
- 5x Nodes with AMD EPYC 7402P CPUs
- 25Gbit/s NICs and s
I suspect Ceph is configured in their case to send all logs off-node to a
central syslog server, ELK, etc.
With Jewel this seemed to result in daemons crashing, but probably it’s since
been fixed (I haven’t tried).
> that is much less than the allocated disk space I experienced in case
> somet
Hello Martin,
I suspect you're using a central syslog server.
Can you share information on which central syslog server you use?
Is this central server running on the Ceph cluster, too?
Regards
Thomas
Am 23.03.2020 um 09:39 schrieb Martin Verges:
> Hello Thomas,
>
> by default we allocate 1GB per Host o
Hello List,
I use rbd-mirror and asynchronously mirror to my backup cluster.
My backup cluster only has "spinning rust" and won't always be able to
perform like the live cluster.
That's fine for me, as long as it's not further behind than 12h.
vm-194-disk-1:
global_id: 7a95730f-451c-4973-8
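A quick way to see how far behind a journal-based mirror is lagging is the per-image status on the backup cluster; a sketch only, assuming journal-based mirroring and that the image lives in a pool named "rbd" (both assumptions here):

# Per-image status; for journal-based mirroring the "description" field
# reports entries_behind_master, i.e. how far replay is lagging
rbd mirror image status rbd/vm-194-disk-1

# Pool-wide summary of all mirrored images and their health
rbd mirror pool status --verbose rbd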
I suppose the correct syntax is that anything after "client." is the
name? So:
ceph fs authorize cephfs client.bob / r / rw
Would authorize a client named bob?
Yes, exactly:
admin:~ # ceph fs authorize cephfs client.bob / r / rw
[client.bob]
key = AQAyw3leAv9tKxAA+wtNEa40yK6svPE/VPl
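To double-check what was actually granted, the generated capabilities can be inspected afterwards; a small sketch using the client name from the example above:

# Print the key plus the mon/mds/osd caps generated for this client
ceph auth get client.bob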
Hello Thomas,
we export the logs using systemd-journald-remote / -upload. Long-term
retention can be done by configuring an external syslog / ELK / .. using our
config file.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
cr
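For a stock Ceph deployment (outside croit), a similar effect can be had by letting the daemons log to syslog and forwarding from there; a sketch using standard ceph.conf options (how the local syslog daemon forwards is a separate, site-specific choice):

[global]
# send daemon log output to the local syslog socket
log to syslog = true
err to syslog = true
# also send the cluster log (the messages "ceph -w" shows) to syslog
clog to syslog = true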
The default rsyslog in CentOS has been able to do remote logging for
many years.
-Original Message-
Cc: ceph-users
Subject: [ceph-users] Re: Questions on Ceph cluster without OS disks
Hello Martin,
I suspect you're using a central syslog server.
Can you share information on which centra
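As noted above, forwarding a node's syslog stream to a central host is a one-liner in rsyslog; a sketch, with "loghost.example.com" as a placeholder:

# /etc/rsyslog.d/forward.conf -- '@@' forwards over TCP, a single '@' would use UDP
*.* @@loghost.example.com:514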
For anybody finding this thread via Google or something, here's a link
to a (so far unresolved) bug report: https://tracker.ceph.com/issues/39264
On 19/03/2020 17:37, Janek Bevendorff wrote:
> Sorry for nagging, but is there a solution to this? Routinely restarting
> my MGRs every few hours isn't
Hi Jarett,
El 23/3/20 a las 3:52, Jarett DeAngelis escribió:
So, I thought I'd post what I learned about how to deal with this problem.
This system is a 3-node Proxmox cluster, and each node had:
1 x 1TB NVMe
2 x 512GB HDD
I had maybe 100GB of data in this system total. Then I added:
2 x 2
anyone?
On Mon, 23 Mar 2020, 23:39 Abhinav Singh,
wrote:
> please someone help me
>
> On Mon, 23 Mar 2020, 19:44 Abhinav Singh,
> wrote:
>
>>
>>
>> -- Forwarded message -
>> From: Abhinav Singh
>> Date: Mon, Mar 23, 2020 at 7:43 PM
>> Subject: RGW failing to create bucket
>> To
On Tue, Mar 24, 2020 at 6:14 AM Abhinav Singh
wrote:
> anyone?
>
> On Mon, 23 Mar 2020, 23:39 Abhinav Singh,
> wrote:
>
> > please someone help me
> >
> > On Mon, 23 Mar 2020, 19:44 Abhinav Singh,
> > wrote:
> >
> >>
> >>
> >> -- Forwarded message -
> >> From: Abhinav Singh
> >
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs
write caching on, or should it be configured off for failover?
You ca
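If the outcome of this thread is that libcephfs write caching should be off for failover, one way to do that is via the client options that Ganesha's Ceph FSAL reads from ceph.conf; this is a sketch only, and the section name "client.ganesha" is just an example:

# ceph.conf on the Ganesha nodes
[client.ganesha]
# disable the libcephfs object cache (buffered reads/writes in the client)
client oc = false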
We're happy to announce the first stable release of Octopus v15.2.0.
There are a lot of changes and new features added, we advise everyone to
read the release notes carefully, and in particular the upgrade notes,
before upgrading. Please refer to the official blog entry
https://ceph.io/releases/v1
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to leave libcephfs
write caching on, or shoul
On 3/24/20 8:19 AM, Maged Mokhtar wrote:
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For multi-node NFS Ganesha over CephFS, is it OK to
On Tue, Mar 24, 2020 at 3:50 AM Ml Ml wrote:
>
> Hello List,
>
> I use rbd-mirror and asynchronously mirror to my backup cluster.
> My backup cluster only has "spinning rust" and won't always be able to
> perform like the live cluster.
>
> That's fine for me, as long as it's not further behind t
On 24/03/2020 15:14, Daniel Gryniewicz wrote:
On 3/24/20 8:19 AM, Maged Mokhtar wrote:
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49 +0200, Maged Mokhtar wrote:
Hello all,
For
Hi.
I'm experiencing some kind of a space leak in Bluestore. I use EC,
compression and snapshots. First I thought that the leak was caused by
"virtual clones" (issue #38184). However, then I got rid of most of the
snapshots, but continued to experience the problem.
I suspected something when
Hi Vitaliy,
You may be coming across the EC space amplification issue,
https://tracker.ceph.com/issues/44213
I am not aware of any recent updates to resolve this issue.
Sincerely,
On Tue, Mar 24, 2020 at 12:53 PM wrote:
> Hi.
>
> I'm experiencing some kind of a space leak in Bluestore. I use
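One way to see whether this is allocation overhead rather than extra stored data is to compare BlueStore's internal counters per OSD; a sketch via the admin socket, with osd.0 as a placeholder:

# bluestore_stored = logical data, bluestore_allocated = space taken on disk;
# a large allocated/stored ratio points at min_alloc_size / EC amplification
ceph daemon osd.0 perf dump | grep -E 'bluestore_(allocated|stored|compressed)'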
On 24/03/2020 16:48, Maged Mokhtar wrote:
On 24/03/2020 15:14, Daniel Gryniewicz wrote:
On 3/24/20 8:19 AM, Maged Mokhtar wrote:
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff Layton wrote:
On Mon, 2020-03-23 at 15:49
Hi Steve,
Thanks, it's an interesting discussion; however, I don't think it's
the same problem, because in my case BlueStore eats additional space
during rebalance. And it doesn't seem that Ceph does small overwrites
during rebalance. As I understand it, it does the opposite: it reads and
wri
FWIW, Igor has been doing some great work on improving performance with
a 4K min_alloc_size. He gave a presentation at a recent weekly
performance meeting on it and it's looking really good. On HDDs I think
he was seeing up to 2X faster 8K-128K random writes at the expense of up
to a 20% se
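For reference, the current settings can be checked per OSD; note that min_alloc_size is fixed when the OSD is created, so changing the option only affects newly deployed OSDs (a sketch, with osd.0 as a placeholder):

# Configured values; the value actually in effect was baked in at OSD creation time
ceph daemon osd.0 config get bluestore_min_alloc_size_hdd
ceph daemon osd.0 config get bluestore_min_alloc_size_ssd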
On 3/24/20 1:16 PM, Maged Mokhtar wrote:
On 24/03/2020 16:48, Maged Mokhtar wrote:
On 24/03/2020 15:14, Daniel Gryniewicz wrote:
On 3/24/20 8:19 AM, Maged Mokhtar wrote:
On 24/03/2020 13:35, Daniel Gryniewicz wrote:
On 3/23/20 4:31 PM, Maged Mokhtar wrote:
On 23/03/2020 20:50, Jeff
I'm trying to install on a fresh CentOS 8 host and get the following
error:
# yum install ceph
..
Error:
Problem: package ceph-2:15.2.0-0.el8.x86_64 requires ceph-osd =
2:15.2.0-0.el8, but none of the providers can be installed
- conflicting requests
- nothing provides libleveldb.s
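To confirm whether any enabled repository could satisfy the missing dependency, dnf can be asked directly; a sketch:

# Which package (and repo), if any, provides the missing library?
dnf provides 'libleveldb.so*'
# List the repositories currently enabled on the host
dnf repolist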
Is it possible to provide instructions for upgrading from CentOS 7 + Ceph
14.2.8 to CentOS 8 + Ceph 15.2.0?
Great work! Thanks to everyone involved!
One minor thing I've noticed so far with the Ubuntu Bionic build is that it's
reporting the release as an RC instead of 'stable':
$ ceph versions | grep octopus
"ceph version 15.2.0 (dc6a0b5c3cbf6a5e1d6d4f20b5ad466d76b96247) octopus
(rc)": 1
B
On Tue, 24 Mar 2020, konstantin.ilya...@mediascope.net wrote:
> Is it possible to provide instructions for upgrading from CentOS 7 +
> Ceph 14.2.8 to CentOS 8 + Ceph 15.2.0?
You have ~2 options:
- First, upgrade Ceph packages to 15.2.0. Note that your dashboard will
break temporarily. Then, upg
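For the Ceph-package half of such an upgrade, the usual rolling pattern looks roughly like this; a generic sketch only, and the official v15.2.0 upgrade notes remain the authoritative sequence:

# Keep data from rebalancing while daemons restart
ceph osd set noout
# ...upgrade the ceph packages node by node, restarting mons, then mgrs, then OSDs...
ceph osd unset noout

# Only after every daemon runs Octopus:
ceph osd require-osd-release octopus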
EPEL has leveldb for el7 but not el8... A Fedora 30 package might work; it
resolves the RPM dependency at the very least. Octopus also requires el8
python3-cherrypy, python3.7dist(six), and python(abi), and these require
specific versions of libstdc++.
This is pretty much a brick wall for Ceph on el8 sin
Hello Sir/Madam,
We are facing a serious problem with our Proxmox + Ceph cluster. I have already
submitted a ticket to Proxmox, but they said the only option is trying to
recover the mon DB. We would like to hear any suggestions for our situation.
So far the only option that I see would be in tryin
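If the monitors are truly unrecoverable, the documented last resort is rebuilding the mon store from the OSDs; a rough sketch of that procedure (paths and OSD ids are placeholders, and taking full backups first is strongly advised):

# On each OSD host, collect cluster map data from every OSD into a temporary store
ceph-objectstore-tool --data-path /var/lib/ceph/osd/ceph-0 \
    --op update-mon-db --mon-store-path /tmp/mon-store

# Then rebuild a monitor store from the collected data
ceph-monstore-tool /tmp/mon-store rebuild -- --keyring /etc/ceph/ceph.client.admin.keyring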