On Mon, 10 Feb 2020, Gregory Farnum wrote:
On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote:
Hi,
running 14.2.6, debian buster (backports).
Have set up a cephfs with 3 data pools and one metadata pool:
myfs_data, myfs_data_hdd, myfs_data_ssd, and myfs_metadata.
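For reference, extra data pools are normally attached to the filesystem and then selected per directory via file layouts. A minimal sketch, assuming the fs is named "myfs", is mounted at /mnt/myfs, and the directory name is made up for illustration:
ceph fs add_data_pool myfs myfs_data_hdd
ceph fs add_data_pool myfs myfs_data_ssd
setfattr -n ceph.dir.layout.pool -v myfs_data_ssd /mnt/myfs/fast   # new files under this dir go to the SSD pool
getfattr -n ceph.dir.layout /mnt/myfs/fast                         # check that the layout took effect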
> On 6 Feb 2020, at 11:23, Stefan Kooman wrote:
>
>> Hi!
>>
>> I've confirmed that the write IO to the metadata pool is coming from active
>> MDSes.
>>
>> I'm experiencing very poor write performance on clients and I would like to
>> see if there's anything I can do to optimise the perform
Hi,
We would like to replace the current Seagate ST4000NM0034 HDDs in our
Ceph cluster with SSDs, and before doing that, we would like to check out
the typical usage of our current drives over the last few years, so we can
select the best (price/performance/endurance) SSD to replace them with.
I
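One rough way to gauge historical drive workload is the drives' own SMART counters, cross-checked against current iostat rates; a sketch (device names are examples, and the exact counters exposed depend on the firmware):
smartctl -x /dev/sdX   # SAS drives usually report lifetime gigabytes read/written in the error counter log
smartctl -a /dev/sdX   # SATA drives often expose vendor attributes such as Total_LBAs_Written
iostat -x 60           # current per-device read/write rates, to sanity-check the lifetime averages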
Dear All,
Following a clunky* cluster restart, we had
23 "objects unfound"
14 pg recovery_unfound
We could see no way to recover the unfound objects, so we decided to mark
the objects in one pg as lost...
[root@ceph1 bad_oid]# ceph pg 5.f2f mark_unfound_lost delete
pg has 2 objects unfound and app
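For reference, the unfound objects can usually be inspected before resorting to mark_unfound_lost; a sketch, using the pg id from above:
ceph health detail           # lists the PGs that have unfound objects
ceph pg 5.f2f list_missing   # names of the missing/unfound objects and where they were last seen
ceph pg 5.f2f query          # peering state, e.g. which OSDs were probed for the objects
mark_unfound_lost also accepts "revert" (roll back to a previous version where one exists) as an alternative to "delete".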
We have been using ceph-deploy in our existing cluster, running as a non-root
user with sudo permissions. I've been working on getting an Octopus cluster
working using cephadm. During bootstrap I ran into an
"execnet.gateway_bootstrap.HostNotFound" issue. It turns out that the problem
was caused
There is a 'packaged' mode that does this, but it's a bit different:
- you have to install the cephadm package on each host
- the package sets up a cephadm user and sudoers.d file
- mgr/cephadm will ssh in as that user and sudo as needed
The net effect is that you have to make sure the cephadm package is installed on each host first.
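A rough sketch of what that looks like per host (the package manager, file contents and paths are assumptions and may differ by distro):
apt install cephadm                         # or yum/zypper, depending on the distro
cat /etc/sudoers.d/cephadm                  # something along the lines of:
                                            #   cephadm ALL=(root) NOPASSWD: /usr/sbin/cephadm
ceph cephadm get-pub-key > ceph.pub         # on the admin node: export the cluster ssh key
ssh-copy-id -f -i ceph.pub cephadm@<host>   # install it for the cephadm user on each host
ceph cephadm set-user cephadm               # tell mgr/cephadm to ssh in as that user and sudo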
On Mon, Feb 10, 2020 at 12:29 AM Håkan T Johansson wrote:
>
>
> On Mon, 10 Feb 2020, Gregory Farnum wrote:
>
> > On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote:
> >
> > Hi,
> >
> > running 14.2.6, debian buster (backports).
> >
> > Have set up a cephfs with 3 data p
Hello MJ,
Perhaps your PGs are unbalanced?
ceph osd df tree
Greetz
Mehmet
On 10 February 2020 14:58:25 CET, lists wrote:
>Hi,
>
>We would like to replace the current seagate ST4000NM0034 HDDs in our
>ceph cluster with SSDs, and before doing that, we would like to
>checkout
>the typical u
Hello List,
First of all: yes - I made mistakes. Now I am trying to recover :-/
I had a healthy 3-node cluster which I wanted to convert to a single one.
My goal was to reinstall a fresh 3-node cluster and start with 2 nodes.
I was able to turn it from a 3-node cluster into a healthy 2-node cluster
Thanks for the quick reply! I am using the cephadm package. I just wasn't aware
of the user that was created as part of the package install. My
/etc/sudoers.d/cephadm seems to be incorrect. It gives root permission to
/usr/bin/cephadm, but cephadm is installed in /usr/sbin. That is easily fixed
Try from the admin node:
ceph osd df
ceph osd status
Thanks, Joe
>>> 2/10/2020 10:44 AM >>>
Hello MJ,
Perhaps your PGs are unbalanced?
ceph osd df tree
Greetz
Mehmet
On 10 February 2020 14:58:25 CET, lists wrote:
>Hi,
>
>We would like to replace the current seagate ST4000NM0034 HDDs in our
Hi all,
I am new here. I am a little bit confused by the discussion about the
amount of RAM for the metadata server.
In the SUSE Deployment Guide for SUSE Enterprise Storage 6 (release
2020-01-27), in the chapter "2.2 Minimum Cluster Configuration", there
is a sentence:
"... Metadata Ser
Has anyone attempted to use gdbpmp since 14.2.6 to grab data? I have not been
able to do it successfully on my clusters. It just hangs while attaching to
the process.
If you have been able to, would you be available for a discussion regarding
your configuration?
Thanks,
Joe Bardgett
Sto
I was also confused by this topic and had intended to post a question
this week. The documentation I recall reading said something about 'if
you want to use erasure coding on a CephFS, you should use a small
replicated data pool as the first pool, and your erasure coded pool as
the second.' I
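That matches the usual pattern: the first (default) data pool stays replicated because it holds backtrace metadata for every file, and the EC pool is added as a second data pool and selected via a directory layout. A rough sketch (pool/fs names, PG counts and mount point are placeholders):
ceph osd pool create cephfs_data_ec 64 64 erasure
ceph osd pool set cephfs_data_ec allow_ec_overwrites true   # required before CephFS can write to it
ceph fs add_data_pool cephfs cephfs_data_ec
setfattr -n ceph.dir.layout.pool -v cephfs_data_ec /mnt/cephfs/archive   # new files under this dir land in the EC pool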
Hello ,
CephFS operations are slow in our cluster. I see a low number of operations and
low throughput in the pools, and low usage of all other resources as well. I think
it is MDS operations that are causing the issue. I increased mds_cache_memory_limit
to 3 GB from 1 GB but am not seeing any improvements in the us
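A few things worth capturing while it is slow, as a sketch (run the daemon commands on the host of the active MDS, substituting its id):
ceph fs status
ceph daemon mds.<id> dump_ops_in_flight   # long-running requests and where they are stuck
ceph daemon mds.<id> perf dump            # cache hits, journal and request latencies
ceph daemon mds.<id> session ls           # per-client session and capability counts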
Ok, I've been digging around a bit with the code and made progress, but haven't
got it all working yet. Here's what I've done:
# yum install cephadm
# ln -s ../sbin/cephadm /usr/bin/cephadm   # needed to reference the correct path
# cephadm bootstrap --output-config /etc/ceph/ceph.conf --output-key
I missed a line while pasting the previous message:
# ceph orchestrator set backend cephadm
On Mon, 10 Feb 2020, Gregory Farnum wrote:
On Mon, Feb 10, 2020 at 12:29 AM Håkan T Johansson wrote:
On Mon, 10 Feb 2020, Gregory Farnum wrote:
On Sun, Feb 9, 2020 at 3:24 PM Håkan T Johansson wrote:
Hi,
running 14.2.6, debian buster (backports).
Have set up a cephfs