Hi Igor,
Thanks for your input. I tried to gather as much information as I could to
answer your questions. Hopefully we can get to the bottom of this.
> 0) What is backing disks layout for OSDs in question (main device type?,
> additional DB/WAL devices?).
Everything is on a single Intel NVMe P
Hi
we found a very ugly issue in rados df
I have several clusters, all running Ceph Nautilus (14.2.11). They contain
replicated pools with replica size 4.
On the older clusters, "rados df" shows the net used space in the USED
column. On our new cluster, rados df shows the gross
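(For context, a rough rule of thumb assuming per-pool accounting is in
effect: gross ≈ net × replica size, so 1 TiB of client data in a 4x
replicated pool shows up as roughly 4 TiB used.)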
Does anyone know of any new statements from the ceph community or foundation
regarding EAR?
I read the legal page of ceph.com and found some information there.
https://ceph.com/legal-page/terms-of-service/
But I am still not sure if my clients and I are within the scope of the entity
list, whethe
Good day!
Wed, Aug 26, 2020 at 10:08:57AM -0300, quaglio wrote:
>Hi,
> I could not see in the doc if Ceph has infiniband support. Is there
>someone using it?
> Also, is there any rdma support working natively?
>
> Can anyone point me to where I can find more info
Hi Wido/Joost
pg_num is 64. It is not that we use 'rados ls' for operations; we just
noticed that on this cluster it takes about 15 seconds to return on the
pools .rgw.root or rc3-se.rgw.buckets.index, while our other clusters
return almost instantaneously.
Is there a way that I can determ
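For what it's worth, one way to time the listings on the pools mentioned
above (a rough check, not a proper benchmark) is:
time rados -p .rgw.root ls > /dev/null
time rados -p rc3-se.rgw.buckets.index ls > /dev/null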
Sorry, that should have been Wido/Stefan.
Another question: how do I use this ceph-kvstore-tool to compact the
RocksDB? (I can't find a lot of examples.)
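From what I have seen, an offline compaction run looks roughly like this
(osd 174 from below is only an example, and the OSD must be stopped first):
systemctl stop ceph-osd@174
ceph-kvstore-tool bluestore-kv /var/lib/ceph/osd/ceph-174 compact
systemctl start ceph-osd@174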
The WAL and DB are on a separate NVMe. The directory structure for an OSD
looks like:
root@se-rc3-st8vfr2t2:/var/lib/ceph/osd# ls -l ceph-174
total 2
Hi Denis
please see my comments inline.
Thanks,
Igor
On 8/27/2020 10:06 AM, Denis Krienbühl wrote:
Hi Igor,
Thanks for your input. I tried to gather as much information as I could to
answer your questions. Hopefully we can get to the bottom of this.
0) What is backing disks layout for OSD
Hi Igor
Just to clarify:
>> I grepped the logs for "checksum mismatch" and "_verify_csum". The only
>> occurrences I could find were the ones that precede the crashes.
>
> Are you able to find multiple _verify_csum precisely?
There are no “_verify_csum” entries whatsoever. I wrote that wrongly
Can someone shed some light on this? It makes the difference between
running multiple instances of one task and running multiple different
tasks.
-Original Message-
To: ceph-users
Subject: [ceph-users] radosgw still needs dedicated clientid?
I think I can remember reading somewhere t
Our Ceph cluster was updated from Nautilus to Octopus. On the ceph-osd
nodes we have high I/O wait.
After increasing one pool's pg_num from 64 to 128 in response to the
warning message (more objects per pg), we saw high CPU load and RAM usage
on the ceph-osd nodes, and finally the whole cluster crashed. Thre
vahideh.alino...@gmail.com
On 27/08/2020 14:23, Marc Roos wrote:
Can someone shed some light on this? It makes the difference between
running multiple instances of one task and running multiple different
tasks.
As far as I know this is still required, because the clients talk to each
other using RADOS notifies and thus
Hi Manuel,
this behavior was primarily updated in Nautilus by
https://github.com/ceph/ceph/pull/19454
The per-pool stats under the "POOLS" section are now the most precise
means of answering various questions about space utilization.
The "STORED" column provides the net amount of data for a specific pool.
Yo
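As an illustration (assuming a Nautilus or later cluster with per-pool
stats enabled), the per-pool STORED and USED columns can be compared with:
ceph df detail
where STORED is the net client data and USED includes the replication
overhead.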
Hello,
Same issue with another cluster.
Here is the coredump tag 41659448-bc1b-4f8a-b563-d1599e84c0ab
Thanks,
Carl
Hi list (and cephfs devs :-)),
On 2020-04-29 17:43, Jake Grimmett wrote:
> ...the "mdsmap_decode" errors stopped suddenly on all our clients...
>
> Not exactly sure what the problem was, but restarting our standby mds
> daemons seems to have been the fix.
>
> Here's the log on the standby mds exa
What is the memory_target for your OSDs? Can you share more details
about your setup? You write about high memory usage; are the OSD nodes
affected by the OOM killer? You could try to reduce the osd_memory_target
and see if that helps bring the OSDs back up. Splitting the PGs is a
very heavy operatio
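For example, lowering the target to 2 GiB (the value here is only an
illustration) would look like:
ceph config set osd osd_memory_target 2147483648
or, for a single OSD (osd.12 is a placeholder):
ceph config set osd.12 osd_memory_target 2147483648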
My 3-node Ceph cluster (14.2.4) has been running fine for months. However,
my data pool became close to full a couple of weeks ago, so I added 12 new
OSDs, roughly doubling the capacity of the cluster. However, the pool size
has not changed, and the health of the cluster has changed for the worse.
Hi,
are the new OSDs in the same root and is it the same device class? Can
you share the output of 'ceph osd df tree'?
Quoting Dallas Jones :
My 3-node Ceph cluster (14.2.4) has been running fine for months. However,
my data pool became close to full a couple of weeks ago, so I added 12
Doubling the capacity in one shot was a big topology change, hence the 53%
misplaced.
OSD fullness will naturally reflect a bell curve; there will be a tail of
under-full and over-full OSDs. If you’d not said that your cluster was very
full before expansion I would have predicted it from the f
The new drives are larger capacity than the first drives I added to the
cluster, but they're all SAS HDDs.
cephuser@ceph01:~$ ceph osd df tree
ID  CLASS  WEIGHT     REWEIGHT  SIZE     RAW USE  DATA  OMAP  META  AVAIL
%USE  VAR  PGS  STATUS  TYPE NAME
-1         122.79410         -  123 TiB   42 TiB  4
Is your MUA wrapping lines, or is the list software?
As predicted. Look at the VAR column and the STDDEV of 37.27
> On Aug 27, 2020, at 9:02 AM, Dallas Jones wrote:
>
> -1  122.79410  -  123 TiB  42 TiB  41 TiB  217 GiB  466 GiB  81 TiB
> 33.86  1.00  -  root default
> -3
Hi everyone,
In 30 minutes join us for this month's Ceph Tech Talk: Secure Token Service
in RGW:
https://ceph.io/ceph-tech-talks/
On Thu, Aug 13, 2020 at 1:11 PM Mike Perez wrote:
> Hi everyone,
>
> Join us August 27th at 17:00 UTC to hear Pritha Srivastava present on this
> month's Ceph Tech T
>
> Looking for a bit of guidance / approach to upgrading from Nautilus to
> Octopus considering CentOS and Ceph-Ansible.
>
> We're presently running a Nautilus cluster (all nodes / daemons 14.2.11 as
> of this post).
> - There are 4 monitor-hosts with mon, mgr, and dashboard functions
> consol
How does the WAL utilize the disk when it shares the same device with the
DB? Say the device size is 50G, 100G, or 200G; it makes no difference to
the DB, because the DB will take 30G anyway. Does it make any difference
to the WAL?
Thanks!
Tony
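One way to see how a shared device is actually split between DB and WAL
(osd.0 below is just a placeholder) is to check the BlueFS counters:
ceph daemon osd.0 perf dump bluefs
and compare the db_used_bytes and wal_used_bytes fields.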
> -Original Message-
> From: Zhenshi Zhou
> Sent: Wednesday, August 26, 2020 11:1
Dallas;
It looks to me like you will need to wait until data movement naturally
resolves the near-full issue.
So long as you continue to have this:
io:
recovery: 477 KiB/s, 330 keys/s, 29 objects/s
the cluster is working.
That said, there are some things you can do.
1) The near-full rati
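For instance, the ratios can be nudged temporarily with something like the
following (the values are only examples; the defaults are 0.85 nearfull,
0.90 backfillfull, 0.95 full):
ceph osd set-nearfull-ratio 0.88
ceph osd set-backfillfull-ratio 0.92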
On Thu, 27 Aug 2020 at 13:21, Anthony D'Atri
wrote:
>
>
> >
> > Looking for a bit of guidance / approach to upgrading from Nautilus to
> > Octopus considering CentOS and Ceph-Ansible.
> >
> > We're presently running a Nautilus cluster (all nodes / daemons 14.2.11
> as
> > of this post).
> > - The
Am I the only one who thinks it is not necessary to dump these keys with
every command (ls and get)? Either remove these keys from "auth ls" and
"auth get", or remove the commands "auth print_key", "auth print-key",
and "auth get-key".
I am getting this, although on an OSD node I am able to mount the path:
adding ceph secret key to kernel failed: Operation not permitted
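In case it helps, the kernel mount I would expect to work looks roughly
like this (the monitor address, client name, and paths are placeholders):
mount -t ceph 10.0.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
Since this error comes from adding the key to the kernel keyring, the
client keyring and its caps on that host are the first thing I would check.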
This is what I mean; this guy is just posting all his keys.
https://www.mail-archive.com/ceph-devel@vger.kernel.org/msg26140.html
-Original Message-
To: ceph-users
Subject: [ceph-users] ceph auth ls
Am I the only one who thinks it is not necessary to dump these keys
with every command
Hi,
I'd like to deploy Ceph in a closed environment (no connectivity to the
public internet). I will build a repository and a registry to hold the
required packages and container images. How do I specify the private
registry when running "cephadm bootstrap"? The same question applies to
adding OSDs.
Thanks!
Tony
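One way to do this (the registry host, image tag, and monitor IP below are
placeholders) is to point cephadm at the private registry explicitly:
cephadm --image registry.example.local:5000/ceph/ceph:v15.2.4 bootstrap --mon-ip 10.0.0.1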
Please disregard this question, I figured it out.
Tony
> -Original Message-
> From: Tony Liu
> Sent: Thursday, August 27, 2020 1:55 PM
> To: ceph-users@ceph.io
> Subject: [ceph-users] [cephadm] Deploy Ceph in a closed environment
>
> Hi,
>
> I'd like to deploy Ceph in a closed environment
Hi All,
We encountered an issue while upgrading our Ceph cluster from Luminous
12.2.12 to Nautilus 14.2.11. We used
https://docs.ceph.com/docs/master/releases/nautilus/#upgrading-from-mimic-or-luminous
and ceph-ansible to upgrade the cluster. We use HDDs for data and NVMe for
WAL and DB.
*Clust
Hello,
octopus 15.2.4
just as a test, I put each of my OSDs inside an LXD container, set up
CephFS, mounted it inside an LXD container, and it works.
I'm running the following:
[root@node1 ~]# ceph --version
ceph version 15.2.4 (7447c15c6ff58d7fce91843b705a268a1917325c) octopus (stable)
on Fedora 32, installed from the built-in repos. I'm running into a simple
issue that's rather frustrating. Here is a set of commands I'm running and
the output:
[
>
>
> partitions after checking disk partitions and whoami information. After
> manually mounting osd.108, it is now throwing a permission error which I'm
> still reviewing (bdev(0xd1be000 /var/lib/ceph/osd/ceph-108/block) open open
> got: (13) Permission denied). Enclosed is the log of the OSD fo
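If this is the usual ownership issue after a manual mount, a sketch of the
fix (the OSD id is taken from the quoted log; adjust as needed) would be:
chown -h ceph:ceph /var/lib/ceph/osd/ceph-108/block
chown ceph:ceph $(readlink -f /var/lib/ceph/osd/ceph-108/block)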