Hi,
On 10/05/2016 02:18 PM, Yan, Zheng wrote:
On Wed, Oct 5, 2016 at 5:06 PM, Burkhard Linke
wrote:
Hi,
I've managed to move the data from the old pool to the new one using some
shell scripts and cp/rsync. Recursive getfattr on the mount point does not
reveal any file with a layout ref
Hi,
just an additional comment:
you can disable backfilling and recovery temporarily by setting the
'nobackfill' and 'norecover' flags. This will reduce the backfill
traffic and may help the cluster and its OSDs to recover. Afterwards you
should set the backfill traffic settings to the minimum
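A minimal sketch of the flag handling (assuming a recent ceph CLI):

# stop backfill and recovery temporarily
ceph osd set nobackfill
ceph osd set norecover
# ... let the cluster settle, fix or restart the affected OSDs ...
# re-enable both once the OSDs are stable again
ceph osd unset nobackfill
ceph osd unset norecover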
Hi,
On 11/15/2016 01:27 PM, Webert de Souza Lima wrote:
Not that I know of. On 5 other clusters it works just fine and
configuration is the same for all.
On this cluster I was using only radosgw; cephfs was not in use,
but it had already been created following our procedures.
This happene
not be satisfied with two hosts. You either need to put the
metadata pool on the HDD, too, or use a pool size of 2 (which is not
recommended).
Regards,
Burkhard
. However, there are 10x stray directories and removed items are
spread between them, so you should be able to handle deleting a
directory 10x the limit on the size of a stray dir.
Just out of curiosity:
Is it possible to increase the number of stray directories?
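Not sure about raising the number itself, but the current stray count can at
least be watched via the MDS admin socket; a small sketch (the MDS name is a
placeholder):

# number of entries currently sitting in the stray directories
ceph daemon mds.mds1 perf dump | grep num_strays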
Regards,
Burkhard
Hi,
On 11/17/2016 08:07 AM, Steffen Weißgerber wrote:
Hello,
just for understanding:
When starting to fill OSDs with data due to setting the weight from 0 to the
normal value
the ceph status displays degraded objects (>0.05%).
I don't understand the reason for this because there's no stora
Hi,
*snipsnap*
# ceph osd tier add cinder-volumes cache
pool 'cache' is now (or already was) a tier of 'cinder-volumes'
# ceph osd tier cache-mode cache writeback
set cache-mode for pool 'cache' to writeback
# ceph osd tier set-overlay cinder-volumes cache
overlay for 'cinder-volumes' is now
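A quick way to double-check the resulting wiring afterwards (just a sketch):

# shows tier_of/read_tier/write_tier and the cache mode for both pools
ceph osd pool ls detail | grep -E 'cache|cinder-volumes'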
Hi,
On 12/16/2016 09:22 AM, sandeep.cool...@gmail.com wrote:
Hi,
I was trying the scenario where I have partitioned my drive (/dev/sdb)
into 4 (sdb1, sdb2, sdb3, sdb4) using the sgdisk utility:
# sgdisk -z /dev/sdb
# sgdisk -n 1:0:+1024 /dev/sdb -c 1:"ceph journal"
# sgdisk -n 1:0:+1024 /d
Hi,
On 07/30/2018 04:09 PM, Tobias Florek wrote:
Hi!
I want to set up the dashboard behind a reverse proxy. How do people
determine which ceph-mgr is active? Is there any simple and elegant
solution?
You can use haproxy. It supports periodic checks for the availability of
the configured backends.
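A rough haproxy sketch (ports, certificate path and host names are
assumptions; standby mgrs usually answer with a redirect or an error, so only
the active one passes a strict 200 check):

frontend dashboard_in
    bind *:8443 ssl crt /etc/haproxy/dashboard.pem
    default_backend ceph_dashboard

backend ceph_dashboard
    # periodic health check; only the active mgr returns 200 on /
    option httpchk GET /
    http-check expect status 200
    server mgr1 mgr1.example.com:8443 check check-ssl verify none
    server mgr2 mgr2.example.com:8443 check check-ssl verify none
    server mgr3 mgr3.example.com:8443 check check-ssl verify none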
Hi,
I'm currently upgrading our ceph cluster to 12.2.7. Most steps are fine,
but all mgr instances abort after restarting:
-10> 2018-08-01 09:57:46.357696 7fc481221700 5 --
192.168.6.134:6856/5968 >> 192.168.6.131:6814/2743 conn(0x564cf2bf9000
:6856 s=STATE_OPEN_MESSAGE_READ_FOOT
Hi,
On 08/01/2018 11:14 AM, Dan van der Ster wrote:
Sounds like https://tracker.ceph.com/issues/24982
Thx, I've added the information to the bug report.
Regards,
Burkhard
Hi,
you are using the kernel implementation of CephFS. In this case some
information can be retrieved from the /sys/kernel/debug/ceph/
directory. Especially the mdsc, monc and osdc files are important, since
they contain pending operations on mds, mon and osds.
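For example (requires root and a mounted debugfs; each mount gets a directory
named <fsid>.client<id>):

# one directory per cephfs client instance
ls /sys/kernel/debug/ceph/
# pending requests towards the MDS, MONs and OSDs for that client
cat /sys/kernel/debug/ceph/*/mdsc
cat /sys/kernel/debug/ceph/*/monc
cat /sys/kernel/debug/ceph/*/osdc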
We have a similar problem in
Hi,
On 08/09/2018 03:21 PM, Yan, Zheng wrote:
try 'umount -f'; recent kernels should handle 'umount -f' pretty well
On Wed, Aug 8, 2018 at 10:46 PM Zhenshi Zhou wrote:
Hi,
Is there any other way except rebooting the server when the client hangs?
If the server is in a production environment, I can'
Hi,
just some thoughts and comments:
Hardware:
The default ceph setup uses 3 replicas on three different hosts, so
you need at least three hosts for a ceph cluster. Other configurations
with a smaller number of hosts are possible, but not recommended.
Depending on the workload and acces
Hi,
On 08/10/2018 03:10 PM, Matthew Pounsett wrote:
*snipsnap*
advisable to put these databases on SSDs. You can share one SSD for several
OSDs (e.g. by creating partitions), but keep in mind that the failure of
one of these SSDs also renders the content of all OSDs using it useless. Do not use consumer
grade
Hi,
On 08/13/2018 03:22 PM, Zhenshi Zhou wrote:
Hi,
Finally, I got a running server with files /sys/kernel/debug/ceph/xxx/
[root@docker27 525c4413-7a08-40ca-9a98-0a6df009025b.client213522]# cat mdsc
[root@docker27 525c4413-7a08-40ca-9a98-0a6df009025b.client213522]# cat monc
have monmap 2 want
Hi,
AFAIK SD cards (and SATA DOMs) do not have any kind of wear-leveling
support. Even if the crappy write endurance of these storage systems
would be enough to operate a server for several years on average, you
will always have some hot spots with higher than usual write activity.
This is t
Hi,
On 09/10/2018 02:40 PM, marc-antoine desrochers wrote:
Hi,
I am currently running a ceph cluster with CephFS on 3 nodes, each of which
has 6 OSDs except one that has 5. I have 3 MDS (2 active and 1 standby) and 3
MONs.
[root@ceph-n1 ~]# ceph -s
cluster:
id: 1d97aa70-20
Hi,
On 09/27/2018 11:15 AM, Marc Roos wrote:
I have a test cluster, and on an OSD node I put a VM. The VM is using a
macvtap on the client network interface of the OSD node, making access
to local OSDs impossible.
The VM of course reports that it cannot access the local OSDs. What I
am getting
Hi,
On 28.09.2018 18:04, Vladimir Brik wrote:
Hello
I've attempted to increase the number of placement groups of the pools
in our test cluster and now ceph status (below) is reporting problems. I
am not sure what is going on or how to fix this. Troubleshooting
scenarios in the docs don't seem
Hi,
we also experience hanging clients after MDS restarts; in our case we
only use a single active MDS server, and the clients are actively
blacklisted by the MDS server after restart. It usually happens if the
clients are not responsive during MDS restart (e.g. being very busy).
You can ch
Hi,
a user just stumbled across a problem with directory content in cephfs
(kernel client, ceph 12.2.8, one active, one standby-replay instance):
root@host1:~# ls /ceph/sge-tmp/db/work/06/ | wc -l
224
root@host1:~# uname -a
Linux host1 4.13.0-32-generic #35~16.04.1-Ubuntu SMP Thu Jan 25 10:1
Hi,
On 05.10.2018 15:33, Sergey Malinin wrote:
Are you sure these mounts (work/*06* and work/*6c*) refer to the same
directory?
On 5.10.2018, at 13:57, Burkhard Linke wrote:
root@host2:~# ls /ceph/sge-tmp/db/work/06
peering.
You can try to raise this limit. There are several threads on the
mailing list about this.
Regards,
Burkhard
Hi,
upon failover or restart, our MDS complains that something is wrong with
one of the stray directories:
2018-10-19 12:56:06.442151 7fc908e2d700 -1 log_channel(cluster) log
[ERR] : bad/negative dir size on 0x607 f(v133 m2018-10-19
12:51:12.016360 -4=-5+1)
2018-10-19 12:56:06.442182 7fc908
Hi,
On 05/17/2018 01:09 PM, Kevin Olbrich wrote:
Hi!
Today I added some new OSDs (nearly doubled) to my luminous cluster.
I then changed pg(p)_num from 256 to 1024 for that pool because it was
complaining about too few PGs. (I noticed that this should better have been
done in small steps.)
This is the cu
Hi,
I may be wrong, but AFAIK the cluster network is only used to bind the
corresponding functionality to the correct network interface. There's no
check for a common CIDR range or something similar in Ceph.
As long as the traffic is routable from the current public network and
the new cl
Hi,
On 06/07/2018 02:52 PM, Фролов Григорий wrote:
Hello. Could you please help me troubleshoot the issue.
I have 3 nodes in a cluster.
*snipsnap*
root@testk8s2:~# ceph -s
cluster 0bcc00ec-731a-4734-8d76-599f70f06209
health HEALTH_ERR
80 pgs degraded
80
Hi,
On 06/21/2018 05:14 AM, dave.c...@dell.com wrote:
Hi all,
I have set up a ceph cluster in my lab recently. The configuration should be okay
per my understanding: 4 OSDs across 3 nodes, 3 replicas, but a couple of PGs are stuck in state
"active+undersized+degraded". I think this should be very g
as or
erasure code shards are separated across hosts and a single host failure will not affect
availability."
Best Regards,
Dave Chen
-Original Message-
From: Chen2, Dave
Sent: Friday, June 22, 2018 1:59 PM
To: 'Burkhard Linke'; ceph-users@lists.ceph.com
Cc: Chen2, Dave
Subject:
Hi,
On 06/20/2018 07:20 PM, David Turner wrote:
We originally used pacemaker to move a VIP between our RGWs, but ultimately
decided to go with an LB in front of them. With an LB you can utilize both
RGWs while they're up, but the LB will shy away from either if they're down
until the check sta
Hi,
On 01/11/2017 11:02 AM, Boris Mattijssen wrote:
Hi all,
I'm trying to use path restriction on CephFS, running a Ceph Jewel
(ceph version 10.2.5) cluster.
For this I'm using the command specified in the official docs
(http://docs.ceph.com/docs/jewel/cephfs/client-auth/):
ceph auth get-or
Hi,
On 01/11/2017 12:39 PM, Boris Mattijssen wrote:
Hi Burkhard,
Thanks for your answer. I've tried two things now:
* ceph auth get-or-create client.boris mon 'allow r' mds 'allow r
path=/, allow rw path=/boris' osd 'allow rw pool=cephfs_data'. This is
according to your suggestion. I am howe
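For comparison, the form given in the Jewel client-auth docs keeps the read
grant unrestricted and only restricts the rw grant; a sketch with placeholder
pool/path names:

ceph auth get-or-create client.boris \
    mon 'allow r' \
    mds 'allow r, allow rw path=/boris' \
    osd 'allow rw pool=cephfs_data'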
Hi,
just for clarity:
Did you parse the slow request messages and use the effective OSD in the
statistics? Some messages may refer to other OSDs, e.g. "waiting for sub
op on OSD X,Y". The reporting OSD is not the root cause in that case,
but one of the mentioned OSDs (and I'm currently not a
Hi,
we are running two MDS servers in an active/standby-replay setup. Recently
we had to disconnect the active MDS server, and failover to the standby works as
expected.
The filesystem currently contains over 5 million files, so reading all
the metadata information from the data pool took too long, s
Hi,
On 01/26/2017 03:34 PM, John Spray wrote:
On Thu, Jan 26, 2017 at 8:18 AM, Burkhard Linke
wrote:
HI,
we are running two MDS servers in active/standby-replay setup. Recently we
had to disconnect active MDS server, and failover to standby works as
expected.
The filesystem currently
Hi,
On 03/07/2017 05:53 PM, Francois Blondel wrote:
Hi all,
We have (only) 2 separate "rooms" (crush bucket) and would like to
build a cluster being able to handle the complete loss of one room.
*snipsnap*
Second idea would be to use Erasure Coding, as it fits our performance
require
s key and secret key reported by Openstack to authenticate.
The credentials are bound to an OpenStack project, so different
credentials can (and have to) be used for accessing buckets owned by
different projects.
Regards,
Burkhard Linke
Hi,
On 06/02/2017 04:15 PM, Oleg Obleukhov wrote:
Hello,
I am playing around with ceph (ceph version 10.2.7
(50e863e0f4bc8f4b9e31156de690d765af245185)) on Debian Jessie and I
build a test setup:
$ ceph osd tree
ID WEIGHT TYPE NAME UP/DOWN REWEIGHT PRIMARY-AFFINITY
-1 0.014
ng
several million files.
Failover between the MDS works, but might run into problems with a large
number of open files (each requiring a stat operation). Depending on the
number of open files, failover takes anywhere from a few seconds up to 5-10 minutes in
our setup.
Hi,
I have a setup with two MDS in active/standby configuration. During
times of high network load / network congestion, the active MDS is
bounced between both instances:
1. mons(?) decide that MDS A is crashed/not available due to missing
heartbeats
2015-12-15 16:38:08.471608 7f880df10700
Hi,
On 12/15/2015 10:22 PM, Gregory Farnum wrote:
On Tue, Dec 15, 2015 at 10:21 AM, Burkhard Linke
wrote:
Hi,
I have a setup with two MDS in active/standby configuration. During times of
high network load / network congestion, the active MDS is bounced between
both instances:
1. mons
distribute based on OSDs instead of hosts.
Regards,
Burkhard
Hi,
On 01/08/2016 08:07 AM, Christian Balzer wrote:
Hello,
just in case I'm missing something obvious, there is no reason a pool
called aptly "ssd" can't be used simultaneously as a regular RBD pool and
for cache tiering, right?
AFAIK the cache configuration is stored in the pool entry itself (
Hi,
I want to start another round of SSD discussion since we are about to
buy some new servers for our ceph cluster. We plan to use hosts with 12x
4TB drives and two SSD journals drives. I'm fancying Intel P3700 PCI-e
drives, but Sebastien Han's blog does not contain performance data for
thes
Hi,
On 01/08/2016 03:02 PM, Paweł Sadowski wrote:
Hi,
Quick results for 1/5/10 jobs:
*snipsnap*
Run status group 0 (all jobs):
WRITE: io=21116MB, aggrb=360372KB/s, minb=360372KB/s, maxb=360372KB/s,
mint=6msec, maxt=6msec
*snipsnap*
Run status group 0 (all jobs):
WRITE: io=57
Hi,
On 18.01.2016 10:36, david wrote:
Hello All.
Does anyone provide Ceph rbd/rgw/cephfs through NFS? I have a
requirement for a Ceph cluster which needs to provide an NFS service.
We export a CephFS mount point on one of our NFS servers. Works out of
the box with Ubuntu Trusty, a rec
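A minimal /etc/exports sketch for that kind of setup (paths, network and fsid
are assumptions; the fsid option is needed because a cephfs mount has no block
device UUID the NFS server could derive an identifier from):

# kernel NFS server re-exporting a cephfs kernel mount at /ceph
/ceph/export  192.168.0.0/24(rw,async,no_subtree_check,fsid=101)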
Hi,
there's a rogue file in our CephFS that we are unable to remove. Access
to the file (removal, move, copy, open etc.) results in the MDS starting
to spill out the following message to its log file:
2016-01-25 08:39:09.623398 7f472a0ee700 0 mds.0.cache
open_remote_dentry_finish bad remote
Hi,
On 01/25/2016 01:05 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke
wrote:
Hi,
there's a rogue file in our CephFS that we are unable to remove. Access to
the file (removal, move, copy, open etc.) results in the MDS starting to
spill out the following message t
Hi,
On 01/25/2016 03:27 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 9:43 PM, Burkhard Linke
wrote:
Hi,
On 01/25/2016 01:05 PM, Yan, Zheng wrote:
On Mon, Jan 25, 2016 at 3:43 PM, Burkhard Linke
wrote:
Hi,
there's a rogue file in our CephFS that we are unable to remove. Access
t
        "dirino": 1541,
        "dname": "10002af7f78",
        "version": 52554232
    },
    {
        "dirino": 256,
        "dname": "stray5",
        "version": 79097792
    }
],
"pool
Hi,
On 01/26/2016 10:24 AM, Yan, Zheng wrote:
On Tue, Jan 26, 2016 at 3:16 PM, Burkhard Linke
wrote:
Hi,
On 01/26/2016 07:58 AM, Yan, Zheng wrote:
*snipsnap*
I have a few questions:
Which version of ceph are you using? When was the filesystem created?
Did you manually delete
Hi,
On 02/04/2016 03:17 PM, Kyle Harris wrote:
Hello,
I have been working on a very basic cluster with 3 nodes and a single
OSD per node. I am using Hammer installed on CentOS 7
(ceph-0.94.5-0.el7.x86_64) since it is the LTS version. I kept
running into an issue of not getting past the sta
Hi,
On 02/12/2016 03:47 PM, Christian Balzer wrote:
Hello,
yesterday I upgraded our most busy (in other words lethally overloaded)
production cluster to the latest Firefly in preparation for a Hammer
upgrade and then phasing in of a cache tier.
When restarting the OSDs it took 3 minutes (1 min
Hi,
I would like to provide access to a bunch of large files (bio sequence
databases) to our cloud users. Putting the files in an RBD instance
requires special care if several VMs need to access the files; creating
an individual RBD snapshot for each instance requires more effort in the
cloud
Hi,
*snipsnap*
On 09.04.2016 08:11, Christian Balzer wrote:
3 MDS nodes:
-SuperMicro 1028TP-DTR (one node from scale-out chassis)
--2x E5-2630v4
--128GB RAM
--2x 120GB SSD (RAID 1 for OS)
Not using CephFS, but if the MDS are like all the other Ceph bits
(MONs in particular) they are likely to
Hi,
On 04/26/2016 12:32 PM, SCHAER Frederic wrote:
Hi,
One simple/quick question.
In my ceph cluster, I had a disk which was in predicted failure. It was
so much in predicted failure that the ceph OSD daemon crashed.
After the OSD crashed, ceph moved data correctly (or at least that’s
what
Hi,
On 03.11.18 10:31, jes...@krogh.cc wrote:
I suspect that mds asked client to trim its cache. Please run
following commands on an idle client.
In the meantime we migrated to the RH Ceph version and delivered the MDS
both SSDs and more memory, and the problem went away.
It still puzzles my
Hi,
On 11/19/18 12:49 PM, Thomas Klute wrote:
Hi,
we have a production cluster (3 nodes) stuck unclean after we had to
replace one osd.
Cluster recovered fine except some pgs that are stuck unclean for about
2-3 days now:
*snipsnap*
[root@ceph1 ~]# fgrep remapp /tmp/pgdump.txt
3.83 542
ithms involve per-request parts of the header, so each checksum has to be computed
individually. The check then requires access to the cleartext EC2
password. And the keystone API used in the rados gateway does not expose
this password.
Just my 2ct,
Burkhard
Hi,
On 12/19/18 8:55 PM, Marcus Müller wrote:
Hi all,
we’re running a ceph hammer cluster with 3 mons and 24 osds (3 same
nodes) and need to migrate all servers to a new datacenter and change
the IPs of the nodes.
I found this tutorial:
http://docs.ceph.com/docs/hammer/rados/operations/add
Hi,
just some comments:
CephFS has an overhead for accessing files (capabilities round trip to
MDS for first access, cap cache management, limited number of concurrent
caps depending on MDS cache size...), so using the cephfs filesystem as
storage for a filestore OSD will add some extra ove
Hi,
On 1/17/19 7:27 PM, Void Star Nill wrote:
Hi,
We are trying to use Ceph in our products to address some of our use
cases. We think the Ceph block device is a good fit for us. One of the use cases is that
we have a number of jobs running in containers that need to have
Read-Only access to shared data. The d
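If the shared data is truly read-only, a sketch of what each consumer could do
(image/pool names are placeholders; this assumes nothing writes to the image
while it is mapped):

# map the image read-only and mount the contained filesystem read-only
rbd map --read-only rbd/shared-data
mount -o ro /dev/rbd0 /mnt/shared-data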
Hi,
On 1/18/19 3:11 PM, jes...@krogh.cc wrote:
Hi.
We have the intention of using CephFS for some of our shares, which we'd
like to spool to tape as part of the normal backup schedule. CephFS works nicely
for large files, but for "small" ones (< 0.1 MB) there seems to be an
"overhead" of 20-40 ms per file.
Hi,
I'm curious: what is the advantage of OSPF in your setup over e.g.
LACP bonding both links?
Regards,
Burkhard
Hi,
just a comment:
the RBD pool also contains management objects, e.g. the rbd_directory and
rbd_info objects. AFAIK these objects store the name->id mapping
for images.
This means in your case, looking up the name backup/gbs requires read
access to these objects in the backup pool
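You can see those objects with rados; a short sketch (pool name taken from
your example):

# the management objects live next to the image data/header objects
rados -p backup ls | grep -E '^rbd_(directory|info|children)'
# the name -> id mapping is kept in the omap of rbd_directory
rados -p backup listomapvals rbd_directory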
Hi,
On 1/31/19 6:11 PM, shubjero wrote:
Has anyone automated the ability to generate S3 keys for OpenStack
users in Ceph? Right now we take in a user's request manually (Hey, we
need an S3 API key for our OpenStack project 'X', can you help?). We
as cloud/ceph admins just use radosgw-admin to cr
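The manual step presumably boils down to something like this sketch (uid and
names are placeholders):

# create an additional S3 key pair for an existing radosgw user
radosgw-admin key create --uid=openstack-project-x --key-type=s3 \
    --gen-access-key --gen-secret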
Hi,
On 2/1/19 11:40 AM, Stuart Longland wrote:
Hi all,
I'm just in the process of migrating my 3-node Ceph cluster from
BTRFS-backed Filestore over to Bluestore.
Last weekend I did this with my first node, and while the migration went
fine, I noted that the OSD did not survive a reboot test: a
Hi,
we have a compuverde cluster, and AFAIK it uses multicast for node
discovery, not for data distribution.
If you need more information, feel free to contact me either by email or
via IRC (-> Be-El).
Regards,
Burkhard
Hi,
you can move the data off to another pool, but you need to keep your
_first_ data pool, since part of the filesystem metadata is stored in
that pool. You cannot remove the first pool.
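A rough sketch of how the move usually looks (filesystem name, pool name and
mount point are placeholders):

# make the new pool known to the filesystem
ceph fs add_data_pool cephfs cephfs_data_new
# new files below this directory will be placed in the new pool
setfattr -n ceph.dir.layout.pool -v cephfs_data_new /mnt/cephfs/some/dir
# existing files keep their old layout and have to be rewritten/copied to move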
Regards,
Burkhard
generated on osd activation from information stored in
LVM metadata. You do not need extra external storage for the
information any more.
Regards,
Burkhard Linke
nd in /sys/kernel/debug/ceph/<client id>. The latter contains the current versions of the MON map, MDS
map and OSD map. These are the settings used by the client to contact
the corresponding daemon (assuming kernel cephfs client, ceph-fuse is
different).
Regards,
Burkhard
Hi,
we are about to setup a new Ceph cluster for our Openstack cloud. Ceph
is used for images, volumes and object storage. I'm unsure how to handle
these cases and how to move the data correctly.
Object storage:
I consider this the easiest case, since RGW itself provides the
necessary means t
Hi,
On 12.04.19 17:23, lin zhou wrote:
Hi, cephers
we have a ceph cluster with openstack.
Maybe long ago we set debug_rbd in ceph.conf and then booted VMs,
but this debug setting no longer exists in the config.
Now we find that ceph-client.libvirt.log is 200 GB,
but I cannot use ceph --admin-daemon
Hi,
On 4/29/19 11:19 AM, Rainer Krienke wrote:
I am planning to set up a ceph cluster and already implemented a test
cluster where we are going to use RBD images for data storage (9 hosts,
each host has 16 OSDs, each OSD 4TB).
We would like to use erasure coded (EC) pools here, and so all OSD a
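For reference, a sketch of the usual EC-for-RBD setup (profile, pool names and
sizes are placeholders): data goes to the EC pool via --data-pool, while the
image headers/omap stay in a small replicated pool.

ceph osd pool create rbd_data 1024 1024 erasure my_ec_profile
ceph osd pool set rbd_data allow_ec_overwrites true
ceph osd pool create rbd_metadata 64 64 replicated
rbd pool init rbd_metadata
rbd create --size 100G --pool rbd_metadata --data-pool rbd_data testimage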
'}, body=(0 bytes)
And compare the request URL to the S3 API spec:
https://docs.aws.amazon.com/AmazonS3/latest/API/v2-RESTBucketGET.html
'delimiter=/' is just a convenience parameter for grouping the results.
The implementation still has to enumerate all objects.
Hi,
I've upgraded our ceph cluster from luminous to nautilus yesterday.
There was a little hiccup after activating msgr2, but everything else
went well without any problem.
But the upgrade is not reflected by the output of 'ceph features' (yet?):
# ceph --version
ceph version 14.2.1 (d555a
Hi,
with the upgrade to nautilus I was finally able to adjust the PG number
for our pools. This process is still running. One pool is going to grow
from 256 to 1024 PGs since its content has grown significantly over the
last month. As a result of the current imbalance, the OSDs' used
capacit
Hi Paul,
On 5/9/19 3:27 PM, Paul Emmerich wrote:
Use ceph versions instead
Thanks, ceph versions gives the right output.
Regards,
Burkhard
Hi,
On 5/21/19 9:46 PM, Robert LeBlanc wrote:
I'm at a new job working with Ceph again and am excited to be back in the
community!
I can't find any documentation to support this, so please help me
understand if I got this right.
I've got a Jewel cluster with CephFS and we have an inconsistent
Hi,
On 5/24/19 9:48 AM, Kevin Flöh wrote:
We got the object ids of the missing objects with 'ceph pg 1.24c list_missing':
{
    "offset": {
        "oid": "",
        "key": "",
        "snapid": 0,
        "hash": 0,
        "max": 0,
        "pool": -9223372036854775808,
        "namespace
Hi,
On 5/22/19 5:53 PM, Robert LeBlanc wrote:
On Wed, May 22, 2019 at 12:22 AM Burkhard Linke wrote:
Hi,
On 5/21/19 9:46 PM, Robert LeBlanc wrote:
> I'm at a new job working with Ceph again and am excit
Hi,
On 5/29/19 5:23 AM, Frank Yu wrote:
Hi Jake,
I have the same question about the size of DB/WAL for OSDs. My situation: 12
OSDs per OSD node, 8 TB (maybe 12 TB later) per OSD, Intel NVMe SSD
(Optane P4800X) 375 GB per OSD node, which means DB/WAL can use about
30 GB per OSD (8 TB). I mainly use CephFS
Hi,
On 5/29/19 8:25 AM, Konstantin Shalygin wrote:
We have a similar setup, but 24 disks and 2x P4800X. And the 375GB NVME
drives are _not_ large enough:
*snipsnap*
Your block.db is 29 GB, should be 30 GB to prevent spillover to the slow
backend.
Well, it's the usual gigabyte vs. gibibyte f
Hi,
see my post in the recent 'CephFS object mapping.' thread. It describes
the necessary commands to look up a file based on its rados object name.
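The short version, as a sketch: cephfs object names have the form
<inode in hex>.<block number>, so the inode can be fed to find on a mounted
filesystem (object name and mount point are examples):

# object 10002af7f78.00000000 -> inode 0x10002af7f78
find /ceph -inum $(printf '%d' 0x10002af7f78)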
Regards,
Burkhard
and mount them; the 'rbd' command has the subcommands
'export' and 'import'. You can pipe them to avoid writing data to a
local disk. This should be the fastest way to transfer the RBDs.
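For example (a sketch, assuming separate config/keyring files for the source
and destination clusters):

rbd -c /etc/ceph/src.conf export pool/image - \
    | rbd -c /etc/ceph/dst.conf import - pool/image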
Regards,
Burkhard
Hi,
On 7/18/19 8:57 AM, Ravi Patel wrote:
We’ve been debugging this a while. The data pool was originally EC
backed with the bucket indexes on HDD pools. Moving the metadata to
SSD backed pools improved usability and consistency and the change
from EC to replicated improved the rados layer i
Hi,
one particular interesting point in setups with a large number of active
files/caps is the failover.
If your MDS fails (assuming single MDS, multiple MDS with multiple
active ranks behave in the same way for _each_ rank), the monitors will
detect the failure and update the mds map. Cep
Hi,
please keep in mind that due to the rocksdb level concept, only certain
db partition sizes are useful. Larger partitions are a waste of
capacity, since rocksdb will only use whole level sizes.
There has been a lot of discussion about this on the mailing list in the
last months. A plain
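As a rough illustration of where the often-quoted level-size figures come from
(assuming the rocksdb defaults used by bluestore: 256 MB base level size and a
level multiplier of 10):

# per-level targets and the cumulative size a DB partition can actually use
base=256 sum=0
for lvl in 1 2 3 4; do
    size=$(( base * 10 ** (lvl - 1) ))   # MB
    sum=$(( sum + size ))
    echo "L$lvl: ${size} MB, cumulative: ${sum} MB"
done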
Hi,
On 8/18/19 12:06 AM, EDH - Manuel Rios Fernandez wrote:
Hi ,
What's the reason for not allowing the balancer to move PGs if objects are
inactive/misplaced, at least in nautilus 14.2.2?
https://github.com/ceph/ceph/blob/master/src/pybind/mgr/balancer/module.py#L874
*snipsnap*
We can understand that
Hi,
On 9/12/19 5:16 AM, Kyriazis, George wrote:
Ok, after all is settled, I tried changing pg_num again on my pool and
it still didn’t work:
# ceph osd pool get rbd1 pg_num
pg_num: 100
# ceph osd pool set rbd1 pg_num 128
# ceph osd pool get rbd1 pg_num
pg_num: 100
# ceph osd require-osd-releas
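If I remember that behaviour correctly, pg_num changes are deferred until the
cluster is flagged as fully nautilus; a sketch of the usual check:

# should report 'nautilus' after the upgrade is complete
ceph osd dump | grep require_osd_release
# if it still shows the previous release:
ceph osd require-osd-release nautilus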
Hi,
On 9/30/19 2:46 PM, Lars Täuber wrote:
Hi!
What happens when the cluster network goes down completely?
Is the cluster silently using the public network without interruption, or does
the admin have to act?
The cluster network is used for OSD heartbeats and backfilling/recovery
traffic. If