Hi Wido,
thanks for the explanation. I think the root cause is that the disks are too
slow for compaction.
I added two new mons with SSDs to the cluster to speed it up, and the issue is
resolved.
That's good advice, and I plan to migrate my mons to bigger SSD disks.
Thanks again.
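(A hedged aside, not part of the original mail: if monitor store compaction was
the bottleneck, the store size can be checked on each mon host and a compaction
triggered manually; the mon id below is a placeholder and the path assumes the
default layout.)
###
# size of the monitor's RocksDB store on disk
du -sh /var/lib/ceph/mon/ceph-$(hostname -s)/store.db
# ask a specific monitor to compact its store now
ceph tell mon.<id> compact
###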
Hi,
I have a cluster running 14.2.8.
I created the OSDs with a dedicated PCIe device for WAL/DB when I deployed the cluster.
I set 72G for the DB and 3G for the WAL on each OSD.
Now my cluster has been stuck in a WARN state for a long time.
# ceph health detail
HEALTH_WARN BlueFS spillover detected on 1 OSD(s)
BLUEFS_SPILL
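(Again a hedged aside, not from the original mail: the spillover can be inspected
per OSD through the admin socket; the OSD id is a placeholder, and a manual
compaction only helps if the DB data still fits on the fast device.)
###
# how much RocksDB data sits on the DB device vs. the slow device
ceph daemon osd.<id> perf dump bluefs | grep -E 'db_used_bytes|slow_used_bytes'
# trigger an online RocksDB compaction on that OSD
ceph daemon osd.<id> compact
###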
On 2020-11-13 21:19, E Taka wrote:
> Hello,
>
> I want to install Ceph Octopus on Ubuntu 20.04. The nodes have 2
> network interfaces: 192.168.1.0/24 for the cluster network, and
> 10.10.0.0/16 for the public network. When I bootstrap with cephadm, which
> network do I use? That means, do I u
On Fri, 13 Nov 2020 at 21:50, E Taka <0eta...@gmail.com> wrote:
> Hi Stefan, the cluster network has its own switch and is faster than the
> public network.
> Thanks for pointing me to the documentation. I must have overlooked this
> sentence.
>
> But let me ask another question: do the OSDs use th
Hi Stefan, the cluster network has its own switch and is faster than the
public network.
Thanks for pointing me to the documentation. I must have overlooked this
sentence.
But let me ask another question: do the OSDs use the cluster network
"magically"? I did not find this in the docs, but that may
Hello,
I want to install Ceph Octopus on Ubuntu 20.04. The nodes have 2
network interfaces: 192.168.1.0/24 for the cluster network, and
10.10.0.0/16 for the public network. When I bootstrap with cephadm, which
network do I use? That means, do I use cephadm bootstrap --mon-ip
192.168.1.1 or do
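(A hedged sketch of what the documentation referenced earlier in this thread
suggests, with placeholder addresses: bootstrap with a mon IP on the public
network, then declare the cluster network separately.)
###
# the mon IP should live on the public network (10.10.0.0/16 in this setup)
cephadm bootstrap --mon-ip 10.10.0.1
# afterwards, point OSD replication traffic at the separate cluster network
ceph config set global cluster_network 192.168.1.0/24
###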
We have 12TB HDD OSDs with 32GiB of (Optane) NVMe for block.db, used
for cephfs_data pools, and NVMe-only OSDs used for cephfs_data pools.
The NVMe DB roughly doubled our random IO performance - a great
investment - while also doubling max CPU load as a result. We had to turn up "osd
op num threads per shard hd
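(Not part of the original mail, just an illustration of the tuning the truncated
sentence presumably refers to, osd_op_num_threads_per_shard_hdd; the value is
only an example, not a recommendation.)
###
# raise the worker threads per shard for HDD-backed OSDs (example value)
ceph config set osd osd_op_num_threads_per_shard_hdd 2
###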
Thank you Frank for the clarification!
Tony
> -----Original Message-----
> From: Frank Schilder
> Sent: Friday, November 13, 2020 12:37 AM
> To: Tony Liu ; Nathan Fish
> Cc: ceph-users@ceph.io
> Subject: Re: [ceph-users] Re: which of cpu frequency and number of
> threads servers osd better?
>
>
Thank you for the answers to those questions, Janek.
And in case anyone hasn’t seen it, we do have a tracker for this issue:
https://tracker.ceph.com/issues/47866
We may want to move most of the conversation to the comments there, so
everything’s together.
I do want to follow up on you
1. It seems like those reporting this issue are seeing it strictly
after upgrading to Octopus. From what version did each of these sites
upgrade to Octopus? From Nautilus? Mimic? Luminous?
I upgraded from the latest Luminous release.
2. Does anyone have any lifecycle rules on a bucket exp
I have some questions for those who’ve experienced this issue.
1. It seems like those reporting this issue are seeing it strictly after
upgrading to Octopus. From what version did each of these sites upgrade to
Octopus? From Nautilus? Mimic? Luminous?
2. Does anyone have any lifecycle rules on
Hi,
I was not able to find any complete guide on how to build Ceph (14.2.x) from
source, create packages, and build containers based on those packages.
Ubuntu or CentOS, it does not matter.
I tried so far:
###
docker pull centos:7
docker run -ti centos:7 /bin/bash
yum install -y git rp
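# (Not part of the original mail: a rough, hedged sketch of the in-tree entry
#  points; install-deps.sh and make-dist do exist in the Ceph source tree, but
#  the branch name and the rpmbuild step are approximate and may need adjusting
#  for 14.2.x.)
git clone -b nautilus https://github.com/ceph/ceph.git
cd ceph
./install-deps.sh      # installs the build dependencies for the detected distro
./make-dist            # produces a versioned release tarball plus the spec file
# roughly: copy the tarball into ~/rpmbuild/SOURCES and run rpmbuild -ba ceph.spec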
Hi!
To bill our customers we regularly call radosgw-admin bucket stats --uid .
Since upgrading from Mimic to Octopus (with a short stop at Nautilus), we’ve
been seeing much slower response times for this command.
It went from less than a minute for our largest customers, to 5 minutes (with
som
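(For reference, a hedged illustration of timing the call in question; the uid is
a placeholder.)
###
time radosgw-admin bucket stats --uid=<customer-uid> > /dev/null
###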
> If each OSD requires 4T
Nobody said that. What was said is HDD=1T, SSD=3T. It depends on the drive
type!
The %-utilisation information is just from top, observed during heavy load. It
does not show how the kernel schedules things on physical threads. So, 2x50%
utilisation could run on the same HT
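(A hedged illustration, not from the original mail, of looking at per-thread
rather than per-process utilisation; the OSD id is a placeholder.)
###
# find the PID of one ceph-osd daemon, then watch its individual threads
OSD_PID=$(pgrep -f 'ceph-osd.*--id <osd-id>' | head -n1)
top -H -p "$OSD_PID"
###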
Hi Brent,
Thanks for your input.
We will use Swift instead of S3. The deletes are mainly done by our
customers using the sync app (i.e. they are syncing their folders with
the storage accounts, and every file change is translated to a delete in
the cloud). We have a frontend cluster between th
I think this depends on the type of backing disk. We use the following CPUs:
Intel(R) Xeon(R) CPU E5-2660 v4 @ 2.00GHz
Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz
Intel(R) Xeon(R) Silver 4216 CPU @ 2.10GHz
My experience is that an HDD OSD hardly gets to 100% of 1 hyper-thread load
even under heavy
Hi Jeff,
I understand the idea behind patch [1], but it breaks the operation of overlayfs
with cephfs. Should the patch be abandoned and the tests modified, or should
the overlayfs code be adapted to work with cephfs, if that's possible?
Either way, it'd be nice if overlayfs could work again with cep
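(For context, not from the mail: the kind of stacking being discussed, with
placeholder paths - a kernel cephfs mount used as the read-only lower layer of
an overlay, with upper/work directories on a local filesystem.)
###
# kernel cephfs mount (monitor address and credentials are placeholders)
mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# overlay using the cephfs mount as lowerdir
mount -t overlay overlay -o lowerdir=/mnt/cephfs/base,upperdir=/data/upper,workdir=/data/work /mnt/merged
###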
On Wed, 11 Nov 2020 at 21:42, Adrian Nicolae <adrian.nico...@rcs-rds.ro> wrote:
> Hey guys,
> - 6 OSD servers with 36 SATA 16TB drives each and 3 big NVME per server
> (1 big NVME for every 12 drives so I can reserve 300GB NVME storage for
> every SATA drive), 3 MON, 2 RGW with Epyc 7402p and 128