I got a deadlock when mounting CephFS with kernel 4.12 and Ceph 12.2.5.
The Ceph servers had no write operations on the mounted directories,
but the clients still hung until I restarted the servers.
I have not encountered the same issue since then.
Paul Emmerich wrote on Wed, Mar 13, 2019 at 1 PM:
Hi All,
I’ve been investigating compression and, long story short, found that I can
never get better than a 50% compression ratio.
My setup:
Mimic 13.2.2
OSDs: Bluestore, sparse files looped to /dev/loop0, LVM to create logical
volumes. bluestore_compression_mode: passive
Pool: 3-replica, comp
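For reference, a minimal sketch of enabling and checking compression on such a pool; the pool name "comp-pool" and osd.0 are placeholders, and the per-pool properties are the standard Luminous+/Mimic knobs:
# enable compression on the pool (names are placeholders)
ceph osd pool set comp-pool compression_mode aggressive
ceph osd pool set comp-pool compression_algorithm snappy
# require at least 12.5% savings before a compressed blob is kept
ceph osd pool set comp-pool compression_required_ratio 0.875
# on the OSD host: see what bluestore actually compressed
ceph daemon osd.0 perf dump | grep -i compress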
And now, new errors are appearing..
Zhenshi Zhou wrote on Wed, Mar 13, 2019 at 2:58 PM:
> Hi,
>
> I hadn't set osd_beacon_report_interval, so it must be at the default value.
> I have now set osd_beacon_report_interval to 60 and debug_mon to 10.
>
> Attached is the leader monitor log; the "mark-do
We're glad to announce the fifth bug fix release of the Mimic v13.2.x stable
release series. We recommend that all users upgrade.
Notable Changes
---
* This release fixes the pg log hard limit bug that was introduced in
13.2.2, https://tracker.ceph.com/issues/36686. A flag called
`pg
Sorry for not making it clear. You may need to set, on one of your OSDs,
osd_beacon_report_interval = 5
and debug_ms = 1, then restart that OSD process, then check the OSD
log with 'grep beacon /var/log/ceph/ceph-osd.$id.log'
to make sure the OSD sends beacons to the mon. If the OSD does send beacons to the mon, you
should als
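A concrete version of that check might look like this; osd.3 is just an example id, and injectargs can replace the restart if you prefer:
# either inject at runtime on osd.3...
ceph tell osd.3 injectargs '--osd_beacon_report_interval 5 --debug_ms 1'
# ...or put the two options in ceph.conf and restart that OSD:
systemctl restart ceph-osd@3
# then confirm beacons are actually being sent to the mon
grep beacon /var/log/ceph/ceph-osd.3.log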
Hi all,
what are your experiences with different disk sizes in one pool
regarding overall performance?
I hope someone could shed some light on the following scenario:
Let's say I mix an equal amount of 2TB and 8TB disks in one pool,
with a crush map that tries to fill all disks to the same perc
I may be wrong, but Ceph won't split it into percentages based on disk size.
A 4MB block is written to each PG that Ceph decides to use for that I/O
and its replication. Yes, the 8TB ones will be used more, as the size / CRUSH
algorithm puts them higher in the "chance" list, but every write will be
the same s
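To put rough numbers on that: with default CRUSH weights (roughly the size in TiB), a mixed 2TB/8TB pair splits about like this:
2TB disk: CRUSH weight ~1.82  ->  ~1.82 / (1.82 + 7.28) = ~20% of the PGs in the pair
8TB disk: CRUSH weight ~7.28  ->  ~7.28 / (1.82 + 7.28) = ~80% of the PGs in the pair
So each 8TB disk takes roughly four times the writes of a 2TB disk while offering about the same per-spindle throughput, which is where the performance question comes from.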
tim taler wrote on Wed, Mar 13, 2019 at 11:05 PM:
>
> Hi all,
> what are your experiences with different disk sizes in one pool
> regarding overall performance?
> I hope someone could shed some light on the following scenario:
>
> Let's say I mix an equal amount of 2TB and 8TB disks in one pool,
> with a crus
Hi Kai,
On 12/3/19 at 9:13, Kai Wembacher wrote:
Hi everyone,
I have an Intel D3-S4610 SSD with 1.92 TB here for testing and get
some pretty bad numbers when running the fio benchmark suggested by
Sébastien Han
(http://www.sebastien-han.fr/blog/2014/10/10/ceph-how-to-test-if-your-ssd
I have a cluster where for some OSDs the weight-set is defined, while for
other OSDs it is not [*].
The OSDs with the weight-set defined are Filestore OSDs created years ago using
"ceph-disk prepare".
The OSDs where the weight-set is not defined are Bluestore OSDs installed
recently using ceph-volume.
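For anyone wanting to check this on their own cluster, the compat weight-set can be inspected, and if really needed filled by hand; osd.12 and the weight value below are placeholders:
ceph osd crush weight-set ls        # lists "(compat)" and/or per-pool weight-sets
ceph osd crush weight-set dump      # shows which items have a compat weight
# the crush-compat balancer normally maintains this, but it can be created manually:
ceph osd crush weight-set create-compat
ceph osd crush weight-set reweight-compat osd.12 1.81940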
After restarting several OSD daemons in our Ceph cluster a couple of days ago, a
couple of our OSDs won't come online. The services start and then crash with the
below error. We have one pg marked as incomplete, and it will not peer. The pool
is erasure coded, 2+1, currently set to size=3, min_size=2. The
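The usual starting point for digging into an incomplete PG like that is along these lines; the pg id 7.1a and the pool name are placeholders:
ceph health detail | grep -A2 incomplete
ceph pg 7.1a query > pg_query.json    # inspect the recovery_state and peer_info sections
ceph osd pool get <ec-pool> min_size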
Hello Kai,
there are tons of bad SSDs on the market. You cannot buy from any brand without
getting some bad and maybe some good models.
Here, as an example, are some performance values from Intel:
Intel SSD DC S4600 960GB, 2.5", SATA (SSDSC2KG960G701)
jobs=1 - iops=23k
jobs=5 - iops=51k
Intel SSD D3-S4510
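For comparison, the kind of synchronous 4k write test that blog post is about looks roughly like this; the device name is a placeholder and the test overwrites the raw device:
fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
    --numjobs=1 --iodepth=1 --runtime=60 --time_based \
    --group_reporting --name=journal-test
# the jobs=5 numbers above correspond to --numjobs=5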
On Tue, Mar 12, 2019 at 11:09 PM Vikas Rana wrote:
>
> Hi there,
>
> We are replicating an RBD image from the primary to the DR site using RBD mirroring.
>
> On the primary, we were using 10.2.10.
Just a note that Jewel is end-of-life upstream.
> The DR site is Luminous and we promoted the DR copy to test th
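For reference, a promote/demote cycle on a mirrored image is roughly the following, run against the DR cluster; the pool/image names are placeholders:
rbd mirror image promote mypool/myimage    # make the DR copy writable for the test
# afterwards, demote it and resync before failing back:
rbd mirror image demote mypool/myimage
rbd mirror image resync mypool/myimage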
Hi,
I set the config on every OSD and checked whether all OSDs send beacons
to the monitors.
The result shows that only some of the OSDs send beacons, and the monitor
receives all of the beacons that those OSDs send out.
But why don't some OSDs send beacons?
huang jun wrote on Wed, Mar 13, 2019 at 11:02 PM:
> sorry for
An OSD will not send beacons to the mon if it's not in the ACTIVE state,
so you could turn on debug_osd=20 on one OSD to see what is going on.
Zhenshi Zhou wrote on Thu, Mar 14, 2019 at 11:07 AM:
>
> What's more, I find that the OSDs don't send beacons all the time; some OSDs
> send beacons
> for a period of time and then s
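A sketch of that debug step, using osd.5 as the example id and the default log path:
ceph tell osd.5 injectargs '--debug_osd 20'
# watch the tick/beacon path in its log
grep -E 'beacon|tick_without_osd_lock' /var/log/ceph/ceph-osd.5.log | tail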
Hi,
One of the logs shows the beacon not being sent, as below:
2019-03-14 12:41:15.722 7f3c27684700 10 osd.5 17032 tick_without_osd_lock
2019-03-14 12:41:15.722 7f3c27684700 20 osd.5 17032 can_inc_scrubs_pending
0 -> 1 (max 1, active 0)
2019-03-14 12:41:15.722 7f3c27684700 20 osd.5 17032 scrub_time_perm
I have a cluster where for some OSDs the weight-set is defined, while for
other OSDs it is not [*].
The OSDs with the weight-set defined are Filestore OSDs created years ago using
"ceph-disk prepare".
The OSDs where the weight-set is not defined are Bluestore OSDs installed
recently using ceph-volume.
[root@c-mon-01 /]# ceph osd df tree
ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE VAR PGS TYPE NAME
-1       1.95190        - 1.95TiB 88.4GiB 1.87TiB    0   0   - root default
-2             0        -      0B      0B      0B    0   0   - rack Rack15-PianoAlto
-3       0.39038
What's the output of 'ceph mon feature ls'?
From the code, maybe the mon features do not contain luminous:
void OSD::send_beacon(const ceph::coarse_mono_clock::time_point& now)
{
  const auto& monmap = monc->monmap;
  // send beacon to mon even if we are just connected, and the
  m
On 3/14/19 12:42 PM, Massimo Sgaravatto wrote:
> [root@c-mon-01 /]# ceph osd df tree
> ID CLASS WEIGHT  REWEIGHT SIZE    USE     AVAIL   %USE VAR PGS TYPE NAME
> -1       1.95190        - 1.95TiB 88.4GiB 1.87TiB    0   0   - root default
> -2             0        -      0B      0B      0B    0   0   -
# ceph mon feature ls
all features
supported: [kraken,luminous,mimic,osdmap-prune]
persistent: [kraken,luminous,mimic,osdmap-prune]
on current monmap (epoch 2)
persistent: [none]
required: [none]
huang jun wrote on Thu, Mar 14, 2019 at 1:50 PM:
> what's the output of 'ceph mon f
ok, if this is a **test environment**, you can try
for f in 'kraken,luminous,mimic,osdmap-prune'; do
ceph mon feature set $f --yes-i-really-mean-it
done
If it is a production environment, you should evaluate the risk first, and
maybe set up a test cluster for testing first.
Zhenshi Zhou wrote on Thu, Mar 14, 2019:
Thanks
I will try to set the weight-set for the new OSDs
But I am wondering what I did wrong to end up in such a scenario.
Is it normal that a newly created OSD has no weight-set defined?
Who is supposed to initially set the weight-set for an OSD?
Thanks again, Massimo
On Thu, Mar 14, 2019 at 6:52 AM
sorry, the script should be
for f in kraken luminous mimic osdmap-prune; do
ceph mon feature set $f --yes-i-really-mean-it
done
huang jun wrote on Thu, Mar 14, 2019 at 2:04 PM:
>
> ok, if this is a **test environment**, you can try
> for f in 'kraken,luminous,mimic,osdmap-prune'; do
> ceph mon feature set $
On 3/14/19 1:11 PM, Massimo Sgaravatto wrote:
> Thanks
> I will try to set the weight-set for the new OSDs
> But I am wondering what I did wrong to end up in such a scenario.
You don't. You just use legacy. But why? Jewel clients? Old kernels?
> Is it normal that a newly created OSD has no weight-set defi
Hi huang,
It's a pre-production environment. If everything is fine, I'll use it for
production.
My cluster is on Mimic; should I set all the features you listed in the
command?
Thanks
huang jun wrote on Thu, Mar 14, 2019 at 2:11 PM:
> sorry, the script should be
> for f in kraken luminous mimic osdmap-prun
Ok, understood, thanks!
So if I try to run the balancer in the current compat mode, should this
also define the weight-set for the new OSDs?
But if I try to create a balancer plan, I get an error [*] (while it worked
before adding the new OSDs).
[*]
[root@c-mon-01 balancer]# ceph balancer st
On 3/14/19 1:53 PM, Massimo Sgaravatto wrote:
> So if I try to run the balancer in the current compat mode, should
> this also define the weight-set for the new OSDs?
> But if I try to create a balancer plan, I get an error [*] (while it
> worked before adding the new OSDs).
Nope, balancer creates
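For completeness, the crush-compat balancer workflow that maintains those weight-sets is roughly the following; the plan name "myplan" is arbitrary:
ceph balancer mode crush-compat
ceph balancer eval                 # score of the current distribution
ceph balancer optimize myplan
ceph balancer show myplan          # inspect before applying
ceph balancer execute myplan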