Hi all,
RocksDB failed to open when the ceph-osd process was restarted after
unplugging the OSD data disk, with Ceph 14.2.5 on CentOS 7.6.
1) After unplugging the OSD data disk, the ceph-osd process exited.
-3> 2020-07-13 15:25:35.912 7f1ad7254700 -1 bdev(0x559d1134f
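A minimal sketch of one way to check the BlueStore/RocksDB state once the disk
is re-attached (the OSD id 0 and the default data path are placeholders; the
OSD must be stopped first):
# systemctl stop ceph-osd@0
# ceph-bluestore-tool fsck --path /var/lib/ceph/osd/ceph-0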
I suppose that write operations may use the WAL more when the block size is
small.
rainning wrote on Thu, Jul 16, 2020 at 10:39 AM:
I tested osd bench with different block sizes: 1MB, 512KB, 256KB, 128KB, 64KB,
and 32KB. osd.2 is from the cluster whose OSDs have the better 4KB osd bench,
and osd.30 is from the cluster whose OSDs have the lower 4KB osd bench.
Before 32KB, osd.30 was better than osd.2; however, there was a big dro
Hi Zhenshi,
I did try with bigger block sizes. Interestingly, the one whose 4KB osd bench
was lower performed slightly better in the 4MB osd bench.
Let me try some other bigger block sizes, e.g. 16K, 64K, 128K, 1M, etc., to see
if there is any pattern.
Moreover, I did compare the two SSDs; they respec
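A minimal sketch of such a sweep with osd bench (osd.2 is just the id used in
this thread; the first argument is the total bytes, the second is the block
size in bytes, and the totals are kept within the default osd bench limits):
# ceph tell osd.2 bench 12288000 4096
# ceph tell osd.2 bench 104857600 65536
# ceph tell osd.2 bench 1073741824 1048576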
Maybe you can try writing with bigger block size and compare the results.
For BlueStore, writes take one of two paths: one is COW (copy-on-write), the
other is RMW (read-modify-write). AFAIK only RMW uses the WAL, in order to
protect the data if a write is interrupted.
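If it helps to dig further, a minimal sketch of how to inspect the thresholds
involved in whether a write takes the deferred (WAL) path (run on the OSD's
host; the OSD id and the _ssd variants are assumptions, use _hdd on spinners):
# ceph daemon osd.2 config get bluestore_prefer_deferred_size_ssd
# ceph daemon osd.2 config get bluestore_min_alloc_size_ssd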
rainning wrote on Wed, Jul 15, 2020 at 11:04 PM:
> Hi Zhenshi, thanks very much
Hi Will,
I once changed the monitor IPs on a Nautilus cluster. What I did was change
the monitor information in the monmap, one monitor at a time.
Both the old and new IPs could communicate with each other, of course.
If it's a new cluster, I suggest deploying a new cluster instead of
changing the monitor IPs.
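A rough sketch of that monmap-based procedure, done for one monitor at a time
(the monitor name "a", the new address, and the paths are placeholders; keep
quorum while doing this):
# ceph mon getmap -o /tmp/monmap
# monmaptool --rm a /tmp/monmap
# monmaptool --add a 192.168.1.10:6789 /tmp/monmap
# systemctl stop ceph-mon@a
# ceph-mon -i a --inject-monmap /tmp/monmap
# systemctl start ceph-mon@a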
Amit Ghadge wrote on 2
Hi Liam, All,
We have also run into this bug:
https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/PCYY2MKRPCPIXZLZV5NNBWVHDXKWXVAG/
Like you, we are also running Octopus 15.2.3.
Downgrading the RGWs at this point is not ideal, but if a fix isn't found
soon, we might have to.
Has a bug
On 7/15/20 9:58 AM, Dan van der Ster wrote:
Hi Mark,
On Mon, Jul 13, 2020 at 3:42 PM Mark Nelson wrote:
Hi Frank,
So the osd_memory_target code will basically shrink the size of the
bluestore and rocksdb caches to attempt to keep the overall mapped (not
rss!) memory of the process below the
I need to change the network my monitors are on. It seems this is not a trivial
thing to do. Are there any up-to-date instructions for doing so on a
cephadm-deployed cluster?
I’ve found some steps in older versions of the docs, but I'm not sure if these
are still correct - they mention using the ce
One more thing: it seems that the WAL does have more impact on small writes.
---Original---
From: "Zhenshi Zhou"
Hi Zhenshi, thanks very much for the reply.
Yes, I know it is odd that BlueStore is deployed with only a separate DB
device but not a WAL device. The cluster was deployed in k8s using Rook. I
was told it was because the Rook version we used didn't support that.
Moreover, the comparison was made on
Hi Mark,
On Mon, Jul 13, 2020 at 3:42 PM Mark Nelson wrote:
>
> Hi Frank,
>
>
> So the osd_memory_target code will basically shrink the size of the
> bluestore and rocksdb caches to attempt to keep the overall mapped (not
> rss!) memory of the process below the target. It's sort of "best
> effor
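For reference, a minimal sketch of setting and inspecting the target (the
4 GiB value and osd.0 are example placeholders):
# ceph config set osd osd_memory_target 4294967296
# ceph daemon osd.0 config get osd_memory_target
# ceph daemon osd.0 dump_mempools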
Hi all,
Does anyone know when we can expect Crimson/SeaStore to be "Production
Ready" and/or what level of performance increase can be expected?
thx
Frank
You can try: ceph mon set-addrs a [v2:1.2.3.4:1112,v1:1.2.3.4:],
https://docs.ceph.com/docs/nautilus/rados/configuration/msgr2/#msgr2-ceph-conf
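For reference, the full form on that page looks something like this (the
monitor name "a" and the default ports 3300/6789 are placeholders):
# ceph mon set-addrs a [v2:1.2.3.4:3300,v1:1.2.3.4:6789]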
On Wed, Jul 15, 2020 at 4:43 PM Will Payne wrote:
> I need to change the network my monitors are on. It seems this is not a
> trivial thing to do.
Hi Dan,
I now added it to ceph.conf and restarted all MONs. The running config now
shows as:
# ceph config show mon.ceph-01 | grep -e NAME -e mon_osd_down_out_subtree_limit
NAME                            VALUE  SOURCE  OVERRIDES  IGNORES
mon_osd_down_out_subtree_limit
Hello,
just delete the old one and deploy a new one.
Make sure to have a quorum (2 of 3 or 3 of 5) online while doing so.
--
Martin Verges
Managing director
Mobile: +49 174 9335695
E-Mail: martin.ver...@croit.io
Chat: https://t.me/MartinVerges
croit GmbH, Freseniusstr. 31h, 81247 Munich
CEO: Ma
Setting it in ceph.conf is exactly what I wanted to avoid :). I will give it a
try though. I guess this should become an issue in the tracker?
Is it, by any chance, required to restart *all* daemons, or should restarting
the MONs be enough?
Best regards,
=
Frank Schilder
AIT Risø Campus
Bygning
Hi Dan,
it still does not work. When I execute
# ceph config set global mon_osd_down_out_subtree_limit host
2020-07-15 09:17:11.890 7f36cf7fe700 -1 set_mon_vals failed to set
mon_osd_down_out_subtree_limit = host: Configuration option
'mon_osd_down_out_subtree_limit' may not be modified at runt
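As a side note, a quick way to check whether an option can be changed at
runtime at all (a sketch; the exact output format varies by release):
# ceph config help mon_osd_down_out_subtree_limit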
Hi Bobby,
Thank you for your answer. You are saying "Whenever there is a change in the
map, the monitor will inform the client." Can you please give me a link to
the Ceph documentation where I can read about these details? For me, it is
logical to have the monitors update the clients about changes in th
Dear all,
a few more results regarding the virtio version, RAM size, and Ceph RBD caching.
I got some wrong information from our operators. We are using
virtio-win-0.1.171 and found that this version might have a regression that
affects performance:
https://forum.proxmox.com/threads/big-discovery-o
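For anyone reproducing this, a minimal sketch of the client-side RBD cache
settings involved, placed in ceph.conf on the hypervisor (the values shown are
assumed defaults):
[client]
rbd cache = true
rbd cache writethrough until flush = true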
Hi Budai,
When you ask "*how often is the client retrieving the cluster map?*", the
obvious answer is that there is no fixed "often": whenever there is
a change in the map, the monitor will inform the client.
I think you need to read about the CRUSH algorithm in Ceph, because that
will
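To see this in practice: the client computes object placement itself with
CRUSH, and you can display the computed mapping for any object (the pool and
object names below are placeholders):
# ceph osd map rbd some-object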
I deployed clusters either with a separate DB/WAL or with DB/WAL/data
together. I never tried having only a separate DB.
AFAIK the WAL does have an effect on writes, but I'm not sure it could account
for twice the bench value. Hardware and
network environment are also important factors.
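As a sketch, an OSD with both a separate DB and a separate WAL could be
deployed with ceph-volume roughly like this (device paths are placeholders):
# ceph-volume lvm create --bluestore --data /dev/sdb --block.db /dev/nvme0n1p1 --block.wal /dev/nvme0n1p2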
rainning wrote on 202
Hi,
Is there any Ceph-related event happening at Red Hat Summit 2020 today?
BR
Bobby !
Hi all,
I am wondering if there is any performance comparison of osd bench with and
without a separate WAL device, given that a separate DB device on SSD is
deployed in both cases.
The reason I am asking this question is that we have two clusters, and OSDs in
one hav
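One quick way to confirm which layout each OSD actually has (a sketch; the osd
id 2 is taken from this thread):
# ceph osd metadata 2 | grep -e bluefs_dedicated_db -e bluefs_dedicated_wal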
Hrmm that is strange.
We set it via /etc/ceph/ceph.conf, not the config framework. Maybe try that?
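Concretely, a minimal sketch of what that might look like (using the [mon]
section is an assumption, since it is a monitor-side option; [global] should
work too), followed by a restart of the MONs:
[mon]
mon osd down out subtree limit = host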
-- dan
On Wed, Jul 15, 2020 at 9:59 AM Frank Schilder wrote:
>
> Hi Dan,
>
> it still does not work. When I execute
>
> # ceph config set global mon_osd_down_out_subtree_limit host
> 2020-07-15 09