In version 10.2.2, fio first runs at 2000 IOPS; then I interrupt fio and
run it again, and it runs at 6000 IOPS.
But in version 0.94, fio always runs at 6000 IOPS, with or without repeating
fio.
What is the difference between these two versions in this respect?
My config is:
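The actual job file is not preserved in this snippet; purely as an illustration, a fio job against an RBD image using the rbd ioengine could look like the following (every name and value is a placeholder, not the poster's configuration):

; hypothetical fio job using the rbd ioengine; all names and values are placeholders
[global]
ioengine=rbd
clientname=admin
pool=hot-pool
rbdname=testimg
rw=randwrite
bs=4k
iodepth=32
runtime=60
time_based

[rbd-randwrite]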
Hello cephers, I deployed a cluster from the ceph-10.2.2 sources. Since it is a
source deployment, I did it without ceph-deploy.
How do I deploy a BlueStore Ceph cluster without ceph-deploy? There is no official
online documentation.
Where are the relevant documents?
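For what it is worth, BlueStore is still experimental in the Jewel (10.2.x) line, so a manual, ceph-deploy-free setup would look roughly like the sketch below; the device name and the experimental-feature switch are assumptions based on Jewel-era defaults, not an official procedure:

# ceph.conf: in Jewel, BlueStore must be enabled explicitly as an experimental feature
[global]
enable experimental unrecoverable data corrupting features = bluestore rocksdb

[osd]
osd objectstore = bluestore

# prepare and activate an OSD on an empty disk (the device name is only an example)
ceph-disk prepare --bluestore /dev/sdb
ceph-disk activate /dev/sdb1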
But the 0.94 version works fine (in fact, IO was greatly improved).
This problem occurs only in version 10.x.
As you said, most of the IO is going to the cold storage, and IO is slow.
What can I do to improve the IO performance of cache tiering in
version 10.x? How does cache tiering w
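If it helps, the knobs most directly related to this in 10.x are the per-OSD promotion throttles; a hedged example of raising them in the [osd] section of ceph.conf (the numbers are placeholders, not recommendations):

# [osd] section of ceph.conf; values are only examples
osd_tier_promote_max_objects_sec = 200
osd_tier_promote_max_bytes_sec = 52428800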
I have configured "osd_tier_promote_max_bytes_sec" in the [osd] section of
ceph.conf, but it still has no effect. When I run --show-config I see that the
value has not changed.
[root@node01 ~]# cat /etc/ceph/ceph.conf | grep tier
osd_tier_promote_max_objects_sec=20
osd_tier_promote_max_bytes_s
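One way to see the value the running OSDs actually use, and to change it without a restart, is the admin socket and injectargs (osd.0 and the number are placeholders):

# read the running value from one OSD's admin socket (run on the node hosting osd.0)
ceph daemon osd.0 config show | grep osd_tier_promote

# push a new value into all running OSDs without restarting them
ceph tell osd.* injectargs '--osd_tier_promote_max_bytes_sec 52428800'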
thank you very much!
On Monday, July 18, 2016 5:31 PM, Oliver Dzombic wrote:
Hi,
everything is here:
http://docs.ceph.com/docs/jewel/
except
osd_tier_promote_max_bytes_sec
and some other things, but there is enough there to make it work.
--
Mit freundlichen Gruessen / Best regards
Where can I find the basic documentation? The official website does not keep the documentation up to date.
On Monday, July 18, 2016 5:16 PM, Oliver Dzombic wrote:
Hi
I suggest you read some basic documentation about that.
osd_tier_promote_max_bytes_sec = how many bytes per second are promoted to the tier
ceph osd pool set ssd-pool t
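To make the distinction concrete, a hedged example (the pool name and numbers are only illustrations): the first is a per-OSD option in ceph.conf that throttles how fast objects are promoted into the cache tier, the second is a per-pool property the tiering agent uses to decide when to flush and evict:

# ceph.conf, [osd] section: limit how fast objects are promoted into the cache tier
osd_tier_promote_max_bytes_sec = 5242880

# cache-pool property: the size the tiering agent tries to keep the pool under
ceph osd pool set ssd-pool target_max_bytes 107374182400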
what is "osd_tier_promote_max_bytes_sec" in ceph.conf file and command "ceph
osd pool set ssd-pool target_max_bytes" are not the same ?
On Monday, July 18, 2016 4:40 PM, Oliver Dzombic wrote:
Hi,
osd_tier_promote_max_bytes_sec
is your friend.
--
Mit freundlichen Gruessen / Best regards
Hello cephers! I have a problem: I want to configure cache tiering for my Ceph
cluster in writeback mode. In ceph-0.94 it works as expected: IO goes first
through the hot pool and is then flushed to the cold pool. But in ceph-10.2.2 it
does not behave like this: IO is written to the hot pool and the cold pool at the same time. I
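For reference, a writeback cache tier like the one described is usually wired up with commands along these lines (the pool names follow the hot-pool/cold-pool naming above; the hit_set values are only examples):

ceph osd tier add cold-pool hot-pool
ceph osd tier cache-mode hot-pool writeback
ceph osd tier set-overlay cold-pool hot-pool
ceph osd pool set hot-pool hit_set_type bloom
ceph osd pool set hot-pool hit_set_count 1
ceph osd pool set hot-pool hit_set_period 3600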
is the "cleanup"label pull request (just removing something unneeded) will be
merged to master?___
What does the pull request label "cleanup" mean?
Hi everyone, I have a problem. I want to see statistics for more than one image,
but only one image is shown.
Here are the steps:
I create two images:
[root@node173 ~]# rbd -p test ls
test_10G
test_20G
I export these two images with iscsi-tgtd, and the iscsi-tgtd config file looks
like this:
include /etc/t
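As a hedged sketch of what such a tgt target definition might look like (the IQNs are placeholders, and this assumes tgt was built with rbd backing-store support):

<target iqn.2016-07.example.com:test-10g>
    bs-type rbd
    backing-store test/test_10G
</target>

<target iqn.2016-07.example.com:test-20g>
    bs-type rbd
    backing-store test/test_20G
</target>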
The address of the developer mailing list (ceph-devel) on the official website is wrong.
Can someone give me the correct address to subscribe? Thanks!