Same problem:
  “Versions: latest, octopus, nautilus”
This week I had to look up Jewel, Luminous, and Mimic docs and had to do so at
GitHub.
>
>> Hello,
>> maybe I missed the announcement, but why is the documentation for the
>> older Ceph versions no longer accessible on docs.ceph.com?
>
> It's
On 11/14/2020 10:56 AM, Martin Palma wrote:
> Hello,
> maybe I missed the announcement, but why is the documentation for the
> older Ceph versions no longer accessible on docs.ceph.com?
The UI has changed because we're hosting the docs on readthedocs.com now. See
the dropdown in the lower right corner.
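For what it's worth, release-pinned URLs on the new hosting appear to follow
the pattern https://docs.ceph.com/en/<release>/, and for releases that are no
longer built there, the doc/ tree of the matching branch on GitHub is one
fallback; for example:

  https://docs.ceph.com/en/nautilus/
  https://docs.ceph.com/en/octopus/
  https://github.com/ceph/ceph/tree/mimic/doc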
Hello All,
I am looking to understand some of the internal details of how multisite is
architected. On the Ceph user list I see mentions of metadata logs, bucket
index shard logs, etc., but I could not find any documentation on how
multisite works using these.
Could someone please
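For context, the logs being asked about can at least be inspected on a running
multisite setup with radosgw-admin; roughly (the bucket name below is just a
placeholder):

  radosgw-admin sync status                    # overall metadata/data sync state
  radosgw-admin mdlog list                     # metadata (users/buckets) change log
  radosgw-admin datalog list                   # data change log entries per bucket-index shard
  radosgw-admin bilog list --bucket=mybucket   # bucket index log for a single bucket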
Hmm, so maybe your hardware is good enough that the cache is actually not helping?
That is not unheard of. I don't really see any improvement from caching to
begin with. On the other hand, a synthetic benchmark is not really a test that
exercises the strengths of the cache (in particular, write merges w
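One simple A/B check (a sketch; the pool and image names are made up, and the
same workload should be used for both runs) is to flip only the client-side
cache in ceph.conf between two otherwise identical runs:

  [client]
  rbd cache = true     # run 1
  #rbd cache = false   # run 2

  rbd bench --io-type write --io-size 4K --io-threads 16 --io-total 1G rbd/testimg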
On Thu, 2020-11-19 at 13:38 -0500, Eric Ivancich wrote:
> Hey Timothy,
>
> Did you ever resolve this issue, and if so, how?
Unfortunately, I was never able to resolve it; the bucket(s) in question had to
be recreated and then removed.
>
> > Thank you. I looked through both logs and noticed thi
Do you have results for the same test without caching?
I have seen periodic stalls in every RBD IOPS benchmark on Ceph. The benchmarks
create IO requests much faster than the OSDs can handle them. At some point all
queues run full and you start seeing slow ops on the OSDs.
I would also prefer if IO
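For what it's worth, the stalls usually coincide with slow/blocked op warnings;
while the benchmark runs, something like this shows them (osd.0 is just an
example id, and the daemon commands have to be run on the node hosting that OSD):

  ceph health detail                     # which OSDs are reporting slow ops
  ceph daemon osd.0 dump_blocked_ops     # ops currently blocked on that OSD
  ceph daemon osd.0 dump_historic_ops    # recently completed slow ops with per-stage timings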
Hey all,
We will be having a Ceph science/research/big cluster call on Wednesday
November 25th. If anyone wants to discuss something specific they can
add it to the pad linked below. If you have questions or comments you
can contact me.
This is an informal open call of community members most
With rbd cache = false, running the same two tests, the read IOPS are
stable (this is a new cluster with no other load):
109 274471 2319.41 9500308.72
110 276846 2380.81 9751782.65
111 278969 2431.40 9959023.39
112 280924 2287.21 9368428.23
113 282886 2227.82
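Assuming those columns are the usual rbd bench output (elapsed seconds,
cumulative ops, ops/sec, bytes/sec), ~2300-2400 ops/s at 4K works out to the
~9.4-9.9 MB/s shown in the last column, so the numbers look internally consistent.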
Hi,
I am using a Ceph development cluster started with the vstart.sh script. I would
like to measure/benchmark read and write performance (benchmarking Ceph at a
low level), and for that I want to use the fio tool.
Can we use fio on the development cluster? AFAIK, we can. I have seen
the fio option in the
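For reference, fio's rbd ioengine can be pointed at a vstart cluster too; a
minimal job file might look like the one below (the pool and image names are
placeholders, the image has to be created first, and CEPH_CONF has to point at
the ceph.conf that vstart generates in the build directory):

  # rbd create --size 1G rbd/fio-test
  # CEPH_CONF=$PWD/ceph.conf fio rbd-test.fio

  [global]
  ioengine=rbd
  clientname=admin
  pool=rbd
  rbdname=fio-test
  bs=4k
  iodepth=32
  runtime=60
  time_based

  [randwrite]
  rw=randwrite

  [randread]
  stonewall
  rw=randread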
Hi All,
We're testing the rbd cache settings for OpenStack (Ceph 14.2.5, BlueStore,
3-replica) and found an odd problem:
1. Setting librbd cache
[client]
rbd cache = true
rbd cache size = 16777216
rbd cache max dirty = 12582912
rbd cache target dirty = 8388608
rbd cache max dirty age = 1
r
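If it helps with debugging, the values librbd actually picked up can be checked
at runtime through the client admin socket (assuming an admin socket is
configured for the client; the path below is just an example):

  ceph --admin-daemon /path/to/client.asok config show | grep rbd_cache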
Thomas,
This is controlled by the MDS option mds_cap_revoke_eviction_timeout (300s
by default). If a client crashes or hangs for a long time, the cluster
will evict that client.
This prevents other clients from hanging while waiting for locks. If you expect
the client to recover later, you can set the timeout to zero.
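For reference, setting that cluster-wide would be something along these lines
(untested; applies to all MDS daemons):

  ceph config set mds mds_cap_revoke_eviction_timeout 0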
Hoping this helps
On Fri, 20 Nov 2020 at 10:17, Bernhard Krieger wrote:
> Hello,
>
> today I came across a strange behaviour.
> After stopping an OSD, I'm not able to restart or stop/start a radosgw
> daemon.
> The boot process gets stuck until I have started the OSD again.
>
>
> Specs:
> 3 ceph nodes
>
What is
Hello,
today I came across a strange behaviour.
After stopping an OSD, I'm not able to restart or stop/start a radosgw
daemon.
The boot process gets stuck until I have started the OSD again.
Specs:
3 ceph nodes
2 radosgw
nautilus 14.2.13
CentOS7
Steps:
* stopping radosgw daemon on rgw
* s
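In case it helps to reproduce or diagnose this, a minimal sequence would be
something like the following (the unit names and OSD id are examples; adjust
them to your deployment):

  systemctl stop ceph-osd@0                           # stop one OSD
  ceph status                                         # check cluster state afterwards
  ceph health detail
  systemctl restart ceph-radosgw@rgw.$(hostname -s)   # try to restart the gateway
  journalctl -fu ceph-radosgw@rgw.$(hostname -s)      # watch where startup hangs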