Hi Ceph Users,

Is there a way to minimize RocksDB compaction events so that they don't eat
all of the spinning disks' I/O and the OSD doesn't get marked down for
failing to send heartbeats to its peers?

Right now we see frequent high disk I/O utilization every 20-25 minutes,
during which the RocksDB ...
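For anyone digging into this, a rough way to confirm that compaction is what
lines up with the I/O spikes is to watch the OSD's RocksDB perf counters; and,
purely as a stop-gap assumption on my part rather than something suggested in
this thread, the heartbeat grace can be raised temporarily while
investigating. osd.0 below is just a placeholder:

# RocksDB counters (compaction work shows up in this section)
ceph daemon osd.0 perf dump | grep -A 40 '"rocksdb"'

# temporary experiment only: give OSDs more heartbeat headroom (default 20s)
ceph tell osd.* injectargs '--osd_heartbeat_grace 40'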
Hi everyone,
So if the kernel is able to reclaim those pages, is there still a point
in running the heap release on a regular basis?
Regards,
Frédéric.
On 09/04/2019 at 19:33, Olivier Bonvalet wrote:
> Good point, thanks!
> By creating memory pressure (by playing with vm.min_free_kbytes), memory ...
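For reference, these are the knobs being discussed; they can be exercised by
hand against a single daemon (osd.0 is just a placeholder):

# show tcmalloc heap statistics for one OSD
ceph tell osd.0 heap stats

# ask the daemon to hand unused heap pages back to the OS
ceph tell osd.0 heap release

# the kernel-side setting Olivier is playing with
sysctl vm.min_free_kbytes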
It's ceph-bluestore-tool.

On 4/10/2019 10:27 AM, Wido den Hollander wrote:
> On 4/10/19 9:25 AM, jes...@krogh.cc wrote:
>> On 4/10/19 9:07 AM, Charles Alva wrote:
>>> Hi Ceph Users,
>>> Is there a way to minimize RocksDB compaction events so that they
>>> won't use all the spinning disk I/O an...
Hello,

Another thing that crossed my mind, aside from the failure probabilities
caused by actual HDDs dying, is of course the little detail that most Ceph
installations will have WAL/DB (journal) on SSDs, the most typical ratio
being 1:4.

And given the current thread about compaction killing pur...
In fact the autotuner does it itself every time it tunes the cache size:
https://github.com/ceph/ceph/blob/master/src/os/bluestore/BlueStore.cc#L3630
Mark
On 4/10/19 2:53 AM, Frédéric Nass wrote:
> Hi everyone,
>
> So if the kernel is able to reclaim those pages, is there still a
> point in runni...
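For context, a quick way to check whether the autotuner is in play on a given
OSD (assuming a release that ships the BlueStore cache autotuner, e.g. Mimic
or Nautilus; osd.0 is a placeholder):

ceph daemon osd.0 config show | grep -E 'bluestore_cache_autotune|osd_memory_target'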
On Wed 10 Apr 2019 at 13:31, Eugen Block wrote:
>
> While --show-config still shows
>
> host1:~ # ceph --show-config | grep osd_recovery_max_active
> osd_recovery_max_active = 3
>
>
> It seems as if --show-config is not really up-to-date anymore?
> Although I can execute it, the option doesn't a
If you don't specify which daemon to talk to, it tells you what the
defaults would be for a random daemon started just now using the same
config as you have in /etc/ceph/ceph.conf.
I tried that, too, but the result is not correct:

host1:~ # ceph -n osd.1 --show-config | grep osd_recovery_max_active
On Wed 10 Apr 2019 at 13:37, Eugen Block wrote:
> > If you don't specify which daemon to talk to, it tells you what the
> > defaults would be for a random daemon started just now using the same
> > config as you have in /etc/ceph/ceph.conf.
>
> I tried that, too, but the result is not correct:
>
I always end up using "ceph --admin-daemon
/var/run/ceph/name-of-socket-here.asok config show | grep ..." to get what
is in effect now for a certain daemon.
Needs you to be on the host of the daemon of course.
Me too, I just wanted to try what OP reported. And after trying that,
I'll keep it t
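As a concrete version of the admin-socket tip above (the socket file name is a
placeholder; the actual name under /var/run/ceph/ depends on the daemon and
cluster name, so list the directory first):

ls /var/run/ceph/
ceph --admin-daemon /var/run/ceph/ceph-osd.1.asok config show | grep osd_recovery_max_active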
On Wed, Apr 10, 2019 at 1:46 AM Brayan Perera wrote:
>
> Dear All,
>
> Ceph Version : 12.2.5-2.ge988fb6.el7
>
> We are facing an issue with Glance, which has its backend set to Ceph: when
> we try to create an instance or volume out of an image, it throws a
> checksum error.
> When we use rbd export and u...
On 10/04/2019 07:46, Brayan Perera wrote:
> Dear All,
>
> Ceph Version : 12.2.5-2.ge988fb6.el7
>
> We are facing an issue with Glance, which has its backend set to Ceph: when
> we try to create an instance or volume out of an image, it throws a
> checksum error.
> When we use rbd export and use md5sum, the value is matching...
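For anyone trying to reproduce this outside the Glance code path, a minimal
comparison along these lines should show whether the data in the pool or the
checksum calculation is at fault (the pool name "images" and the image ID are
placeholder assumptions based on the usual Glance-on-RBD layout):

# md5 of the image as streamed out by rbd export
rbd -p images export <image-id> - | md5sum

# checksum Glance recorded for that image
openstack image show <image-id> -c checksum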
Hello,

I have been managing a Ceph cluster running 12.2.11; it was running 12.2.5
until the recent upgrade three months ago. We built another cluster running
13.2.5 and synced the data between the clusters, and we would now like to run
primarily off the 13.2.5 cluster. The data is all S3 buckets.
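If the sync between the clusters was done with RGW multisite (purely an
assumption on my part), the catch-up state can be checked before cutting over:

radosgw-admin sync status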
To summarize this discussion, there are two ways to change the configuration:

1. "ceph config" (*) is for permanently changing settings.
2. "ceph injectargs" is for temporarily changing a setting until the
   next restart of that daemon.

(*) "ceph config get" or --show-config shows the defaults/permanent settin...
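A short illustration of the difference, reusing osd_recovery_max_active from
earlier in the thread ("ceph config" needs a Mimic or newer cluster; osd.1 is
a placeholder):

# permanent: stored in the monitors' configuration database
ceph config set osd osd_recovery_max_active 1
ceph config get osd osd_recovery_max_active

# temporary: applied to the running daemon, lost on restart
ceph tell osd.1 injectargs '--osd_recovery_max_active 1'

# what the running daemon is actually using right now (run on the OSD's host)
ceph daemon osd.1 config show | grep osd_recovery_max_active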
On 4/9/2019 1:59 PM, Yury Shevchuk wrote:
> Igor, thank you, Round 2 is explained now.
>
> The main (aka block, aka slow) device cannot be expanded in Luminous; this
> functionality will be available after the upgrade to Nautilus. The WAL and
> DB devices can be expanded in Luminous.
>
> Now I have recreated osd2 once agai...
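For reference, the expansion being discussed is driven by ceph-bluestore-tool
against a stopped OSD; a minimal sketch, assuming the default mount path for
osd.2:

# show the current device labels/sizes backing the OSD
ceph-bluestore-tool show-label --path /var/lib/ceph/osd/ceph-2

# grow BlueFS onto the enlarged WAL/DB device (and, on Nautilus, the main device)
ceph-bluestore-tool bluefs-bdev-expand --path /var/lib/ceph/osd/ceph-2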
Dear Jason,

Thanks for the reply.

We are using Python 2.7.5. Yes, the script is based on the OpenStack code.

As suggested, we have tried chunk_size values of 32 and 64, and both give the
same incorrect checksum value.

We also tried copying the same image into a different pool, which resulted in
the same incorrect checksum.

Thanks & R...