Hi all,
just a quick writeup. Over the last two days I was able to evict a lot
of those 0-byte files by setting "target_max_objects" to 2 million.
After we hit that limit I set it to 10 million for now. So a
target_dirty_ratio of 0.6 would mean evicting should start at around 6
million objects.
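For reference, these thresholds are set per pool; a minimal sketch of the commands involved, assuming a cache pool named "cache-pool" (pool name and values are placeholders for your own setup):

```shell
# Hypothetical cache pool name "cache-pool" -- substitute your own.
# Cap the number of objects the cache tier may hold:
ceph osd pool set cache-pool target_max_objects 10000000

# Flushing of dirty objects starts at this fraction of the cap:
ceph osd pool set cache-pool cache_target_dirty_ratio 0.6

# Eviction of clean objects starts at this fraction of the cap:
ceph osd pool set cache-pool cache_target_full_ratio 0.8
```

Strictly speaking, cache_target_dirty_ratio governs when flushing of dirty objects starts, while eviction of clean objects is driven by cache_target_full_ratio.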
Hi,
On 30.09.2016 09:45, Christian Balzer wrote:
> [...]
> Gotta love having (only a few years late) a test and staging cluster that
> is actually usable and comparable to my real ones.
>
> So I did create a 500GB image and filled it up.
> The cache pool is set to 500GB as well and will flu
I just love the sound of my own typing...
See inline, below.
On Fri, 30 Sep 2016 12:18:48 +0900 Christian Balzer wrote:
>
> Hello,
>
> On Thu, 29 Sep 2016 20:15:12 +0200 Sascha Vogt wrote:
>
> > Hi Burkhard,
> >
> > On 29/09/16 15:08, Burkhard Linke wrote:
> > > AFAIK evicting an object also flushes it to the backing storage, so
> > > evicting a live object should be ok.
On 30.09.2016 05:18, Christian Balzer wrote:
> On Thu, 29 Sep 2016 20:15:12 +0200 Sascha Vogt wrote:
>> On 29/09/16 15:08, Burkhard Linke wrote:
>>> AFAIK evicting an object also flushes it to the backing storage, so
>>> evicting a live object should be ok. It will be promoted again at the
>>> next access (or whatever triggers promotion in the caching mechanism).
Hello,
On Thu, 29 Sep 2016 20:15:12 +0200 Sascha Vogt wrote:
> Hi Burkhard,
>
> On 29/09/16 15:08, Burkhard Linke wrote:
> > AFAIK evicting an object also flushes it to the backing storage, so
> > evicting a live object should be ok. It will be promoted again at the
> > next access (or whatever triggers promotion in the caching mechanism).
Hi Burkhard,
On 29/09/16 15:08, Burkhard Linke wrote:
AFAIK evicting an object also flushes it to the backing storage, so
evicting a live object should be ok. It will be promoted again at the
next access (or whatever triggers promotion in the caching mechanism).
For the dead 0-byte files: Shou
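The flush-then-evict behaviour described above can be exercised by hand on a single object via the rados cache-tier subcommands; a sketch, with placeholder pool and object names:

```shell
# Flush one dirty object to the backing pool, then evict it from the
# cache tier ("cache-pool" and the object name are placeholders).
rados -p cache-pool cache-flush rbd_data.1234.0000000000000000
rados -p cache-pool cache-evict rbd_data.1234.0000000000000000
```

A subsequent read of the object should promote it back into the cache tier.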
Hi,
On 09/29/2016 02:52 PM, Sascha Vogt wrote:
Hi,
On 29.09.2016 13:45, Burkhard Linke wrote:
On 09/29/2016 01:34 PM, Sascha Vogt wrote:
We have a huge amount of short lived VMs which are deleted before they
are even flushed to the backing pool. Might this be the reason, that
ceph doesn't handle that particular thing well?
Hi,
On 29.09.2016 13:45, Burkhard Linke wrote:
> On 09/29/2016 01:34 PM, Sascha Vogt wrote:
>> We have a huge amount of short lived VMs which are deleted before they
>> are even flushed to the backing pool. Might this be the reason, that
>> ceph doesn't handle that particular thing well? Eg. w
Hi,
On 29.09.2016 14:00, Burkhard Linke wrote:
> On 09/29/2016 01:46 PM, Sascha Vogt wrote:
Can you check/verify that the deleted objects are actually gone on the
backing pool?
>> How do I check that? Aka how to find out on which OSD a particular
>> object in the cache pool ends up in the backing pool?
Hi,
On 09/29/2016 01:46 PM, Sascha Vogt wrote:
A quick follow up question:
On 29.09.2016 13:34, Sascha Vogt wrote:
Can you check/verify that the deleted objects are actually gone on the
backing pool?
How do I check that? Aka how to find out on which OSD a particular
object in the cache pool ends up in the backing pool?
A quick follow up question:
On 29.09.2016 13:34, Sascha Vogt wrote:
>> Can you check/verify that the deleted objects are actually gone on the
>> backing pool?
How do I check that? Aka how to find out on which OSD a particular
object in the cache pool ends up in the backing pool?
Ie. I have a
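One way to answer that question is `ceph osd map`, which computes the placement of an object name in a given pool without touching the object itself; a sketch, with placeholder pool and object names:

```shell
# Where would this object live in the backing pool? Prints the pg id
# and the acting OSD set (pool and object name are placeholders).
ceph osd map hdd-pool rbd_data.1234.0000000000000000

# Does the object actually still exist there? stat fails if it is gone.
rados -p hdd-pool stat rbd_data.1234.0000000000000000
```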
Hi,
On 09/29/2016 01:34 PM, Sascha Vogt wrote:
*snipsnap*
We have a huge amount of short lived VMs which are deleted before they
are even flushed to the backing pool. Might this be the reason, that
ceph doesn't handle that particular thing well? Eg. when deleting an
object / RBD image which h
Hi,
On 29.09.2016 02:44, Christian Balzer wrote:
> I don't think the LOG is keeping the 0-byte files alive, though.
Yeah, don't think so either. The difference did stay at around the same
level.
> In general these are objects that have been evicted from the cache and if
> it's very busy you w
Hello,
On Wed, 28 Sep 2016 19:36:28 +0200 Sascha Vogt wrote:
> Hi Christian,
>
On 28.09.2016 16:56, Christian Balzer wrote:
> > 0.94.5 has a well-known and documented bug: it doesn't rotate the omap log
> > of the OSDs.
> >
> > Look into "/var/lib/ceph/osd/ceph-xx/current/omap/" of the cache tier and
> > most likely discover a huge "LOG" file.
Hi Christian,
On 28.09.2016 16:56, Christian Balzer wrote:
> 0.94.5 has a well-known and documented bug: it doesn't rotate the omap log
> of the OSDs.
>
> Look into "/var/lib/ceph/osd/ceph-xx/current/omap/" of the cache tier and
> most likely discover a huge "LOG" file.
You're right, it was
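A quick way to spot unrotated LOG files across all OSDs on a host (paths follow the stock filestore layout; adjust if yours differ):

```shell
# List omap LOG sizes for every OSD on this host; the oversized
# ones on the cache-tier OSDs stand out immediately.
ls -lh /var/lib/ceph/osd/ceph-*/current/omap/LOG
```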
On Wed, 28 Sep 2016 14:08:43 +0200 Sascha Vogt wrote:
> Hi all,
>
> we currently experience a few "strange" things on our Ceph cluster and I
> wanted to ask if anyone has recommendations for further tracking them
> down (or maybe even an explanation already ;) )
>
> Ceph version is 0.94.5 and we have a HDD based pool with a cache pool on
> NVMe SSDs in front of it.
Hi Burkhard,
thanks a lot for the quick response.
On 28.09.2016 14:15, Burkhard Linke wrote:
> someone correct me if I'm wrong, but removing objects in a cache tier
> setup results in empty objects which act as markers for deleting the
> object on the backing store. I've seen the same pattern you have
> described in the past.
Hi,
someone correct me if I'm wrong, but removing objects in a cache tier
setup results in empty objects which act as markers for deleting the
object on the backing store. I've seen the same pattern you have
described in the past.
As a test you can try to evict all objects from the cache
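The suggested test can be sketched as a single rados invocation, assuming a cache pool named "cache-pool" (a placeholder); it flushes all dirty objects to the backing pool and then evicts everything from the cache tier:

```shell
# Flush all dirty objects and evict everything from the cache tier
# ("cache-pool" is a placeholder for your cache pool's name).
rados -p cache-pool cache-flush-evict-all
```

Note this walks every object in the pool, so on a busy cluster it can take a while and temporarily increase load on the backing pool.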
Hi all,
we currently experience a few "strange" things on our Ceph cluster and I
wanted to ask if anyone has recommendations for further tracking them
down (or maybe even an explanation already ;) )
Ceph version is 0.94.5 and we have a HDD based pool with a cache pool on
NVMe SSDs in front of it.
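To get a first impression of how many of the cache-tier objects are the suspicious 0-byte ones, one can look at the filestore directly on one of the cache OSDs; a rough sketch (the OSD id is a placeholder and the stock filestore layout is assumed):

```shell
# Count 0-byte object files on a cache-tier OSD (hypothetical OSD id 12)
find /var/lib/ceph/osd/ceph-12/current -type f -size 0 | wc -l
```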