Hi all,

Our team at Cloud Foundry is building a RiakCS service for CF users, and one
of our deployments is seeing an issue with deleting objects from the
blobstore.

We noticed that our disk usage was approaching 100%, so we deleted some of
the stale objects in the blobstore using s3cmd. When we run `s3cmd du`, it
appears that we successfully freed up space, but when we run `df` on the
RiakCS host, disk usage is still close to 100%.
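
For reference, the checks we ran look roughly like this (the bucket name
and mount point below are placeholders for our actual values):

    $ s3cmd du s3://some-bucket    # usage drops after the deletes
    $ df -h /var/vcap/store       # on the RiakCS host: still ~100% used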

We understand now that Riak removes deleted keys asynchronously, but we
haven't succeeded in configuring GC to be more responsive to deletions,
despite tweaking several parameters. On Friday we uploaded several files and
deleted them, hoping the space would be reclaimed by Monday; when we came
back after the weekend, garbage collection still had not run. If it helps,
you can look at our configuration templates for Riak
<https://github.com/cloudfoundry/cf-riak-cs-release/blob/master/jobs/riak-cs/templates/riak.app.config.erb>
and RiakCS
<https://github.com/cloudfoundry/cf-riak-cs-release/blob/master/jobs/riak-cs/templates/riak_cs.app.config.erb>.
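
For concreteness, the GC-related knobs we have been tweaking in
riak_cs.app.config look roughly like this, if we're reading the docs
right (the values shown are illustrative, not what we finally settled on):

    {riak_cs, [
        %% seconds after a delete before blocks become eligible for GC
        %% (the default is 86400, i.e. 24 hours)
        {leeway_seconds, 300},
        %% how often the GC daemon wakes up to collect eligible blocks
        %% (the default is 900 seconds)
        {gc_interval, 900}
        %% ...other riak_cs settings unchanged...
    ]}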

Has anyone else encountered this issue, where garbage collection appears
never to occur? It would be great to get help configuring RiakCS so that GC
runs more often. Is there perhaps a way to trigger GC manually when the disk
is filling up?
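
From the docs it looks like the riak-cs-gc script might allow this; if so,
is something along these lines the recommended approach?

    # force an immediate GC batch on the RiakCS node
    riak-cs-gc batch

    # check the progress of the batch
    riak-cs-gc status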

Thanks,
David & Raina
CF Services Team