Thanks! I'll stay tuned ;)
Until then I'll patch my code and pray not to introduce any issues here..
The patch is below for anyone to try (just send the decoded string to be
signed)
From a439f6126317a2b66fc08baf31b24b47e8ec4ed9 Mon Sep 17 00:00:00 2001
From: "Patrick F. Marques"
Date: Thu, 26
Hi Zeeshan,
We have typically seen this issue when we have lots of indexes created in
that instance. On a t2.medium machine we already have around 512+ indexes
created in the data folder. In such cases, if we try to create any new
indexes it takes a long time. Association of Index to Bucket is failing e
Hey Santi, Baskar,
Are you noticing increased CPU load as you create more and more indexes?
Running `riak-admin top -interval 2` a few times may bring something to light.
I’d look at how you could increase resources, or think more critically about how
you’re indexing data for Solr. Does the data share m
Hello Zeeshan,
We create a new set of buckets/indexes when a new tenant is created in a
multi-tenancy environment. The alternative approach for us is to use a single
set of indexes/buckets and filter by a tenant identifier. Before moving to the
second approach we want to confirm if we expect to see signific
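The "single index, filter by tenant" approach usually amounts to adding a tenant field to every indexed object and scoping each Solr query with a filter query. A minimal sketch of building such a query URL for Riak Search (the index name `shared_idx` and the field `tenant_id_s` are assumptions for illustration, not names from this thread):

```python
from urllib.parse import urlencode

def tenant_query(tenant_id, user_query, index="shared_idx"):
    """Build a Riak Search query path that scopes results to one
    tenant via a Solr filter query (fq)."""
    params = urlencode({
        "wt": "json",
        "q": user_query,
        # fq restricts results without affecting relevance scoring
        "fq": "tenant_id_s:%s" % tenant_id,
    })
    return "/search/query/%s?%s" % (index, params)

print(tenant_query("acme", "name_s:widget"))
```

A filter query is also cached by Solr independently of the main query, which tends to help when every request repeats the same tenant clause.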
The second approach would most probably cut down on index creation time.
However, you should definitely spend a little time testing it out and
benchmarking accordingly. And, as I mentioned, please take a look at CPU load
as indexes are created, as well as experiment with solrconfig and increasi
Hi Niels
Object lifecycle management is not yet supported in Riak CS. It is however
a feature that we plan on supporting in the future. We will be planning our
upcoming CS releases shortly and I will keep you posted on the timeline.
Let me know if you have any questions or concerns. Thanks!
--
I would love to see this make it in the next (major?) riak-cs release. It would
put it in parity with riak’s bitcask auto-expiry.
On 3/9/15, 11:31 AM, "Seema Jethani" <se...@basho.com> wrote:
Hi Niels
Object lifecycle management is not yet supported in Riak CS. It is however a
feature
Sorry for being late. I thought I'd replied to you; it was a very
close one where I think you're hitting the same problem as this:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2015-February/016845.html
Riak CS often includes `=' in the uploadId of multipart uploads while S3
doesn't (
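The practical symptom is that a base64-style uploadId ending in `=` gets spliced into a query string unencoded, and the server then parses the request differently than the client signed it. A client-side workaround is to percent-encode the id before building the URL (the uploadId value below is made up for illustration):

```python
from urllib.parse import quote

upload_id = "VXBsb2FkSWQ="  # hypothetical id with base64 '=' padding
# safe="" forces '=' to be encoded as %3D instead of passed through
path = "/bucket/key?uploadId=" + quote(upload_id, safe="")
print(path)  # /bucket/key?uploadId=VXBsb2FkSWQ%3D
```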
Hi Niels,
Thank you for your interest in Riak CS.
Some questions about 400 - InvalidDigest:
- Can you confirm which MD5 was correct for the log
2015-02-11 16:34:17.854 [debug]
<0.23568.18>@riak_cs_put_fsm:is_digest_valid:326
Calculated = <<"pIFX5fpeo7+sPPNjtSBWBg==">>,
Reported = "0B
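For anyone comparing the two values in a log like this: the "Calculated" digest follows S3's Content-MD5 convention, i.e. the base64 encoding of the raw 16-byte MD5 digest, not the hex string. A quick sketch of how a client should produce the header value (the body here is hypothetical):

```python
import base64
import hashlib

def content_md5(body: bytes) -> str:
    """Base64 of the binary MD5 digest, as S3's Content-MD5 expects."""
    return base64.b64encode(hashlib.md5(body).digest()).decode("ascii")

print(content_md5(b"hello"))  # XUFAKrxLKna5cZ2REBfFkg==
```

A common cause of InvalidDigest is sending the 32-character hex form (`hexdigest()`) instead of the base64-of-binary form above.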