So when I try to use pagination, it doesn't seem to be picking up my
continuation. I'm having trouble parsing the JSON I get back using stream=true
(and there is still a timeout), so I went to just using pagination. Perhaps I'm
doing it wrong (likely, it has been a long day), but Riak seems to be
As a workaround you could use streaming and pagination.
Request smaller pages of data (i.e., less than 60 seconds' worth) and use
streaming to get the results to your client sooner.
In HTTP this would look like:
http://127.0.0.1:8098/buckets/mybucket/index/test_bin/myval?max_results=1&stream=true
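With the Erlang client the same approach looks roughly like this (an untested
sketch; it assumes the 1.4 riakc client's get_index_range/6 options and the
#index_results_v1{} record from riakc.hrl):

-include_lib("riakc/include/riakc.hrl").

%% Page through a 2i range, following the continuation the server
%% returns until there isn't one, accumulating keys as we go.
page_index(Conn, Bucket, Index, Start, End) ->
    page_index(Conn, Bucket, Index, Start, End, undefined, []).

page_index(Conn, Bucket, Index, Start, End, Cont, Acc) ->
    Opts = [{max_results, 1000} |
            case Cont of
                undefined -> [];
                _         -> [{continuation, Cont}]
            end],
    {ok, Results} = riakc_pb_socket:get_index_range(
                        Conn, Bucket, Index, Start, End, Opts),
    Keys = Results#index_results_v1.keys,
    case Results#index_results_v1.continuation of
        undefined -> lists:append(lists:reverse([Keys | Acc]));
        Next      -> page_index(Conn, Bucket, Index, Start, End,
                                Next, [Keys | Acc])
    end.

Each page comes back quickly, so no single request runs into the 60 second
timeout.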
Currently, Yokozuna has some slight differences from stock Riak. Don't follow
the instructions in the public docs. Follow these instead:
https://github.com/basho/yokozuna/blob/master/docs/INSTALL.md
Yokozuna requires R15B02.
Eric
On Jul 26, 2013, at 1:52 AM, Erik Andersen wrote:
> Hi!
>
> I want to compile both Riak and Yokozuna from source
Hi guys,
I'm having this trouble after migrating to 1.4 from 1.3. I get a bunch of
notfound errors, even though I know the objects are there.
10> riakc_pb_socket:get_index(Conn, <<"iops">>, {integer_index, "time"},
1374787015, 1374787025).
{ok,{index_results_v1,[<<"C3e5e4ffc0001H0N525400f52ae
Thank you for looking into this. This is a major problem for our production
cluster, and we're in a bit of a bind right now trying to figure out a
workaround in the interim. It sounds like a MapReduce job might handle the
timeout properly, so hopefully we can do that in the meantime.
If there
Hi Sean,
I'm very sorry to say that you've found a featurebug.
There was a fix put in here https://github.com/basho/riak_core/pull/332
But that means that the default timeout of 60 seconds is now honoured. In the
past it was not.
As far as I can see, the 2i endpoint never accepted a timeout argument.
I should have mentioned that I also tried:
curl -H "X-Riak-Timeout: 26"
"http://127.0.0.1:8098/buckets/mybucket/index/test_bin/myval?timeout=26"; -i
but still receive the 500 error below exactly at the 60 second mark. Is this a
bug?
Secondary to getting this working at all, is this documented anywhere?
Sean -
The timeout isn't via a header, it's a query param -> &timeout=
You can also use stream=true to stream the results.
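Note the value is in milliseconds, so timeout=26 in your curl command would be
26 ms. With the Erlang client the equivalent (untested, assuming the 1.4 riakc
options list) would be something like:

%% 120000 ms is just an illustrative value; pick what your query needs.
{ok, Results} = riakc_pb_socket:get_index_eq(
                    Conn, <<"mybucket">>, {binary_index, "test"},
                    <<"myval">>,
                    [{timeout, 120000}]).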
- Roach
Sent from my iPhone
On Jul 26, 2013, at 3:43 PM, Sean McKibben wrote:
> We just upgraded to 1.4 and are having a big problem with some of our larger
> 2i queries.
We just upgraded to 1.4 and are having a big problem with some of our larger 2i
queries. We have a few key queries that take longer than 60 seconds (usually
about 110 seconds) to execute, but after going to 1.4 we can't seem to get
around a 60 second timeout.
I've tried:
curl -H "X-Riak-Timeou
Hi Folks,
We're looking to migrate our systems to a new database and we really like what
Riak provides. But we have one lingering question. We need to store and query a
lot of floating point data. How well does Riak perform when searching for data
that falls within a range? For example, if I wa
A vacuum command would be most appropriate...
On Fri, Jul 26, 2013 at 11:16 AM, Jordan West wrote:
> To clarify a bit further:
>
> If you started with a fresh 1.4 cluster (or explicitly changed the
> app.config setting) you are using a new on-disk format that applies to any
> backend used by Riak, including LevelDB.
Hi Deyan,
As mentioned, it is recommended to write reduce phases so that they can run
recursively [1], which lets you avoid having to use the 'reduce_phase_only_1'
parameter. Once you have a reduce function that behaves this way, you can tune
it by overriding the size of the reduce phase batch via the
'reduce_phase_batch_size' parameter.
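For example, a reduce that just sums behaves the same whether Riak runs it
once over all inputs or re-runs it over its own partial output (a minimal
untested sketch):

%% Inputs are either raw numbers emitted by the map phase or partial
%% sums produced by earlier invocations of this same reduce, so the
%% function can safely consume its own output.
reduce_sum(Values, _Arg) ->
    [lists:sum([V || V <- Values, is_number(V)])].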
Hi Christian,
Thank you for the detailed reply. It sheds light on some other issues that
we're having and I'm beginning to believe that our map_js_vm_count and
reduce_js_vm_count settings are set too low for our ring size.
I will definitely be learning some Erlang.
best regards,
Deyan
On Jul 2
Hot on the heels of 1.4.0 ...
After releasing 1.4.0 it was reported to us that if you tried to
switch to using protocol buffers in existing code and you were already
using protocol buffers 2.5.0 ... the client would crash.
Apparently Google has introduced breaking changes in Protocol Buffers
2.5.0.
Hi Kathleen,
Is there any way you could email me the entire crash dump files? Feel free
to email me directly and I'll post the analysis of it back to the list.
- Chris
On Fri, Jul 26, 2013 at 7:14 AM, kzhang wrote:
> Hi,
>
> Yeah, the cluster is running. Upon noticing the other four nodes being down,
To clarify a bit further:
If you started with a fresh 1.4 cluster (or explicitly changed the
app.config setting) you are using a new on-disk format that applies to any
backend used by Riak, including LevelDB. The new format is more compact
but, like MvM said, the majority of savings here probably came from the
backup/restore process.
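If I remember correctly (please double-check the 1.4 release notes before
relying on this), the format is controlled by the 'object_format' setting in
the riak_kv section of app.config:

{riak_kv, [
    %% v1 is the new, more compact on-disk object format; clusters
    %% upgraded in place keep the old format until this is changed.
    {object_format, v1}
]}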
Vladimir,
I have created a branch off the 1.3.2 release tag: mv-error-logging-hack
This has two changes:
- removes a late fix for database-level locking that was added in 1.3.2 (to
see if that locking issue was the source of the problem prior to its fix)
- adds tests of all background file operations and logs any errors
Hi Matthew,
Thanks for correcting my misunderstanding!
--
Dave Brady
- Original Message -
From: "Matthew Von-Maszewski"
To: "Dave Brady"
Cc: riak-users@lists.basho.com
Sent: Friday, July 26, 2013 4:31:36 PM GMT +01:00 Amsterdam / Berlin / Bern /
Rome / Stockholm / Vienna
Sub
Dave,
Glad you are happy.
The truth is that you gained space via the backup/restore process. The data
formats of 1.3.1 and 1.4 are the same.
leveldb only removes dead/old key/values during its background compaction.
It could be days, even weeks in some cases, between when you write fresh
data and when compaction actually reclaims the space.
Hi,
Yeah, the cluster is running. Upon noticing the other four nodes being down,
we quickly upgraded and brought them back up. Riak Control is running and
everything looks good. My only concern is why the other nodes went down
while the first node was being upgraded.
Thanks!
Kathleen
Also - if nobody on list or in IRC is able to help you today, I'll try to
spin up an AWS instance and get a build going.
Which version of CentOS are you using?
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Cloudera Certified Developer for Apache Hadoop
On Fri,
Well, to be fair I believe that Yokozuna also requires Riak 1.3 or higher
and/or a version of CentOS that is newer than 5.2.
That being said, I build everything with R15B03, so I may not be the most
reliable source.
---
Jeremiah Peschka - Founder, Brent Ozar Unlimited
MCITP: SQL Server 2008, MVP
Thank you Basho for the new on-disk format changes to eLevelDB!
We have just migrated from 1.3.1 to 1.4.0, and since we wanted to change the
ring size too, we used Dan Kerrigan's Data Migrator to backup/restore our
buckets.
The space savings are very impressive! Each node went from using a
Have you followed the "Installing Erlang" instructions[1]?
They include a reference on how to get Erlang R15B01 up and running on your
machine if you have to build Erlang from source.
[1]:
http://docs.basho.com/riak/1.3.2/tutorials/installation/Installing-Erlang/
---
Jeremiah Peschka - Founder,
Hi!
I want to compile both Riak and Yokozuna from source but it would seem that
Riak won't compile with Erlang R15B02. At least not under CentOS.
http://docs.basho.com/riak/1.2.0/tutorials/installation/Installing-on-RHEL-and-CentOS/
What should I do?
Regards,
Erik