From: "Matthew Von-Maszewski"
To: "Joe Olson"
Cc: "riak-users"
Sent: Monday, June 8, 2015 10:56:24 AM
Subject: Re: LevelDB
Joe,
Long story short, I am slowly rebuilding my debug setup. Taking longer than I
thought. I suspect, but have not yet verified, that if you
Suppose I have come to the conclusion that each of my LevelDB-backed Riak nodes
needs to hold 2TB of data.
Also suppose I have the ability to store data on more expensive SSD drives, and
less expensive magnetic drives.
My question: What should leveldb.tiered be set to in /etc/riak/riak.conf?
I
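For reference, the eleveldb tiered-storage settings in riak.conf look roughly like this. The paths are placeholders, and the level cutoff of 4 is only an example — the right value depends on how much of the 2TB you want on SSD:

```
## leveldb.tiered is the level number at which data moves from the
## fast array to the slow array: with 4, levels 0-3 live on SSD and
## levels 4+ on magnetic storage. Paths below are examples only.
leveldb.tiered = 4
leveldb.tiered.path.fast = /mnt/ssd
leveldb.tiered.path.slow = /mnt/magnetic
```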
From http://docs.basho.com/riak/latest/theory/concepts/:
"In general, large numbers of buckets within a Riak cluster is not a problem.
In practice, there are two potential restrictions on the maximum number of
buckets a Riak cluster can handle"
Is there any further, more specific documentati
Is it possible to automatically index custom X-Riak-Meta-* fields with Solr? Do
I have to create a custom extractor or modify the default search schema as
outlined at http://docs.basho.com/riak/latest/dev/search/custom-extractors/ ?
Here is my python code I am using to test:
# Create search
Using the default YZ index schema, I know I can index:
dataset = {
    "indexed_s": "Blah"
}
I also know I can index:
dataset = {
    "indexed_s": "Blah",
    "notindexed": 52
}
However, when I add:
dataset = {
    "indexed_s": "Blah",
    "notindexed": 52,
    "otherstuff": {"something": 1, "something_else": 2}
}
the indexi
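One way to sidestep a custom extractor entirely is to flatten nested maps client-side before storing, so every field is a top-level key that can match a Solr dynamic-field suffix. This is only a sketch of that workaround, not the custom-extractor route from the linked docs, and the field names are the ones from the example above:

```python
def flatten(d, parent="", sep="."):
    """Recursively flatten nested dicts: {"a": {"b": 1}} -> {"a.b": 1}."""
    out = {}
    for k, v in d.items():
        key = f"{parent}{sep}{k}" if parent else k
        if isinstance(v, dict):
            out.update(flatten(v, key, sep))
        else:
            out[key] = v
    return out

dataset = {
    "indexed_s": "Blah",
    "notindexed": 52,
    "otherstuff": {"something": 1, "something_else": 2},
}
flat = flatten(dataset)
# flat == {"indexed_s": "Blah", "notindexed": 52,
#          "otherstuff.something": 1, "otherstuff.something_else": 2}
```

Renaming the flattened leaves to match stock dynamic-field suffixes (e.g. `otherstuff.something_i`) would then let the default schema pick them up — assuming, as I do here, that the default schema's dynamic fields are what you want to rely on.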
Two quick questions about X-Riak-Meta-* headers:
1. Is it possible to pull the headers for a key without pulling the key itself?
The reason I am interested in this is because the values for our keys are in
the 1.2-1.6 MB range, so the headers are a lot smaller in comparison. I know I
can index
is an elegance to storing both the data and
the metadata at the same time and in the same place via the same operation, so
that is the preferred direction.
From: "Damien Krotkine"
To: "Dmitri Zagidulin"
Cc: "Joe Olson" , "riak-users"
Sent: Tuesday, D
I am trying to use the Solr (Yokozuna) 'group' options with fulltext_search.
I get expected results when I use the HTTP interface, but not with the
python-riak-client fulltext PBC search.
Here's what I'm trying to do:
# This works fine
curl "http://:8098/search/query/solrdefault?wt=json&q=id_s:
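For comparison, the extra Solr parameters behind a working HTTP query can be written as a plain dict and passed through either interface. The index and field names below are placeholders; note that dotted parameter names like `group.field` can only reach `fulltext_search(index, query, **params)` via dict unpacking, and whether the PBC response actually carries grouped results back is a separate question:

```python
from urllib.parse import urlencode

# Grouping parameters as Solr expects them; "id_s" is a placeholder field.
params = {
    "wt": "json",
    "q": "id_s:*",
    "group": "true",
    "group.field": "id_s",
}

# HTTP form -- mirrors the query string a working curl call would send:
qs = urlencode(params)

# python-riak-client form (dotted keys need dict unpacking):
#   client.fulltext_search("solrdefault", "id_s:*",
#                          **{"group": "true", "group.field": "id_s"})
```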
I'm trying to get a Solr join query to work on our Riak KV cluster.
The Solr join query is documented here:
https://wiki.apache.org/solr/Join
Using the example under the "Compared to SQL" heading, I am formatting my http
request to Riak as:
curl "http://:8098/search/query/?wt=json&df=_yz_r
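One common reason a hand-built join query fails over HTTP is escaping: Solr's `{!join ...}` local-params syntax contains braces, `!`, and `=` that must be percent-encoded in the URL. A sketch that lets the stdlib do the encoding — the host, index, and field names are placeholders standing in for the wiki's "Compared to SQL" example:

```python
from urllib.parse import urlencode

# {!join} local-params syntax from the Solr wiki; field names are
# placeholders for the "Compared to SQL" example.
join_query = "{!join from=manu_id_s to=id}ipod"

# urlencode percent-escapes the {! ... } block and the inner '=' signs.
url = ("http://riak-node:8098/search/query/myindex?"
       + urlencode({"wt": "json", "q": join_query}))
```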
I am trying to set up a simple test environment. This environment consists of a
single Riak KV node which has not joined a cluster.
I can populate the single un-clustered node with KV pairs just fine using curl.
However, when I stop the node, and then restart it, all the KV pairs that were
wr
f them.
I will try to build another Vagrant machine with the default riak.conf and see
if I can get this to repeat. It is almost as if the KV pairs are not persisting
to disk at all.
From: "Matthew Von-Maszewski"
To: "Joe Olson"
Cc: "riak-users" , "
Index design question
Suppose I have N customers I am tracking data for. All customer data is
basically the same structure, and I have determined I need a simple secondary
index on this data in order to satisfy a business goal.
Is it better to have N indexes (N ~ 100), or a single index,
According to the documentation at
https://docs.basho.com/riak/ts/1.4.0/using/querying/guidelines/
"A query covering more than a certain number of quanta (5 by default) will
generate the error too_many_subqueries and the query system will refuse to run
it. Assuming a default quantum of 15 minut
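The quoted limit turns into a quick upper bound: a query can cover at most roughly (subquery limit × quantum size) of time, minus alignment effects at the quantum edges. With the defaults from the quoted guidelines:

```python
# Defaults from the RiakTS querying guidelines quoted above.
max_subqueries = 5      # too_many_subqueries threshold
quantum_minutes = 15    # example quantum from the docs

# A query whose time range spans more quanta than this is refused,
# so the widest permissible range is roughly:
max_range_minutes = max_subqueries * quantum_minutes
# 5 quanta x 15 minutes = 75 minutes
```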
Two questions about the RiakTS TTL functionality (and its future direction):
1. Is it possible to replace the standard delete upon TTL expiry with a user
defined delete?
2. Can the current global setting for the TTL timeout be changed? Will that
affect new records going forward?
Bonus questi
Is anyone storing timestamps with microsecond resolution in RiakTS?
I'm interested in hearing if anyone is doing this, and how they are doing it.
My gut reaction is to have a compound timestamp + integer primary key, with the
microsecond part of the timestamp (least significant digits) going in
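As far as I know, RiakTS TIMESTAMP columns carry millisecond-resolution epoch values, so the compound-key idea amounts to splitting a microsecond epoch into a millisecond timestamp plus a 0-999 remainder for a second key column. A sketch (the column pairing with a TIMESTAMP and SINT64 is my assumption):

```python
def split_us(epoch_us):
    """Split a microsecond epoch into (millisecond timestamp, microsecond
    remainder) for a compound (TIMESTAMP, SINT64) primary key."""
    return epoch_us // 1000, epoch_us % 1000

ts_ms, us_part = split_us(1_490_000_000_123_456)
# ts_ms == 1490000000123, us_part == 456
```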
Are there any ramifications of setting search = off in riak.conf on RiakTS if
you are not using Solr, and only accessing data via the primary keys on tables?
___
riak-users mailing list
riak-users@lists.basho.com
http://lists.basho.com/mailman/listinfo/
Is there a Kafka Connect (https://www.confluent.io/product/connectors/)
connector for RiakTS?
Suppose I have the following table in RiakTS:
CREATE TABLE T1 (
    id VARCHAR NOT NULL,
    eventtime TIMESTAMP NOT NULL,
    field2 SINT64,
    data BLOB NOT NULL,
    PRIMARY KEY ((id, QUANTUM(eventtime, 365, 'd')), id, eventtime)
)
Assume the BLOB field is close to the max size for a
its correct place) record, and just update the object component? Or when I do a
duplicate insert, am I paying the price for a delete + insert?
Thanks again!
From: Andrei Zavada
Sent: Wednesday, March 29, 2017 3:14:58 PM
To: Alexander Sicular
Cc: Joe Olson;
ovel data
into a scalable Kafka cluster, and have it land automatically in a scalable
RiakTS cluster is pretty appealing...
From: Andrei Zavada
Sent: Thursday, April 6, 2017 4:04 AM
To: Joe Olson
Cc: riak-users@lists.basho.com
Subject: Re: Kafka Connector For