Also https://issues.apache.org/jira/browse/CASSANDRA-7437 and
https://issues.apache.org/jira/browse/CASSANDRA-7465 for rc3, although the
CounterCacheKey assertion looks like an independent (though comparatively
benign) bug I will file a ticket for.
Can you try this against rc3 to see if the proble
Hi Diane,
On 17/07/14 06:19, Diane Griffith wrote:
We have been struggling proving out linear read performance with our cassandra
configuration, that it is horizontally scaling. Wondering if anyone has any
suggestions for what minimal configuration and approach to use to demonstrate
this.
We
Are you still seeing the same exceptions about too many open files?
On Thu, Jul 17, 2014 at 6:28 AM, Bhaskar Singhal
wrote:
> Even after changing ulimits and moving to the recommended production
> settings, we are still seeing the same issue.
>
> root@lnx148-76:~# cat /proc/17663/limits
> Lim
Yes, I am.
lsof lists around 9000 open file handles, and there were around 3000
commitlog segments.
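A quick way to check both numbers (a sketch; the process match and the commitlog path are assumptions to adjust for your install):

```shell
# Find the Cassandra PID (falls back to the current shell so the
# commands below still run if no Cassandra process exists).
PID=$(pgrep -f CassandraDaemon || echo $$)

# Count open file descriptors for that process.
ls /proc/"$PID"/fd | wc -l

# Count commitlog segment files (default path; adjust for your install).
COMMITLOG_DIR=/var/lib/cassandra/commitlog
if [ -d "$COMMITLOG_DIR" ]; then
    ls "$COMMITLOG_DIR" | wc -l
fi
```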
On Thursday, 17 July 2014 1:24 PM, Benedict Elliott Smith
wrote:
Are you still seeing the same exceptions about too many open files?
On Thu, Jul 17, 2014 at 6:28 AM, Bhaskar Singhal
Thanks christian,
I'll check on my side.
Have you an idea about FlushWriter 'All time blocked'
Thanks,
2014-07-16 16:23 GMT+02:00 horschi :
> Hi Ahmed,
>
> this exception is caused by you creating rows with a key-length of more
> than 64kb. Your key is 394920 bytes long it seems.
>
> Keys and
Well with 4k maximum open files that still looks to be your culprit :)
I suggest you increase the size of your CL segments; the default is 32Mb,
and this is probably too small for the size of record you are writing. I
suspect that a 'too many open files' exception is crashing a flush which
then ca
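Benedict's suggestion corresponds to a single line in cassandra.yaml (64 is an illustrative value, not a recommendation):

```yaml
# cassandra.yaml -- default is 32; larger segments mean fewer open
# commitlog files for the same volume of writes.
commitlog_segment_size_in_mb: 64
```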
Hi, All,
I need to make a Cassandra keyspace to be read-only.
Does anyone know how to do that?
Thanks
Boying
Think about managing it via authorization and authentication support
On Thu, Jul 17, 2014 at 4:00 PM, Lu, Boying wrote:
> Hi, All,
>
>
>
> I need to make a Cassandra keyspace to be read-only.
>
> Does anyone know how to do that?
>
>
>
> Thanks
>
>
>
> Boying
>
>
>
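One way to do that via the auth support (a sketch; assumes PasswordAuthenticator and CassandraAuthorizer are enabled in cassandra.yaml, and the user/keyspace names are hypothetical):

```sql
-- A user that can read, but not write, one keyspace:
CREATE USER app_reader WITH PASSWORD 'secret' NOSUPERUSER;
GRANT SELECT ON KEYSPACE my_ks TO app_reader;
```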
Hi,
I have an issue in my environment running Cassandra 2.0.5. It is built
with 9 nodes, with 3 nodes in each datacenter. After loading the data, I am
able to do token range lookup or list in cassandra-cli, but when I do get
x[rowkey], the system hangs. Similar query in CQL also has same
Hi Ahmed,
for that you should increase the flush queue size setting in your
cassandra.yaml
kind regards,
Christian
On Thu, Jul 17, 2014 at 10:54 AM, Kais Ahmed wrote:
> Thanks christian,
>
> I'll check on my side.
>
> Have you an idea about FlushWriter 'All time blocked'
>
> Thanks,
>
>
> 20
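The setting Christian is referring to is presumably memtable_flush_queue_size (present in 2.0-era cassandra.yaml, default 4); the value shown is illustrative, not a recommendation:

```yaml
# cassandra.yaml -- how many full memtables may queue for flushing
# before writes block (shows up as FlushWriter "All time blocked").
memtable_flush_queue_size: 8
```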
Create a table with a set as one of the columns using cqlsh, populate with a
few records.
Connect using the cassandra-cli, run list on your table/cf and you'll see how
the sets work.
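The steps above can be sketched as (keyspace/table names are hypothetical):

```sql
CREATE TABLE ks.users (
    id text PRIMARY KEY,
    emails set<text>
);

INSERT INTO ks.users (id, emails)
VALUES ('alice', {'a@example.com', 'b@example.com'});

-- In cassandra-cli, `list users;` then shows how each set element is
-- stored as its own internal column.
```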
Ben Bromhead
Instaclustr | www.instaclustr.com | @instaclustr | +61 415 936 359
On 13/07/2014, at 11:19 AM,
Duncan,
Thanks for that feedback. I'll give a bit more info and then ask some more
questions.
*Our Goal*: Not to produce the fastest read but show horizontal scaling.
*Test procedure*:
* Inserted 54M rows where one third of that represents a unique key, 18M
keys. End result given our schema i
It sounds as if you are actually testing “vertical scalability” (load on a
single node) rather than Cassandra’s sweet spot of “horizontal scalability”
(add more nodes to handle higher load.) Maybe you could clarify your intentions
and specific use case.
Also, it sounds like you are trying to fo
This is a follow on re-post to clarify what we are trying to do, providing
information that was missing or not clear.
Goal: Verify horizontal scaling for random non duplicating key reads using
the simplest configuration (or minimal configuration) possible.
Background:
A couple years ago we
Definitely not trying to show vertical scaling. We have a query use case
that we are trying to show will scale as we add more nodes, should
performance fall below adequate. But to show the scaling we do the test on a 1 node
cluster, then 2 node cluster, then 4 node cluster with a goal that query
throu
Hi Diane,
Sounds a bit like the client might be the limiting factor in your test -
not the server. Especially if you're using one single threaded client, you
might not be loading the backend in any significant way. Have you done any
vertical scaling tests (identical client, bigger server)? if the
How many partitions are you spreading those 18 million rows over? That many
rows in a single partition will not be a sweet spot for Cassandra. It’s not
exceeding any hard limit (2 billion), but some internal operations may cache
the partition rather than the logical row.
And all those rows in a
Hi Tyler,
Thanks for replying. This is good to know that I am not going crazy! :)
I will post a JIRA, along with directions on how to get this to
happen. The tricky thing, though, is that this doesn't always happen,
and I cannot reproduce it on my laptop or in a VM.
BTW you mean the datastax
On Thu, Jul 17, 2014 at 4:59 PM, Clint Kelly wrote:
>
> I will post a JIRA, along with directions on how to get this to
> happen. The tricky thing, though, is that this doesn't always happen,
> and I cannot reproduce it on my laptop or in a VM.
>
Even if you can't reproduce, just include as man
So do partitions equate to tokens/vnodes?
If so we had configured all cluster nodes/vms with num_tokens: 256 instead
of setting init_token and assigning ranges. I am still not getting why in
Cassandra 2.0, I would assign my own ranges via init_token and this was
based on the documentation and eve
What is the proper way to perform a column slice using CQL with 1.2?
I have a CF with a primary key X and 3 composite columns (A, B, C). I'd
like to find records at:
key=X
columns > (A=1, B=3, C=4) AND
columns <= (A=2)
The Query:
SELECT * FROM CF WHERE key='X' AND column1=1 AND column2=3 AND
The last term in this query is redundant. Any time column1 = 1, we
may reasonably expect that it is also <= 2 as that's where 1 is found.
If you remove the last term, you eliminate the error and none of the
selection logic.
SELECT * FROM CF WHERE key='X' AND column1=1 AND column2=3 AND
column3>4 AN
On Thu, Jul 17, 2014 at 3:21 PM, Diane Griffith
wrote:
> So do partitions equate to tokens/vnodes?
>
A partition is what used to be called a "row".
Each individual token in the token ring can contain a partition, which you
request using the token as the key.
A "token range" is the space betwee
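The key-to-token mapping can be inspected directly with the token() function (a sketch; table and column names are hypothetical):

```sql
-- Show which token each partition key hashes to (Murmur3Partitioner
-- by default in Cassandra 1.2+):
SELECT token(key), key FROM cf LIMIT 5;
```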
Michael,
So if I switch to:
SELECT * FROM CF WHERE key='X' AND column1=1 AND column2=3 AND column3>4
That doesn't include rows where column1=2, which breaks the original slice
query.
Maybe a better way to put it, I would like:
SELECT * FROM CF WHERE key='X' AND column1>=1 AND column2>=3 AND co
Hi everyone,
I am trying to design a schema that will keep the N-most-recent
versions of a value. Currently my table looks like the following:
CREATE TABLE foo (
rowkey text,
family text,
qualifier text,
version long,
value blob,
PRIMARY KEY (rowkey, family, qualifier, ve
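A common shape for keep-the-N-most-recent (a sketch of one way to finish the table above; note CQL has no `long` type, the integer type is `bigint`, and a DESC clustering order makes newest-first reads natural):

```sql
CREATE TABLE foo (
    rowkey text,
    family text,
    qualifier text,
    version bigint,
    value blob,
    PRIMARY KEY (rowkey, family, qualifier, version)
) WITH CLUSTERING ORDER BY (family ASC, qualifier ASC, version DESC);

-- Read the 3 most recent versions of one value:
SELECT version, value FROM foo
WHERE rowkey = 'r' AND family = 'f' AND qualifier = 'q'
LIMIT 3;
```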
So I stripped out the number of clients experiment path information. It is
unclear if I can only show horizontal scaling by also spawning many client
requests all working at once. So that is why I stripped that information
out to distill what our original attempt was at how to show horizontal
sca
For this type of query, you really want the tuple notation introduced in
2.0.6 (https://issues.apache.org/jira/browse/CASSANDRA-4851):
SELECT * FROM CF WHERE key='X' AND (column1, column2, column3) > (1, 3, 4)
AND (column1) < (2)
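Put together as a self-contained sketch (the schema is hypothetical, mirroring the column names in the thread; requires Cassandra 2.0.6 or later):

```sql
CREATE TABLE cf (
    key text,
    column1 int,
    column2 int,
    column3 int,
    value text,
    PRIMARY KEY (key, column1, column2, column3)
);

-- The slice requested in the thread, in a single query:
SELECT * FROM cf
WHERE key = 'X'
  AND (column1, column2, column3) > (1, 3, 4)
  AND (column1) < (2);
```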
On Thu, Jul 17, 2014 at 6:01 PM, Mike Heffner wrote:
> Michael,
On Thu, Jul 17, 2014 at 5:16 PM, Diane Griffith
wrote:
> I did tests comparing 1, 2, 10, 20, 50, 100 clients spawned all querying.
> Performance on 2 nodes starts to degrade from 10 clients on. I saw
> similar behavior on 4 nodes but haven't done the official runs on that yet.
>
>
Ok, if you'v
I would say that would work, but since you're already familiar with the
storage model from HBase and are trying to emulate it, you may want to look
into the Thrift interfaces. They're a little more similar to the HBase
interface (not as friendly to use, and you can't use the very useful new
client libraries from DataStax) and a
Hi,
I've been testing an in-place upgrade of a 1.2.11 cluster to 2.0.9. The
1.2.11 nodes all have a schema defined through CQL with existing data
before I perform the rolling upgrade. While the upgrade is in progress,
services are continuing to read and write data to the cluster (strictly
using
The information about how the servers are connected is important, because we
have exactly these types of situations in some of our applications (not using
Cassandra) when firewall administrators/configurators get “creative” about
“enhancing” security. Other things can cause this type of situatio
Sorry I may have confused the discussion by mentioning tokens – I wasn’t
intending to refer to vnodes or the num_tokens property, but merely referring
to the token range of a node and that the partition key hashes to a token value.
The main question is what you use for your primary key and wheth
The problem with starting without vnodes is moving to them is a bit
hairy. In particular, nodetool shuffle has been reported to take an
extremely long time (days, weeks). I would start with vnodes if you
have any intent on using them.
On Thu, Jul 17, 2014 at 6:03 PM, Robert Coli wrote:
> On Thu
Hello,
I still experience a similar issue after a 'DROP KEYSPACE' command with C*
2.1-rc3. Connection to the node may fail after a 'DROP'.
But I did not see this issue with 2.1-rc1 (so it seems to be a
regression introduced with 2.1-rc2).
Fabrice LARCHER
2014-07-17 9:19 GMT+02:00 Benedict El
In C* 2.1, the new row cache implementation keeps the most recent N
partitions in memory; it might be of interest to you:
http://www.datastax.com/dev/blog/row-caching-in-cassandra-2-1
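A sketch of how that maps to configuration (Cassandra 2.1 syntax; keyspace/table names are hypothetical) -- enable a nonzero row_cache_size_in_mb in cassandra.yaml, then cap how many rows per partition get cached per table:

```sql
-- Cassandra 2.1: cache only the 10 head rows of each partition
-- (requires row_cache_size_in_mb > 0 in cassandra.yaml).
ALTER TABLE ks.foo
WITH caching = { 'keys' : 'ALL', 'rows_per_partition' : '10' };
```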
On Fri, Jul 18, 2014 at 3:39 AM, Chris Lohfink
wrote:
> I would say that would work, but since already familia