Hello,
We have a performance problem when trying to ramp up Cassandra (as a
Mongo replacement) on a very specific use case. We store a blob indexed
by a key and expire it after a few days:
CREATE TABLE views.views (
    viewkey text PRIMARY KEY,
    value blob
) WITH bloom_filter_fp_chance =
Hello,
We have a 3-node cluster and one of the nodes went down, it looks like
due to a hardware memory failure. We followed the steps below after the
node was down for more than the default value of *max_hint_window_in_ms*.
I tried to restart Cassandra by following the steps @
1.
http://
Is that a seed node?
On Mon, Nov 16, 2015, 05:21 Anishek Agarwal wrote:
> Hello,
>
> We have a 3-node cluster and one of the nodes went down, it looks like
> due to a hardware memory failure. We followed the steps below after the
> node was down for more than the default value of *max_hint_wind
Hi Dongfeng,
I'm interested in converting an already-generated timeuuid into a timestamp,
similar to Cassandra's dateOf function, but in Java code. Your
suggestion is for generating a timeuuid.
2015-11-15 19:42 GMT-03:00 Dongfeng Lu :
> You can use long java.util.UUID.timestamp().
>
>
>
> On Sund
http://www.tutorialspoint.com/java/util/uuid_timestamp.htm
On Mon, Nov 16, 2015 at 7:38 AM, Marlon Patrick
wrote:
> Hi Dongfeng,
>
> I'm interested in converting an already-generated timeuuid into a timestamp,
> similar to Cassandra's dateOf function, but in Java code. Your
> suggestion is for
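The conversion discussed above can be sketched in Java. This is a minimal
illustration, not code from the thread: the class and helper names are
invented, and it assumes a version-1 (time-based) UUID, since
UUID.timestamp() throws for any other version.

```java
import java.util.UUID;

public class TimeUuidToMillis {
    // 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch (1970-01-01).
    private static final long UUID_EPOCH_OFFSET = 0x01B21DD213814000L;

    /** Converts a version-1 (time-based) UUID to epoch milliseconds, like CQL's dateOf(). */
    public static long toEpochMillis(UUID timeUuid) {
        // UUID.timestamp() returns 100-ns units since 1582-10-15; shift to the Unix epoch.
        return (timeUuid.timestamp() - UUID_EPOCH_OFFSET) / 10_000;
    }

    /** Builds a version-1 UUID from a raw 60-bit timestamp, for demonstration only. */
    static UUID fromRawTimestamp(long ts) {
        long msb = (ts & 0xFFFF_FFFFL) << 32        // time_low
                 | ((ts >>> 32) & 0xFFFFL) << 16    // time_mid
                 | 0x1000L                          // version 1
                 | ((ts >>> 48) & 0x0FFFL);         // time_hi
        return new UUID(msb, 0x8000000000000000L);  // IETF variant, clock seq/node zeroed
    }

    public static void main(String[] args) {
        long nowMillis = System.currentTimeMillis();
        UUID u = fromRawTimestamp(nowMillis * 10_000 + UUID_EPOCH_OFFSET);
        System.out.println(toEpochMillis(u) == nowMillis); // prints true
    }
}
```

In practice most drivers (and com.datastax.driver.core.utils.UUIDs in the
DataStax Java driver) ship a helper for this, so hand-rolling the offset
arithmetic is rarely necessary.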
Oh, thanks. I had misunderstood how the function works. I will test it soon.
2015-11-16 9:43 GMT-03:00 Laing, Michael :
> http://www.tutorialspoint.com/java/util/uuid_timestamp.htm
>
> On Mon, Nov 16, 2015 at 7:38 AM, Marlon Patrick wrote:
>
>> Hi Dongfeng,
>>
>> I'm interested in convert a timeu
Nope, it's not.
On Mon, Nov 16, 2015 at 5:48 PM, sai krishnam raju potturi <
pskraj...@gmail.com> wrote:
> Is that a seed node?
>
> On Mon, Nov 16, 2015, 05:21 Anishek Agarwal wrote:
>
>> Hello,
>>
>> We are having a 3 node cluster and one of the node went down due to a
>> hardware memory failure l
no.. we can't allow you to leave.
On Mon, Nov 16, 2015 at 4:25 AM, Tanuj Kumar wrote:
>
Did you set the JVM_OPTS to replace address? That is usually the error I get
when I forget to set replace_address in cassandra-env.sh.
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=address_of_dead_node"
From: Anishek Agarwal [mailto:anis...@gmail.com]
Sent: Monday, November 16, 2015 9:25 AM
T
Hi Anishek,
In my opinion, you already have data, so bootstrapping is not needed here. You
can set auto_bootstrap to false in cassandra.yaml, and once Cassandra is
restarted, you should run repair to fix the inconsistent data.
Thanks,
Anuj
On Monday, 16 November 2015 10:34 PM, Josh Smi
On Sat, Nov 14, 2015 at 9:58 AM, Peddi, Praveen wrote:
> I checked tpstats and there are no dropped mutations (though I checked it
> after restarting the affected nodes). If the problem occurs again, I will
> check tpstats again. Is there any stat that shows failed hints? The only
> abnormality I
Hi,
We are using Cassandra 2.0.9 and we currently have a "using timestamp" clause in
all our update queries. We did this to fix occasional issues with NTP drift on
AWS. We recently introduced conditional updates in a couple of our APIs and we
realized that I can't have "using timestamp" and "if column
Hey list,
Is there a URL available for downloading Cassandra that abstracts away the
mirror selection (e.g. just 302s to a mirror URL)? We've got a few
self-configuring Cassandras (for example, the Docker container our devs
use), and using the same mirror for the containers or for any bulk
provisi
Perhaps you should fix your clock drift issues instead of trying to use a
workaround?
> On Nov 16, 2015, at 11:39 AM, Peddi, Praveen wrote:
>
> Hi,
> We are using Cassandra 2.0.9 and we currently have “using timestamp” clause
> in all our update queries. We did this to fix occasional issues wi
We have some rapid-fire updates (multiple updates within a few millis). I wish
we had control over NTP drift but AWS doesn't guarantee "0 drift". In North
America, it's minimal (<5 to 10 ms) but Europe has longer drifts. We override
the timestamp only if we see the current timestamp on the row is in
LWT uses the coordinator machine's timestamp to generate a timeuuid, which is
used as the timestamp of the Paxos ballot. You cannot supply a Paxos ballot
that's behind the current time because it's invalid.
You’re issuing multiple updates within a few ms in a distributed system, it
sounds li
Jon,
Thanks for your response. Our custom-supplied timestamp is only provided if the
current timestamp on the row is in the future. We just add a few millis to the
current timestamp value and override the timestamp. That will ensure the updates
are read in the correct order. We don't completely manage the tim
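The override rule described above might look like the following sketch in
Java. All names are invented for illustration, the 2 ms bump is an assumed
value for "a few millis", and both arguments are in microseconds, the unit
Cassandra uses for write timestamps:

```java
/** Sketch of the override rule: use the current time, unless the row's stored
 *  write timestamp is already in the future, in which case bump a few millis
 *  past it so the new update wins on read. Names and bump size are assumptions. */
public class TimestampOverride {
    static final long BUMP_MICROS = 2_000; // "a few millis", expressed in microseconds

    /** Both arguments in microseconds (Cassandra write-timestamp resolution). */
    static long nextWriteTimestampMicros(long rowWriteTimeMicros, long nowMicros) {
        return rowWriteTimeMicros >= nowMicros
                ? rowWriteTimeMicros + BUMP_MICROS  // row is "in the future": bump past it
                : nowMicros;                        // normal case: just use now
    }

    public static void main(String[] args) {
        long nowMicros = System.currentTimeMillis() * 1_000;
        // A row written 5 ms "in the future" relative to this node's clock:
        long rowMicros = nowMicros + 5_000;
        System.out.println(nextWriteTimestampMicros(rowMicros, nowMicros) - rowMicros); // prints 2000
    }
}
```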
Obviously you will get a better answer from someone directly with
DataStax... but IMO, I would look to either
* use a package manager like apt or yum; they are usually up-to-date if you
use the PPA route.
* keep the package locally in your own infrastructure. I have had mirror
issues or content unav
Hi guys,
Doesn't DevCenter support C* 3.0?
When I tried to use DevCenter with C* 3.0, I got this error.
The specified host(s) could not be reached.
All host(s) tried for query failed (tried: /{ipaddress}:9042
(com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured
table schema_k
On 11/16/2015 04:24 PM, John Wong wrote:
Obviously you will get a better answer from someone directly with
datastax... but IMO, I would look to either
The ASF handles the Apache Cassandra download infrastructure, not
DataStax. (I work for DataStax, fyi)
I believe the OP is asking about links
On 11/16/2015 04:56 PM, Bosung Seo wrote:
Hi guys,
Doesn't Devcenter support C* 3.0?
When I tried to use Devcenter with C* 3.0, I got this error.
The specified host(s) could not be reached.
All host(s) tried for query failed (tried: /{ipaddress}:9042
(com.datastax.driver.core.exceptions.Invali
So you are reading the row before writing as you say you have the timestamp.
If you really need CAS for the write *and* the timestamp you read is in the
future (by local reckoning), why not delay that write until the future
arrives and forget about explicitly setting the timestamp?
Backtracking o
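The delay-instead-of-override idea suggested above can be sketched in Java.
This is illustrative only: the class and method names are invented, and it
assumes the row's write time was read back in microseconds:

```java
/** Sketch of the suggestion: instead of overriding the write timestamp, wait
 *  until the row's (future) write time has passed locally, then issue the
 *  plain conditional update. Names are illustrative, not from the posts. */
public class DelayedCasWrite {

    /** How long to sleep (ms) so that local time is strictly past the row's write time. */
    static long delayMillis(long rowWriteTimeMicros, long nowMillis) {
        // Cassandra write timestamps are in microseconds; +1 ms to land strictly past it.
        return Math.max(0, rowWriteTimeMicros / 1_000 - nowMillis + 1);
    }

    static void casWriteAfterDelay(long rowWriteTimeMicros) throws InterruptedException {
        long delay = delayMillis(rowWriteTimeMicros, System.currentTimeMillis());
        if (delay > 0) {
            Thread.sleep(delay); // trades request latency for a valid Paxos ballot
        }
        // ... then issue the conditional update with no USING TIMESTAMP clause.
    }
}
```

The obvious cost is added latency on each affected request, which is the
trade-off discussed later in the thread.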
Hi,
Has anyone used Protobuf with the spark-cassandra connector? I am using
protobuf-3.0-beta with spark-1.4 and cassandra-connector-2.10. I keep
getting "Unable to find proto buffer class" in my code. I checked the version
of the protobuf jar and it is loaded with 3.0-beta in the classpath. Protobuf is
comin
Hi Anuj,
Did you mean streaming_socket_timeout_in_ms? If not, then you definitely
want that set. Even the best network connections will break occasionally,
and in Cassandra < 2.1.10 (I believe) this would leave those connections
hanging indefinitely on one end.
How far away are your two DC's from
Hi Bryan,
Thanks for the reply !!
I didn't mean streaming_socket_timeout_in_ms. I meant that when you run netstat
(the Linux command) on node A in DC1, you will notice that there is a connection
in the ESTABLISHED state with node B in DC2. But when you run netstat on node B,
you won't find any connection with
Adding sleep was our last resort but I was hoping to find a way that doesn't
affect our API latencies. Thanks for the suggestion though.
Praveen
On Nov 16, 2015, at 6:29 PM, Laing, Michael
mailto:michael.la...@nytimes.com>> wrote:
So you are reading the row before writing as you say you have t
Hey Josh,
I did set the replace address, which was the same as the address of the machine
that went down, so it was in place.
anishek
On Mon, Nov 16, 2015 at 10:33 PM, Josh Smith
wrote:
> Did you set the JVM_OPTS to replace address? That is usually the error I
> get when I forget to set the replace_
hey Anuj,
OK, I will try that next time. So you are saying: since I am replacing the
machine in place (trying to get the same machine back into the cluster), which
already has some data, I don't clean the commitlog/data directories, I set
auto_bootstrap = false and then restart the node, followed by repair