Hi Alain,
Thanks for your response :)
> A replication factor of 3 on a 3-node cluster does not balance the load:
> since you ask for 3 copies of the data (RF = 3) on a 3-node cluster,
> each node will have a copy of the data and you are overloading all of the nodes.
> Maybe you should try with RF = 2.
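For reference, the replication factor can be lowered with a plain ALTER KEYSPACE statement. A minimal sketch through the Java driver, assuming the keyspace is called "my_keyspace" and uses SimpleStrategy (the contact point is a placeholder; adapt the strategy/DC options to your topology):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class LowerReplicationFactor {
    public static void main(String[] args) {
        // Contact point and keyspace name are placeholders for illustration.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // Drop the replication factor from 3 to 2 for the keyspace.
            session.execute("ALTER KEYSPACE my_keyspace WITH replication = "
                    + "{'class': 'SimpleStrategy', 'replication_factor': 2}");
            // Run a repair afterwards so data ownership settles consistently.
        }
    }
}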
Hi,
I am creating a trigger in Cassandra:
---
public class GenericAuditTrigger implements ITrigger
{
    private static SimpleDateFormat dateFormatter = new SimpleDateFormat("/MM
Ok, I found the additional handler file and added it, but at runtime I am still
getting the error about the timer class not being found; I looked inside the
jar and did not see that class. I was using netty-3.0.9.0.Final with Cassandra
driver cassandra-driver-core-3.0.2.jar and netty-handler-4.0.3.
On 25/05/2016 17:56, bastien dine wrote:
Hi,
I'm running a 3-node Cassandra 2.1.x cluster. Each node has 8 vCPUs and 30
GB RAM.
Replication factor = 3 for my keyspace.
...
Is there a problem with the Java Driver? The load balancing is not
Hi Bastien,
A replication factor of 3 for a 3 node
Hi All,
I downloaded the latest Cassandra driver, but when I use it I get an error about
class io.netty.util.Timer (netty-3.9.0.Final) not being found at runtime.
If I use the latest netty-all-4.0.46.Final.jar, at runtime I get an
exception about not having a java.security.cert.x509Certificat
Literally just encountered this exact same thing. I couldn't find anything
in the official docs related to this, but there is at least this blog post that
explains it:
http://www.jsravn.com/2015/05/13/cassandra-tombstones-collections.html
and this entry in ScyllaDB's documentation:
http://www.scylladb.co
If increasing or disabling streaming_socket_timeout_in_ms on the source
node does not fix it, you may want to have a look at the TCP keepalive
settings on the source and destination nodes, as intermediate
routers/firewalls may be killing the connections due to inactivity. See
this for more informa
Thanks a lot for your help. I will try that tomorrow. The first time I
tried to rebuild, streaming_socket_timeout_in_ms was 0 and it still failed.
Below is the error that immediately preceded it on the source node:
ERROR [STREAM-IN-/172.31.22.104] 2016-05-24 22:32:20,437
StreamSession.java:505 - [Stream #2
> The workaround is to set a larger streaming_socket_timeout_in_ms **on the
> source node**; the new default will be 86400000 ms (1 day).
2016-05-25 17:23 GMT-03:00 Paulo Motta:
> Was there any other ERROR preceding this on this node (in particular the
> last few lines of [STREAM-IN-/172.31.22.104]
Was there any other ERROR preceding this on this node (in particular the
last few lines of [STREAM-IN-/172.31.22.104])? If it's a
SocketTimeoutException, then what is happening is that the default
streaming socket timeout of 1 hour is not sufficient to stream a single
file and the stream session is
Hello again,
Here is the error message from the source
INFO [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,275
StreamResultFuture.java:180 - [Stream
#2c290460-20d4-11e6-930f-1b05ac77baf9] Session with /172.31.22.104 is
complete
WARN [STREAM-IN-/172.31.22.104] 2016-05-25 00:44:57,276
StreamResul
After thinking about it more, I have no idea how that worked at all. I
must not have cleared out the working directory or something.
Regardless, I did something weird with my initial joining of the cluster
and then wasn't using repair -full. Thank y'all very much for the info.
On Wed, May 25
So I figured out the main cause of the problem. The seed node was itself.
That's what got it in a weird state. The second part was that I didn't
know the default repair is incremental, as I was accidentally looking at the
wrong version of the documentation. After running a repair -full, the 3 other
nodes
This is the log of the destination/rebuilding node; you need to check what
the error message is on the stream source node (192.168.1.140).
2016-05-25 15:22 GMT-03:00 George Sigletos:
> Hello,
>
> Here is additional stack trace from system.log:
>
> ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:4
Hello,
Here is additional stack trace from system.log:
ERROR [STREAM-IN-/192.168.1.140] 2016-05-24 22:44:57,704
StreamSession.java:620 - [Stream #2c290460-20d4-11e6-930f-1b05ac77baf9]
Remote peer 192.168.1.140 failed stream session.
ERROR [STREAM-OUT-/192.168.1.140] 2016-05-24 22:44:57,705
Stream
You could also follow this related issue:
https://issues.apache.org/jira/browse/CASSANDRA-8844
On Wed, May 25, 2016 at 12:04 PM, Aaditya Vadnere wrote:
> Thanks Eric and Mark, we were thinking along similar lines. But we already
> need Cassandra for regular database purposes, so instead of having
Thanks Eric and Mark, we were thinking along similar lines. But we already
need Cassandra for regular database purposes, so instead of having both
Kafka and Cassandra, the possibility of using Cassandra alone was explored.
Another use case where update notifications can be useful is when we want to
s
Hi,
I'm running a 3-node Cassandra 2.1.x cluster. Each node has 8 vCPUs and 30
GB RAM.
Replication factor = 3 for my keyspace.
Recently, I've been using the Java Driver (within Storm) to read/write data and
I've encountered a problem:
All of my cluster nodes are successfully discovered by the driver.
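For reference, a minimal sketch of how the driver is typically set up with a token-aware, DC-aware load balancing policy (the contact point, local DC name, and keyspace below are assumptions, not taken from this thread):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.policies.DCAwareRoundRobinPolicy;
import com.datastax.driver.core.policies.TokenAwarePolicy;

public class ClusterSetup {
    public static void main(String[] args) {
        // TokenAwarePolicy sends each request to a replica owning the token,
        // falling back to DC-aware round-robin across the local data center.
        Cluster cluster = Cluster.builder()
                .addContactPoint("10.0.0.1")              // placeholder node address
                .withLoadBalancingPolicy(new TokenAwarePolicy(
                        DCAwareRoundRobinPolicy.builder()
                                .withLocalDc("DC1")       // placeholder DC name
                                .build()))
                .build();
        Session session = cluster.connect("my_keyspace"); // placeholder keyspace
        System.out.println("Connected to " + cluster.getMetadata().getClusterName());
        session.close();
        cluster.close();
    }
}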
If you replace an entire collection, whether it's a map, set, or list, a
range tombstone will be inserted followed by the new collection. If you
only update a single element, no tombstones are generated.
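To make the difference concrete, a small sketch of the two write patterns through the Java driver (the keyspace, table, and column names are made up for illustration):

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class MapUpdateExamples {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace")) {
            // Replacing the whole map: the old collection is deleted with a
            // range tombstone before the new one is written.
            session.execute("UPDATE samples SET attributes = {'k1': 'v1'} WHERE id = 1");

            // Updating a single key in place: a plain insert, no tombstone.
            session.execute("UPDATE samples SET attributes['k1'] = 'v2' WHERE id = 1");
        }
    }
}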
On Wed, May 25, 2016 at 9:48 AM, Matthias Niehoff
<matthias.nieh...@codecentric.de> wrote:
The stack trace from the rebuild command does not show the root cause of the
rebuild stream error. Can you check the system.log for ERROR entries during
streaming and paste them here?
Hi,
we have a table with a map field. We do not delete anything in this table,
but we do update the values, including the map field (most of the time a
new value for an existing key, rarely adding new keys). We now see a
huge number of tombstones for this table.
We used sstable2json to tak
Found it!
i.e. how to convert or represent the C* uuid using Spark CQL:
uuid.UUID(int=idval)
So, putting it into context:
...
import uuid
...
sparkSQL = "SELECT distinct id, dept, workflow FROM samd WHERE workflow='testWK'"
new_df = sqlContext.sql(sparkSQL)
results = new_df.collect()
for ro
Hi Mike,
Yes I am using NetworkTopologyStrategy. I checked
cassandra-rackdc.properties on the new node:
dc=DCamazon-1
rack=RACamazon-1
I also checked the JIRA link you sent me. My network topology seems
correct: I have 4 nodes in DC1 and 1 node in DCamazon-1 and I can verify
that when running "no