not valid (see below). I tried to trace
the code in the connector and it appears that the row size (3 below) is
different from the column count (which turns out to be 1). I am trying to follow
the example from
https://github.com/datastax/spark-cassandra-connector/blob/master/doc/2_loading.md
with customer having two more fields than just the id, as mentioned in the
example. In the case of the example I think it will work because it has only 1
column
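The mismatch described above can be illustrated with a toy arity check. This is an assumed simplification, not the connector's actual code; the `Customer` case class and `checkArity` function below are hypothetical names invented for the sketch.

```scala
// Hypothetical 3-field target type (customer = id plus two more fields).
case class Customer(id: Long, name: String, email: String)

// Toy version of the check behind the error: mapping a row into a target
// type fails when the selected column count differs from the target's
// field count.
def checkArity(targetFields: Int, selectedColumns: Int): Either[String, Unit] =
  if (targetFields == selectedColumns) Right(())
  else Left(s"row size ($targetFields) is different from the column count ($selectedColumns)")

// Selecting a single column into a 3-field target reproduces the mismatch:
println(checkArity(targetFields = 3, selectedColumns = 1))
```

In the linked example the mapping works because the table and the target type both have a single column, so the two counts agree.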
Hi Revanth,
I took a quick look and don't think you can override existing mappings. You
should see a warning like this -
18/04/15 19:50:27 WARN CodecRegistry: Ignoring codec CustomTSTypeCodec [date
<-> java.lang.Long] because it collides with previously registered codec
CustomTSTypeCodec [date <
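The warning reflects a first-registration-wins rule: once a codec is registered for a given (CQL type, Java type) pair, later colliding registrations are ignored. The following is a toy model of that behaviour, not the driver's actual code; `ToyCodecRegistry` and `CodecKey` are names invented for this sketch.

```scala
// Key identifying a (CQL type, Java type) mapping, e.g. date <-> java.lang.Long.
final case class CodecKey(cqlType: String, javaType: String)

final class ToyCodecRegistry {
  private var codecs = Map.empty[CodecKey, String]

  /** Returns true if registered, false if ignored as a collision. */
  def register(key: CodecKey, codecName: String): Boolean =
    codecs.get(key) match {
      case Some(existing) =>
        println(s"WARN Ignoring codec $codecName because it collides with previously registered codec $existing")
        false
      case None =>
        codecs += key -> codecName
        true
    }
}

val registry = new ToyCodecRegistry
val key = CodecKey("date", "java.lang.Long")
registry.register(key, "CustomTSTypeCodec") // first registration wins
registry.register(key, "CustomTSTypeCodec") // collides: ignored with a WARN
```

The second call mirrors the warning quoted above: the existing mapping stays in place and the new codec is dropped.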
Hi Revanth,
How do you register the custom codec? Do you get any errors? Have you tried
using a pre-existing codec? It would be helpful if you could give more
information.
Dinesh
On Saturday, April 14, 2018, 7:29:30 PM PDT, Revanth Reddy
wrote:
Hi Team,
I want to write a custom Cassandra codec and use that codec in my
Spark application while reading data from a Cassandra table.
Basically, custom codecs are used to convert one column type to another
while reading from Cassandra. For example, I have a timestamp column in
c
>Your best bet for a response will be on the spark-cassandra-connector mailing
>list:
>https://groups.google.com/a/lists.datastax.com/forum/#!forum/spark-connector-user
Didn’t know about that list. Thanks Carl.
Hi Joaquin,
Your best bet for a response will be on the spark-cassandra-connector
mailing list:
https://groups.google.com/a/lists.datastax.com/forum/#!forum/spark-connector-user
Hope you find your answer.
-Carl
On Wed, Jun 22, 2016 at 4:58 AM, Joaquin Alzola
wrote:
Hi List
I am trying to install the Spark-Cassandra connector through maven or sbt but
neither works.
Both of them try to connect to the Internet (to which I have no connection) to
download certain files.
Is there a way to install the files manually?
I downloaded from the maven repository
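One workaround (a sketch only, assuming the jars and poms can be fetched on a machine that does have access and copied over in the standard Maven layout) is to point sbt at a local directory acting as a repository. The path `/opt/local-m2` below is hypothetical; the artifact coordinates match the 1.5.0-M1 jars linked further down.

```scala
// build.sbt sketch -- /opt/local-m2 is a hypothetical directory holding the
// jars/poms copied from repo1.maven.org in the standard Maven layout.
resolvers += "local-m2" at "file:///opt/local-m2"

libraryDependencies +=
  "com.datastax.spark" % "spark-cassandra-connector_2.11" % "1.5.0-M1"
```

With the resolver in place, sbt should resolve the dependency from the local directory without reaching the Internet.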
Hi,
Has anyone used Protobuf with the spark-cassandra connector? I am using
protobuf-3.0-beta with spark-1.4 and cassandra-connector-2.10. I keep
getting "Unable to find proto buffer class" in my code. I checked the version
of the protobuf jar and it is loaded with 3.0-beta in the classpath. Pr
https://repo1.maven.org/maven2/com/datastax/spark/spark-cassandra-connector-java_2.11/1.5.0-M1/spark-cassandra-connector-java_2.11-1.5.0-M1.jar
https://repo1.maven.org/maven2/com/datastax/spark/spark-cassandra-connector_2.11/1.5.0-M1/spark-cassandra-connector_2.11-1.5.0-M1.jar
https://repo1
Hi,
At https://github.com/datastax/spark-cassandra-connector I see that you
are extending the API that Spark provides for interacting with RDDs to
leverage some native Cassandra features. We are using Apache Cassandra
together with PySpark to do some analytics and since we have community
version
Hi,
I am trying to insert into the following column family using Spark Cassandra
connector.
CREATE TABLE myks.mycf (
id bigint,
msg text,
type text,
ts timestamp,
primary key (id, msg)
)
Is there a way to have the ts field automatically generated:
// dataRdd is of Type RDD[(Int,String
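Cassandra has no default-value mechanism for regular columns, so one option (an assumption, not an answer from this thread) is to stamp each row in Spark before saving. The pure `stampRows` helper below is a hypothetical name for illustration; the commented `saveToCassandra` call shows how it would plug into the connector.

```scala
import java.util.Date

// Append the current timestamp as a fourth element of each (id, msg, type) row.
def stampRows(rows: Seq[(Int, String, String)], now: Date): Seq[(Int, String, String, Date)] =
  rows.map { case (id, msg, tpe) => (id, msg, tpe, now) }

// With the connector it would look roughly like this (not runnable here):
// import com.datastax.spark.connector._
// dataRdd.map { case (id, msg, tpe) => (id, msg, tpe, new Date()) }
//        .saveToCassandra("myks", "mycf", SomeColumns("id", "msg", "type", "ts"))
```

`type` is a Scala keyword, hence the `tpe` binding in the pattern match.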