Aaron,
Ticket is at
http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/issues/detail?id=61
Andy
On 1 Feb 2013, at 18:01, aaron morton wrote:
> I think
> http://code.google.com/a/apache-extras.org/p/cassandra-jdbc/issues/list is
> the place to raise the issue.
>
> Can you update
As you may be aware, I've been trying to track down a problem using JDBC 1.1.2
with Cassandra 1.2.0. I was getting a null pointer exception in the result set.
I've done some digging into the JDBC driver and found the following.
In CassandraResultSet.java the new result set is instantiated in
On Wednesday, January 30, 2013, Edward Capriolo wrote:
> > You really can't mix CQL2 and CQL3. CQL2 does not understand CQL3's sparse
> > tables. Technically it barfs all over the place. CQL2 is only good for
> > compact tables.
> >
> > On Wednesday, January 30, 2013, Andy Cobley
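To illustrate the sparse-vs-compact distinction made above, here is a hypothetical pair of CQL3 definitions; the table and column names are illustrative only, not the thread's actual schema:

```sql
-- A CQL3 "sparse" table: its metadata lives in CQL3, so CQL2 clients
-- cannot make sense of it.
CREATE TABLE users (
  key varchar PRIMARY KEY,
  gender varchar
);

-- A table that CQL2 clients can still read must be declared compact:
CREATE TABLE counts (
  key varchar PRIMARY KEY,
  value varchar
) WITH COMPACT STORAGE;
```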
Well, this is getting stranger. For me, with this simple table definition,
select key, gender from users
is also failing with a null pointer exception.
Andy
On 29 Jan 2013, at 13:50, Andy Cobley wrote:
> When connecting to Cassandra 1.2.0 from CQLSH the table was created with:
>
> What is your table spec ?
> Do you have the full stack trace from the exception ?
>
> Cheers
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
> http://www.thelastpickle.com
>
> On 29/01/2013, at 8:15 AM
I have the following code in my app, using the JDBC driver
(cassandra-jdbc-1.1.2.jar) to issue CQL:
try {
    rs = stmt.executeQuery("SELECT * FROM users");
} catch (Exception et) {
    System.out.println("Can not execute statement " + et);
}
When connecting to a CQL2 server (Cassandra 1.1.5) the co
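One failure mode that is easy to confuse with a driver bug is simply not having the driver jar on the classpath: in that case DriverManager itself rejects the URL before any Cassandra code runs. A minimal sketch of what that looks like; the URL format follows the cassandra-jdbc convention, and the host, port, and keyspace are placeholders:

```java
import java.sql.DriverManager;
import java.sql.SQLException;

public class DriverCheck {
    // Returns the error message seen when no registered JDBC driver
    // accepts the given URL.
    static String describeFailure(String url) {
        try {
            DriverManager.getConnection(url);
            return "connected";
        } catch (SQLException e) {
            // With no driver jar on the classpath, the JDK reports
            // "No suitable driver found for <url>".
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        System.out.println(describeFailure("jdbc:cassandra://localhost:9160/Keyspace1"));
    }
}
```

If you see this message rather than an exception from inside the driver, the fix is to get the driver jar (and its dependencies) onto the classpath, not to debug the query.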
Apologies,
I was missing a few cassandra jar libs in the tomcat library.
Andy
On 28 Jan 2013, at 11:31, Andy Cobley wrote:
> I tried to add a CQL3 JDBC resource to Tomcat 7 in a context.xml file (in an
> Eclipse project) as follows:
>
> name="jdbc/CF1"
I tried to add a CQL3 JDBC resource to Tomcat 7 in a context.xml file (in an
Eclipse project) as follows:
The JDBC driver is cassandra-jdbc-1.1.2. When Tomcat (7.0.35) restarts, it throws a
series of errors. Is this known or expected? Removing the resource from
context.xml allows the server to
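For reference, a Resource entry of the kind being described might look like the following. This is a hypothetical fragment, not the poster's actual file: the driver class and URL follow the cassandra-jdbc project's conventions, and the pool-size attributes are arbitrary:

```xml
<!-- Hypothetical context.xml fragment; attribute values are illustrative. -->
<Context>
  <Resource name="jdbc/CF1"
            auth="Container"
            type="javax.sql.DataSource"
            driverClassName="org.apache.cassandra.cql.jdbc.CassandraDriver"
            url="jdbc:cassandra://localhost:9160/Keyspace1"
            maxActive="10"
            maxIdle="4"/>
</Context>
```

Note that the driver jar and its dependencies must be in Tomcat's lib directory (not only the webapp's) for a container-managed resource like this to resolve.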
I'm starting to move to JDBC for Cassandra (away from Hector). In his Strange
Loop 2012 anti-patterns presentation, Mathew Dennis writes:
"Sometimes people try to restrict clients to a single node. This actually takes
work, and causes problems. Don’t do it."
Now, I note that the JDBC pooled co
There are some interesting results in the benchmarks below:
http://www.slideshare.net/renatko/couchbase-performance-benchmarking
Without starting a flame war etc., I'm interested in whether these results should
be considered "Fair and Balanced" or if the methodology is flawed in some
way? (for instance i
e've had anyone from the Cassandra community join
us and give a talk. If you're interested, drop me a line and let me know
what you're proposing.
I should point out, as this is a free conference, we can't pay speakers and
unless we get a big sponsor, it's doubtful we can manage much
Pi (for educational reason) and have
Have you written about your experiences anywhere ?
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 3/07/2012, at 3:02 AM, Andy Cobley wrote:
> I've tested this and added a note to issue 4400. Hop
est.
>
>--
>Sylvain
>
>On Sun, Jul 1, 2012 at 3:26 PM, Andy Cobley
> wrote:
>> I'm running Cassandra on a Raspberry Pi (for educational reasons) and have
>>been successfully running 1.1.0 for some time. However there is no
>>native build of SnappyCompresso
that case. I've created
>https://issues.apache.org/jira/browse/CASSANDRA-4400 to fix that. If
>you could try the patch on that issue and check it works for you that
>would be awesome since I don't have a Raspberry Pi myself to test.
>
>--
>Sylvain
>
>On Sun, Jul 1, 2012 at 3:26 PM
I'm running Cassandra on a Raspberry Pi (for educational reasons) and have been
successfully running 1.1.0 for some time. However, there is no native build of
SnappyCompressor for the platform (I'm currently working on rectifying that if I
can), so that compression is unavailable. When I try and sta
My (limited) experience of moving from 0.8 to 1.0 is that you do have to use
rebuildsstables. I'm guessing BulkLoading is bypassing the compression?
Andy
On 28 Jun 2012, at 10:53, jmodha wrote:
> Hi,
>
> We are migrating our Cassandra cluster from v1.0.3 to v1.1.1, the data is
> migrated us