Forwarding to the group in case this helps out anyone else.
>> If so, should I set gc_grace_seconds to a lower non-zero value like 1-2 days?
Yes.
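A sketch of how that change could be applied (the keyspace and table names here are hypothetical, not from the thread):

```sql
-- Hypothetical table; lower gc_grace_seconds to 2 days (172800 seconds)
-- so tombstones become eligible for purge sooner. Repair must complete
-- well within this window, or deleted data can reappear.
ALTER TABLE my_keyspace.my_table WITH gc_grace_seconds = 172800;
```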
A
-
Aaron Morton
New Zealand
@aaronmorton
Co-Founder & Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
Thanks Vivek. I'll look over those links tonight.
On Wed, Oct 23, 2013 at 4:20 PM, Vivek Mishra wrote:
> Hi,
> CREATE TABLE sensor_data (
> sensor_id text,
> date text,
> dat
Hi.
I have a table with about 300k rows in it, and am doing a query that returns
about 800 results.
select * from fc.co WHERE thread_key = 'fastcompany:3000619';
The read latencies seem really high (upwards of 500 ms). Is this expected? Is
this bad schema, or…? What's the best way to trace t
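One way to investigate (assuming cqlsh against a recent cluster; the table and key come from the question above):

```sql
-- Enable request tracing in cqlsh, then re-run the slow query.
-- The trace output breaks the latency down per replica and per stage,
-- which shows whether the time goes to disk reads, tombstones, etc.
TRACING ON;
SELECT * FROM fc.co WHERE thread_key = 'fastcompany:3000619';
TRACING OFF;
```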
Hi,
CREATE TABLE sensor_data (
sensor_id text,
date text,
data_time_stamp timestamp,
reading int,
PRIMA
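The schema is cut off above; a plausible completion for a time-series layout (the PRIMARY KEY shown here is an assumption, not necessarily what the original poster wrote):

```sql
CREATE TABLE sensor_data (
    sensor_id text,
    date text,
    data_time_stamp timestamp,
    reading int,
    -- Partition by sensor and day, cluster by timestamp, so one day's
    -- readings for a sensor are stored contiguously as a wide row.
    PRIMARY KEY ((sensor_id, date), data_time_stamp)
);
```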
> Can I use the Cassandra data storage engine only?
>
You should be able to; it's pretty well architected.
I did a talk at Cassandra EU last week about the internals which will be
helpful, look on the Planet Cassandra site it will be posted there soon. (I did
the same talk at Cassandra SF this y
> Also, there are plenty of
> compactions running - it just seems like the number of pending tasks
> is never affected.
Is there ever a time when the pending count is non-zero but nodetool
compactionstats does not show any running tasks?
If compaction cannot keep up you may be generating data f
On a plane and cannot check jira but…
> ERROR [FlushWriter:216] 2013-10-07 07:11:46,538 CassandraDaemon.java (line
> 186) Exception in thread Thread[FlushWriter:216,5,main]
> java.lang.AssertionError
> at
> org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:198)
H
As far as I know this has not been done before. I would be interested
in hearing how it turned out.
On 10/23/2013 09:47 AM, Yasin Celik wrote:
I am developing an application for data storage. All the replication,
routing and data retrieving types of business are handled in my
application.
On 10/21/2013 07:03 PM, Hobin Yoon wrote:
Another question is how do you get the local DC name?
Have a look at org.apache.cassandra.db.EndpointSnitchInfo.getDatacenter
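From a client you can also read a node's own DC via the standard system tables (this queries the node you are connected to):

```sql
-- Returns the data center name of the local node,
-- as reported by the configured snitch.
SELECT data_center FROM system.local;
```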
When debugging gossip related problems (is this node really
down/dead/in-some-weird-state) you might have better luck looking at
`nodetool gossipinfo`. The "UN even though everything is bad" thing
might be https://issues.apache.org/jira/browse/CASSANDRA-5913
I'm not sure exactly what happen
On 10/15/2013 08:41 AM, José Elias Queiroga da Costa Araújo wrote:
- is there a way that we can warm up the cache after the
file-based bulk loading, so that we can allow the data to be cached first
in memory, and then afterwards, when we issue the bulk retrieval, the
performance can
Question - is https://issues.apache.org/jira/browse/CASSANDRA-6102 in 1.2.11 or
not? CHANGES.txt says it's not, JIRA says it is.
/Janne (temporarily unable to check out the git repo)
On Oct 22, 2013, at 13:48 , Sylvain Lebresne wrote:
> The Cassandra team is pleased to announce the release of
Another idea is the open source Energy Databus project, which does time series
data and is based on PlayORM actually (ORM is a bad name since it is more NoSQL
patterns and not really relational).
http://www.nrel.gov/analysis/databus/
That Energy Databus project is mainly time series data with som
Thanks Dean. I'll check that page out.
Les
On Wed, Oct 23, 2013 at 7:52 AM, Hiller, Dean wrote:
> PlayOrm supports different types of wide rows like embedded list in the
> object, etc. etc. There is a list of nosql patterns mixed with playorm
> patterns on this page
>
> http://buffalosw.com/w
Hi Vivek,
What I'm looking for are a couple of things as I'm gaining an understanding
of Cassandra. With wide rows and time series data, how do you (or can you)
handle this data in an ORM manner? Now I understand that with CQL3, doing a
"select * from time_series_data" will return the data as mult
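To make that concrete: with CQL3 a wide row comes back transposed into many result rows, one per clustering key value (the table and column names here are illustrative):

```sql
-- Each physical wide row (one partition per sensor) is presented by
-- CQL3 as one logical row per clustering column value.
CREATE TABLE time_series_data (
    sensor_id text,
    event_time timestamp,
    value int,
    PRIMARY KEY (sensor_id, event_time)
);

SELECT * FROM time_series_data WHERE sensor_id = 'sensor-1';
-- One result row per event_time, even though on disk they are
-- columns of a single wide row.
```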
On Wed, Oct 23, 2013 at 5:23 AM, java8964 java8964 wrote:
> We enabled the major repair on every node every 7 days.
>
This is almost certainly the cause of your many duplicates.
If you don't DELETE heavily, consider changing gc_grace_seconds to 34 days
and then doing a repair on the first of the
PlayOrm supports different types of wide rows like embedded list in the object,
etc. etc. There is a list of nosql patterns mixed with playorm patterns on
this page
http://buffalosw.com/wiki/patterns-page/
From: Les Hartzman <lhartz...@gmail.com>
Reply-To: "user@cassandra.apache.org
On 23 October 2013 15:05, Michael Theroux wrote:
> When we made a similar move, for an unknown reason (I didn't hear any
> feedback from the list when I asked why this might be), compaction didn't
> start after we moved from SizedTiered to leveled compaction until I ran
> "nodetool compact ".
One more note,
When we did this conversion, we were on Cassandra 1.1.X. You didn't mention
what version of Cassandra you were running,
Thanks,
-Mike
On Oct 23, 2013, at 10:05 AM, Michael Theroux wrote:
> When we made a similar move, for an unknown reason (I didn't hear any
> feedback from th
When we made a similar move, for an unknown reason (I didn't hear any feedback
from the list when I asked why this might be), compaction didn't start after we
moved from SizedTiered to leveled compaction until I ran "nodetool compact
".
The thread is here:
http://www.mail-archive.com/user@cas
I am developing an application for data storage. All the replication,
routing and data retrieving types of business are handled in my
application. Up to now, the data is stored in memory. Now, I want to use
Cassandra storage engine to flush data from memory into hard drive. I am
not sure if that
Hi,
We have a cluster which we've recently moved to use
LeveledCompactionStrategy. We were experiencing some disk space
issues, so we added two additional nodes temporarily to aid
compaction. Once the compaction had completed on all nodes, we
decommissioned the two temporary nodes.
All nodes now
We enabled the major repair on every node every 7 days.
I think you mean two cases of "failed" writes.
One is the replication failure of a write. Duplication generated from this
kind of "failure" should be very small in my case, because I only parse the data
from 12 nodes, which should NOT contain
http://www.datastax.com/documentation/cql/3.1/webhelp/index.html#cql/cql_reference/select_r.html
On Wed, Oct 23, 2013 at 6:50 AM, Alex N wrote:
> Thanks!
> I can't find it in the documentation...
>
>
>
> 2013/10/23 Cyril Scetbon
>
>> Hi,
>>
>> Now you can ask for the TTL and the TIMESTAMP as s
Thanks!
I can't find it in the documentation...
2013/10/23 Cyril Scetbon
> Hi,
>
> Now you can ask for the TTL and the TIMESTAMP as shown in the following
> example :
>
> cqlsh:k1> select * FROM t1 ;
>
> ise    | filtre | value_1
> -------+--------+---------
> cyril1 |      2 |
Hi,
Now you can ask for the TTL and the TIMESTAMP as shown in the following example
:
cqlsh:k1> select * FROM t1 ;
 ise    | filtre | value_1
--------+--------+---------
 cyril1 |      2 |   49926
 cyril2 |      1 |   18584
 cyril3 |      2 |   31415
cqlsh:k1> select filtre,writetime(filtre),t
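The command is cut off above; given the stated goal of reading the TTL and the TIMESTAMP, the full form presumably looks like this (column names taken from the example table):

```sql
-- Read the write timestamp and remaining TTL of a column alongside
-- its value. writetime() is in microseconds since epoch; ttl() is the
-- seconds remaining, or null if no TTL was set.
SELECT filtre, writetime(filtre), ttl(filtre) FROM t1;
```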
Hi,
I was wondering how could I select column timestamp with CQL. I've been
using Hector so far, and it gives me this option. But I want to use
datastax CQL driver now.
I don't want to mess with this value! just read it. I know I should
probably have separate column with timestamp value created by
Thanks Robert,
For info, if it helps to fix the bug: I'm starting the downgrade. I restarted
all the nodes and did a repair, and there are a lot of errors like this:
ERROR [ValidationExecutor:2] 2013-10-23 08:39:27,558 Validator.java (line
242) Failed creating a merkle tree for [repair
#9f9b7fc0-3bbe-1
Can Kundera work with wide rows in an ORM manner?
What specifically are you looking for? A composite-column-based implementation
can be built using Kundera.
With recent CQL3 developments, Kundera supports most of these. I think the POJO
needs to be aware of the number of fields to be persisted (same as CQL