On Thu, Feb 27, 2014 at 12:53 AM, Clint Kelly wrote:
> Thanks for your help everyone.
>
> Sylvain, as I understand it, the scenario I described above is not
> resolved by CASSANDRA-6561, correct?
>
Well, no, my point is that it kind of is resolved. At least if we're still
talking about:
"If th
On Thu, Feb 27, 2014 at 1:00 AM, Clint Kelly wrote:
> Hi all,
>
> Is there any way to use the DataStax Java driver to combine multiple
> SELECT statements into a single RPC? I assume not (I could not find
> anything about this in the documentation), but I just wanted to check.
>
The short answe
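For what it's worth: as far as I know there is no single-RPC multi-SELECT in the driver, but you can overlap the round trips by executing the statements asynchronously. A minimal sketch, assuming driver 2.0-era APIs, with session set up elsewhere and table/column names made up:

import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.ResultSetFuture;
import java.util.ArrayList;
import java.util.List;

// Fire both SELECTs without waiting, then collect the results.
List<ResultSetFuture> futures = new ArrayList<ResultSetFuture>();
futures.add(session.executeAsync("SELECT * FROM foo WHERE bar = 1"));
futures.add(session.executeAsync("SELECT * FROM baz WHERE qux = 2"));
for (ResultSetFuture future : futures) {
    ResultSet rs = future.getUninterruptibly(); // blocks for this result only
    // ... consume rs ...
}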
Hi all,
I'm using Netflix/Astyanax as a Java Cassandra client to access a Cassandra DB.
I need to paginate through all columns in a row, and I found the document at
https://github.com/Netflix/astyanax/wiki/Reading-Data
about how to do that.
But my requirement is a little different. I don't want
Hi,
we have a cluster with 3 DCs, and for one DC (stats), RF=0 for a
keyspace using NetworkTopologyStrategy.
cqlsh> SELECT * FROM system.schema_keyspaces WHERE keyspace_name='foobar';
keyspace_name | durable_writes | strategy_class | strategy_options
--------------+----------------+----------------+------------------
Hi all,
I'm trying to do a cogroup with five relations that I load from Cassandra
beforehand.
In a single-node, local Cassandra test environment the script works fine,
but when I try to execute it on a cluster of AWS instances with only one
slave in the Hadoop cluster and one seed Cassandra node I ha
I just caught that a node was down based on running nodetool status on a
different node. I tried to ssh into the downed node at that time and it
was very slow logging on. Looking at the gc.log file, there was a ParNew
that only took 0.09 secs. Yet the overall application threads stop time is
315
That sounds a lot like death by paging.
On 27 February 2014 16:29, Frank Ng wrote:
> I just caught that a node was down based on running nodetool status on a
> different node. I tried to ssh into the downed node at that time and it
> was very slow logging on. Looking at the gc.log file, there
I forgot to mention: I am using Cassandra 2.0.4, Hadoop 1.2.1 and Pig 0.12.
Thanks
2014-02-27 17:29 GMT+01:00 Miguel Angel Martin junquera <
mianmarjun.mailingl...@gmail.com>:
> Hi all,
>
> I'm trying to do a cogroup with five relations that I load from Cassandra
> beforehand.
>
> In a single-node, local Cas
Hello everyone,
I'm trying to update some column families to start using the CQL3 drivers
instead of Hector (the Java driver that uses Thrift; I assume any changes
that would allow Thrift to work would let Hector work, but there may be
some idiosyncrasies with Hector I don't know about). I'll repo
We have swap disabled. Can death by paging still happen?
On Thu, Feb 27, 2014 at 11:32 AM, Benedict Elliott Smith <
belliottsm...@datastax.com> wrote:
> That sounds a lot like death by paging.
>
>
> On 27 February 2014 16:29, Frank Ng wrote:
>
>> I just caught that a node was down based on run
I'd recommend starting with the very latest Astyanax+DS Client hybrid as
that will make the transition easier. See this Astyanax wiki page for
details:
https://github.com/Netflix/astyanax/wiki/Astyanax-over-Java-Driver
CQL3 metadata is basically just composites under the hood, so it will
most
Yes, it is expected behavior since 1.2.5
(https://issues.apache.org/jira/browse/CASSANDRA-5424).
Since you set foobar not to replicate to the stats DC, the primary range
of the foobar keyspace for nodes in stats is empty.
On Thu, Feb 27, 2014 at 10:16 AM, Fabrice Facorat
wrote:
> Hi,
>
> we have a cluster wi
So if I understand correctly from CASSANDRA-5424 and CASSANDRA-5608, since
the stats DC doesn't own data, repair -pr will not repair the data; only a
full repair will do it.
Once we add an RF to the stats DC, repair -pr will work again. Is that correct?
2014-02-27 19:15 GMT+01:00 Yuki Morishita :
> Yes, it i
Yes.
On Thu, Feb 27, 2014 at 12:49 PM, Fabrice Facorat
wrote:
> So if I understand correctly from CASSANDRA-5424 and CASSANDRA-5608, since
> the stats DC doesn't own data, repair -pr will not repair the data; only a
> full repair will do it.
>
> Once we add an RF to the stats DC, repair -pr will work again.
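For reference, adding a replica in the stats DC is an ALTER KEYSPACE. Note that the replication map you pass replaces the old one, so restate the RFs of your existing DCs as well; the DC names and RF values below are illustrative only:

cqlsh> ALTER KEYSPACE foobar WITH replication =
  {'class': 'NetworkTopologyStrategy', 'dc1': 3, 'dc2': 3, 'stats': 1};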
I am having an OOM during major compaction on one of the column families,
where there are a lot of SSTables (33,000) to be compacted. Is there any
other way for them to be compacted? Any help will be really appreciated.
Here are the details
/opt/cassandra/current/bin/nodetool -h us1emscsm-01 compact tom
Hey folks,
I am dealing with legacy CFs where super_column has been used, and the
Python client pycassa is being used. An example is given below. My question
here is: can I make use of include_timestamp to select data between two
returned timestamps, e.g. between 1393516744591751 and 1393516772131811?
What was the impetus for turning up the commitlog_segment_size_in_mb?
Also, in nodetool tpstats, what are the values for the FlushWriter line?
On Wed, Feb 26, 2014 at 12:18 PM, Christopher Wirt wrote:
> We're running 2.0.5, recently upgraded from 1.2.14.
>
>
>
> Sometimes we are seeing Commi
One big downside of major compaction is that (depending on your
Cassandra version) the bloom filter size is pre-calculated. Thus Cassandra
needs enough heap for your existing 33k+ SSTables and the new large
compacted one. In the past this happened to us when the compaction thread
got hung up,
On Thu, Feb 27, 2014 at 11:09 AM, Nish garg wrote:
> I am having an OOM during major compaction on one of the column families,
> where there are a lot of SSTables (33,000) to be compacted. Is there any
> other way for them to be compacted? Any help will be really appreciated.
>
You can use user defined compaction.
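That is, you can hand the CompactionManager MBean an explicit list of SSTables over JMX. A minimal sketch, assuming the two-argument forceUserDefinedCompaction signature of 1.2/2.0-era builds (host, keyspace and file names are placeholders):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class UserDefinedCompaction {
    public static void main(String[] args) throws Exception {
        // 7199 is Cassandra's default JMX port.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://localhost:7199/jmxrmi");
        JMXConnector jmxc = JMXConnectorFactory.connect(url);
        try {
            MBeanServerConnection mbs = jmxc.getMBeanServerConnection();
            ObjectName cm = new ObjectName("org.apache.cassandra.db:type=CompactionManager");
            // Keyspace plus a comma-separated list of Data.db files to compact together.
            mbs.invoke(cm, "forceUserDefinedCompaction",
                    new Object[] {"tom", "tom-cf1-ic-1-Data.db,tom-cf1-ic-2-Data.db"},
                    new String[] {"java.lang.String", "java.lang.String"});
        } finally {
            jmxc.close();
        }
    }
}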
Well, paging still happens due to mmapped file I/O; however, whilst this
could easily cause a slow login, it would struggle to cause a 315s GC pause.
A slow network should also never cause this, though: the network threads
are simply caught by any safepoint on returning to the VM, so they don't
delay GC.
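If you want to attribute pauses like that 315s one, HotSpot can log total application stopped time and per-safepoint statistics. These are standard HotSpot flags on JDK 6/7-era JVMs, added here via cassandra-env.sh:

JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
JVM_OPTS="$JVM_OPTS -XX:+PrintSafepointStatistics -XX:PrintSafepointStatisticsCount=1"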
Thanks for replying.
We are on Cassandra 1.2.9.
We have a time-series-like data structure where we need to keep only the
last 6 hours of data. So we expire data using an expireddatetime column on
the column family, and then we run an expire script via cron to create
tombstones. We don't use TTL yet and are planning
If you can programmatically roll over onto a new column family every 6
hours (or every day or other reasonable increment), and then just drop your
existing column family after all its columns have expired, you
could skip your compaction entirely. It was not clear to me from your
descript
Hello Tupshin,
Yes, all the data needs to be kept for just the last 6 hours. Yes, changing
to a new CF every 6 hours solves the compaction issue, but between changes
we will have less than 6 hours of data. We can use CF1 and CF2 and truncate
them one at a time every 6 hours in a loop, but we need some kin
You are right that modifying your code to access two CFs is a hack, and not
an ideal solution, but I think it should be pretty easy to implement, and
would help you get out of this jam pretty quickly. Not saying you should go
down that path, but if you lack better options, that would probably be my
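A minimal sketch of that CF1/CF2 rotation, for what it's worth; all names and the 6-hour window are just illustrative:

import java.util.concurrent.TimeUnit;

public final class CfRotation {
    private static final long WINDOW_MS = TimeUnit.HOURS.toMillis(6);

    // Alternates between "events_0" and "events_1" every 6 hours; writers and
    // readers use the current CF while a cron job truncates the idle one.
    public static String currentCf() {
        long bucket = System.currentTimeMillis() / WINDOW_MS;
        return "events_" + (bucket % 2);
    }
}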
Folks,
Is there a way to name the variables in a prepared statement when using the
DataStax Java driver?
For example, instead of doing:
ByteBuffer byteBuffer = ... // some application logic
String query = "SELECT * FROM foo WHERE bar = ?";
PreparedStatement preparedStatement = session.prepare(query);
Ah never mind, I see: currently you can refer to the ?'s by name by using
the name of the column to which the ? refers. And this works as long as
each column is present only once in the statement.
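A minimal sketch of binding by column name, assuming driver 2.0-era APIs, with session and byteBuffer as in my original question:

import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.ResultSet;

PreparedStatement ps = session.prepare("SELECT * FROM foo WHERE bar = ?");
BoundStatement bound = ps.bind();
bound.setBytes("bar", byteBuffer); // "bar" names the column the ? refers to
ResultSet rs = session.execute(bound);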
Sorry for the extra list traffic!
On Thu, Feb 27, 2014 at 7:33 PM, Clint Kelly wrote:
> Folks,
>
All,
Is there any way to have inequality comparisons on multiple clustering
columns in a WHERE clause in CQL? For example, I'd like to do:
select * from foo where fam = 'Info' and qual > 'A' and qual < 'D' and
version > 2013 ALLOW FILTERING;
I get an error:
Bad Request: PRIMARY KEY part
Clint, what you want is this:
https://issues.apache.org/jira/browse/CASSANDRA-4851
select * from foo where key=something and fam = 'Info' and (qual,version) >
('A',2013) and qual < 'D' ALLOW FILTERING
On Fri, Feb 28, 2014 at 6:57 AM, Clint Kelly wrote:
> All,
>
> Is there any way to have ineq
You can page yourself using the withColumnRange method (see the slice query
example on the page you linked to). What you do is save the last column you
got from the previous query and set that as the start of the range you pass
to withColumnRange. You don't need to set an end of a range.
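In code, that loop might look like this; a rough sketch assuming Astyanax's RangeBuilder, with keyspace, CF and rowKey set up elsewhere. Note the start column is inclusive, so each page's first column repeats the previous page's last and is skipped:

import com.netflix.astyanax.model.Column;
import com.netflix.astyanax.model.ColumnList;
import com.netflix.astyanax.util.RangeBuilder;

final int PAGE_SIZE = 100;
String start = null;
ColumnList<String> page;
do {
    RangeBuilder range = new RangeBuilder().setLimit(PAGE_SIZE);
    if (start != null) {
        range.setStart(start); // inclusive: we'll see the saved column again
    }
    page = keyspace.prepareQuery(CF)
            .getKey(rowKey)
            .withColumnRange(range.build())
            .execute()
            .getResult();
    for (Column<String> c : page) {
        if (c.getName().equals(start)) {
            continue; // already processed on the previous page
        }
        start = c.getName();
        // ... process c ...
    }
} while (page.size() == PAGE_SIZE);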