Thanks.

I run it on a Linux server: dual-processor Intel(R) Xeon(R) CPU E5440 @
2.83GHz (4 cores each) and 8 GB RAM.

Just to give an example of data inserted:
INSERT INTO traffic_by_day(segment_id, day, event_time, traffic_value)
VALUES (100, 84, '2013-04-03 07:02:00', 79);

Here is the schema:

CREATE TABLE traffic_by_day (
  segment_id int,
  day int,
  event_time timestamp,
  traffic_value int,
  PRIMARY KEY ((segment_id, day), event_time)
) WITH
  bloom_filter_fp_chance=0.010000 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.000000 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=0.100000 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'LZ4Compressor'};
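
Since a full-table SELECT COUNT(*) scans every partition across the cluster
and commonly times out once the table grows large, one workaround is to count
within a single partition, which only touches one partition's replicas. A
sketch against the schema above (the values 100 and 84 are just the ones from
the sample INSERT, not anything special):

-- Count rows in one (segment_id, day) partition only,
-- instead of scanning the whole table:
SELECT COUNT(*) FROM traffic_by_day
WHERE segment_id = 100 AND day = 84;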


On Tue, Mar 25, 2014 at 4:58 PM, Michael Shuler <mich...@pbandjelly.org> wrote:

> On 03/25/2014 10:36 AM, shahab wrote:
>
>> In our application, we need to insert roughly 300000 sensor data
>> points every 30 seconds (basically, we need to store time-series
>> data). I wrote a simple Java program to insert 300000 random data
>> points every 30 seconds for 10 iterations, and measured the number of
>> entries in the table after each insertion. But after iteration 8
>> (i.e. after inserting 1500000 sensor data points), "SELECT COUNT(*)
>> ..." throws a timeout exception and no longer works. I even tried to
>> execute "SELECT COUNT(*) ..." using the DataStax DevCenter GUI, but I
>> got the same result.
>>
>
> If you could post your schema, folks may be able to help a bit better.
> Your C* version couldn't hurt.
>
> cqlsh> DESC KEYSPACE $your_ks;
>
> --
> Kind regards,
> Michael
>