On Mon, Dec 29, 2014 at 5:20 PM, Sam Klock wrote:
>
>
> Our investigation led us to logic in Cassandra used to paginate scans
> of rows in indexes on composites. The issue seems to be the short
> algorithm Cassandra uses to select the size of the pages for the scan,
> partially given on the follo
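The logic in question is the page-size computation in
o.a.c.db.index.composites.CompositesSearcher. An approximate reconstruction
from the 2.0/2.1-era source (quoted from memory of the CASSANDRA-8550
discussion, so check the exact release you run):

    private int meanColumns = Math.max(index.getIndexCfs().getMeanColumns(), 1);
    // Fetching fewer than 2 rows can break paging if the first
    // row doesn't satisfy all clauses.
    private int rowsPerQuery = Math.max(Math.min(filter.maxRows(), filter.maxColumns() / meanColumns), 2);

When meanColumns is large relative to filter.maxColumns(), rowsPerQuery can
collapse to the floor of 2, so the index scan proceeds only two rows per page.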
Hi!
Yes, since all the writes for a partition (or row, if you speak Thrift) always
go to the same replicas, you will need to design to avoid hotspots: a row keyed
purely by day will cause all the writes for a single day to go to the same
replicas, so those nodes will have to work really hard for a day, a
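A common mitigation is to add a bucket to the partition key so that one day's
writes fan out across several partitions, and therefore several replica sets.
A minimal sketch, with made-up table and column names:

    CREATE TABLE clicks_by_day (
        day text,       -- e.g. '2014-12-29'
        bucket int,     -- e.g. hash(client_id) % 16
        ts timeuuid,
        payload text,
        PRIMARY KEY ((day, bucket), ts)
    );

The trade-off is that reading a full day now means querying all 16 buckets and
merging the results.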
We are having a lot of problems with release 2.1.2. It was suggested here
we should downgrade to 2.1.1 if possible.
For the experts out there, do you foresee any issues in doing this?
Thanks!
Phil
Hi there
I was facing a similar requirement recently, i.e. UPDATE ... IF EXISTS, and
I found a work-around.
CREATE TABLE my_table(
    partition_key int,
    duplicate_partition_key int,
    value text,
    PRIMARY KEY(partition_key));
At the beginning, I tried to query with: UPDATE my_tab
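Presumably the work-around the duplicate_partition_key column enables is to
condition the update on it, since an LWT condition cannot reference the
partition key itself. A sketch, with illustrative values:

    -- The insert populates both copies of the key:
    INSERT INTO my_table (partition_key, duplicate_partition_key, value)
    VALUES (1, 1, 'initial');

    -- This update applies only if the row already exists, because the
    -- IF clause reads the duplicated column:
    UPDATE my_table SET value = 'updated'
    WHERE partition_key = 1
    IF duplicate_partition_key = 1;

If the row does not exist, the condition returns [applied] = False instead of
upserting, which emulates UPDATE ... IF EXISTS.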
On Tue, Dec 30, 2014 at 9:42 AM, Phil Burress
wrote:
> We are having a lot of problems with release 2.1.2. It was suggested here
> we should downgrade to 2.1.1 if possible.
>
> For the experts out there, do you foresee any issues in doing this?
>
Not sure if advice from the person who suggested
On Mon, Dec 29, 2014 at 3:24 PM, mck wrote:
>
> Especially in CASSANDRA-6285 i see some scary stuff went down.
>
> But there are no outstanding bugs that we know of, are there?
>
Right, the question is whether you believe that 6285 has actually been
fully resolved.
It's relatively plausible tha
Thanks Rob.
On Tue, Dec 30, 2014 at 1:38 PM, Robert Coli wrote:
> On Tue, Dec 30, 2014 at 9:42 AM, Phil Burress
> wrote:
>
>> We are having a lot of problems with release 2.1.2. It was suggested here
>> we should downgrade to 2.1.1 if possible.
>>
>> For the experts out there, do you foresee an
On Mon, Dec 29, 2014 at 6:05 AM, Ajay wrote:
> In my case, Cassandra is the only storage. If the counters become incorrect,
> they can't be corrected.
>
Cassandra counters are not appropriate for this use case, if correctness is
a requirement.
=Rob
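Where exact counts matter, one common alternative to counters is to record
each click as its own row keyed by a unique id, so retried writes are
idempotent (replaying the same INSERT is harmless, unlike replaying a counter
increment). A sketch with illustrative names:

    CREATE TABLE click_events (
        link_id text,
        click_id timeuuid,   -- generated once per click; safe to retry
        user_id text,
        PRIMARY KEY (link_id, click_id)
    );

Counting then happens at read/aggregation time rather than at write time.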
Hi,
We have a table in our production Cassandra that is spread across 11,369
SSTables. The average SSTable count for the other tables is around 15, and
their read latency is much lower.
I tried to run a manual compaction (nodetool compact my_keyspace my_table),
but then the node started spending ~
On Tue, Dec 30, 2014 at 3:12 PM, Mikhail Strebkov
wrote:
> We have a table in our production Cassandra that is spread across 11,369
> SSTables. The average SSTable count for the other tables is around 15, and
> their read latency is much lower.
>
Unthrottle compaction; that's an insane numbe
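Concretely, the cap set by compaction_throughput_mb_per_sec in cassandra.yaml
can be lifted at runtime, with 0 meaning unlimited:

    nodetool setcompactionthroughput 0
    nodetool compactionstats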
We have also hit some problems with 2.1.2, but I think they can be worked around.
First, we don't use incremental repair.
Second, we restart the node after repair; this releases the tmplink SSTables.
Third, we don't use the stop COMPACTION command.
If you read the 2.1.2 release notes, you'll find it solves some issues wit
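On the first point: in 2.1, nodetool repair is a full (non-incremental) repair
unless the incremental flag is passed explicitly, so avoiding incremental
repair just means running the plain form:

    nodetool repair my_keyspace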
Thanks Janne and Rob.
The idea is this: store the user clicks in Cassandra, and have a scheduler
count/aggregate the clicks per link or ad hourly/daily/monthly and store the
results in MySQL (or maybe in Cassandra itself).
Since tombstones will be deleted only after some days (as per
configuration),
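On the read side, the hourly job against a raw-events table like the
click_events sketch above could be a ranged count per partition (minTimeuuid
is the standard CQL function; names and values are illustrative):

    SELECT count(*) FROM click_events
    WHERE link_id = 'ad42'
      AND click_id >= minTimeuuid('2014-12-30 00:00+0000')
      AND click_id < minTimeuuid('2014-12-30 01:00+0000');

Once the aggregate is persisted to MySQL, the raw rows can be deleted or given
a TTL, which is where the tombstone lifetime mentioned above comes in.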