Hi Andre,
I am just a Cassandra user, so the following suggestions may not be valid.
I assume you are using cassandra-cli and connecting to some specific node.
You can check the following steps (a minimal cli sketch follows them):
1. Can you still reproduce this issue? (If not, it may have been a transient system/node issue.)
2. What's the result when q
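For step 1, a minimal cassandra-cli sketch (the host, keyspace and column family
names here are hypothetical, just for illustration): connect directly to the node
in question and re-run the same query, e.g.

cassandra-cli -h node1.example.com -p 9160
[default@unknown] use MyKeyspace;
[default@MyKeyspace] get MyColumnFamily[somekey] limit 33;

If the slowness does not reproduce on a fresh connection, it may well have been
a transient problem on that node rather than something in Cassandra itself.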
Can anyone shed some light on this matter, please? I don't want to just
increase the timeout without understanding why this is happening. Some pointers
for me to investigate would be helpful.
I'm running Cassandra 1.1.5 and these are wide rows (lots of small columns). I
would think that fetching
I don't know enough about the code level implementation to comment on the
> validity of the fix. My main issue is that we use a lot of TTL columns and
> in many cases all columns have a TTL that is less than gc_grace. The
> problem arises when the columns are gc-able and are compacted away on one
On Tue, Nov 6, 2012 at 8:27 AM, horschi wrote:
>
>
>> it is a big itch for my use case. Repair ends up streaming tens of
>> gigabytes of data which has expired TTL and has been compacted away on some
>> nodes but not yet on others. The wasted work is not nice plus it drives up
>> the memory usa
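For anyone following along, a minimal cassandra-cli sketch of the pattern being
described (the column family and numbers are hypothetical, not taken from the
thread): every column is written with a TTL shorter than the column family's
gc_grace, so an expired column can be purged by compaction on one replica while
still sitting on another replica that has not compacted yet.

[default@MyKeyspace] create column family Events with comparator = UTF8Type
    and key_validation_class = UTF8Type and default_validation_class = UTF8Type
    and gc_grace = 864000;
[default@MyKeyspace] set Events[event1][payload] = 'expires long before gc_grace' with ttl = 86400;

A repair that runs in that window can then stream the already-expired data back
to the node that had compacted it away, which is the wasted work described above.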
It is licensed under the Apache License, so the answer is no. If you are interested,
you can read the license text on the Apache site:
http://www.apache.org/licenses/LICENSE-2.0.html
On Tue, Nov 6, 2012 at 10:35 AM, Manuel Alejandro Ortiz Gil <
manuel24or...@gmail.com> wrote:
> Hi, I want to use Cassandra for a
Thoughts?
On Tue, Nov 6, 2012 at 3:58 AM, Ertio Lew wrote:
> I need to store (1)posts written by users, (2)along with activity data by
> other users on these posts & (3) some counters for each post like views
> counts, likes counts, etc. So for each post, there are 3 categories of data
> associa
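One possible starting point, sketched in cassandra-cli (all names here are made
up, and the right split depends on your read patterns, so treat this only as an
illustration): keep the post content, the per-post activity, and the counters in
separate column families, since counter columns cannot live in the same column
family as regular columns.

[default@MyKeyspace] create column family Posts with comparator = UTF8Type
    and key_validation_class = UTF8Type and default_validation_class = UTF8Type;
[default@MyKeyspace] create column family PostActivity with comparator = TimeUUIDType
    and key_validation_class = UTF8Type;
[default@MyKeyspace] create column family PostCounters with comparator = UTF8Type
    and key_validation_class = UTF8Type and default_validation_class = CounterColumnType;
[default@MyKeyspace] incr PostCounters[post123][views];

Each post id is then a row key in all three column families, so a post's content,
its activity timeline and its counters can each be fetched with a single row read.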
Hi Bryan,
As the OP of this thread,
sorry for stealing the thread btw ;-)
> it is a big itch for my use case. Repair ends up streaming tens of
> gigabytes of data which has expired TTL and has been compacted away on some
> nodes but not yet on others. The wasted work is not nice plus it driv
Sure, in our playing around, we have an awesome logback configuration for
development time only that shows warning and severe levels in red in Eclipse and
lets you click on every single log line, taking you right to the code that
logged it…(thought you might enjoy it)...
https://github.com/deanhiller/playorm/bl
Hello.
I have an SCF that is acting strange. See these 2 query times:
get NamespaceRevision[3cd88d97-ffde-44ca-8ae9-5336caaebc4e] limit 33;
...
Returned 33 results.
Elapsed time: 41 msec(s).
get NamespaceRevision[3cd88d97-ffde-44ca-8ae9-5336caaebc4e] limit 34;
...
Returned 34 results.
Elapsed ti
Nice, Dean.
I'm not so sure we would run the server, but we'd definitely be interested
in the logback adaptor.
(We would then just access the data via Virgil (over REST), with a thin
javascript UI)
Let me/us know if you end up putting it out there. We intend to centralize
logging sometime over the n
When I did the upgrade from 1.0.9 to 1.1.6, I had this same issue.
I then fixed it with the following steps, on each of the nodes.
[default@unknown] use system;
Authenticated to keyspace: system
[default@system] list HintsColumnFamily;
Using default limit of 100
Using default column limi