… correctly into the 3 nodes? Or is something more complex involved?
FYI, I'm following the instructions below, but only doing per-column-family
backup and restore.
http://www.datastax.com/docs/1.2/operations/backup_restore
Thanks,
Ron
… supposed to be the only one - shrug.
Ron
On Feb 8, 2013, at 10:32 AM, Ron Siemens wrote:
> INFO [Thrift:1977] 2013-02-07 17:58:01,292 MigrationManager.java (line 174) Drop Keyspace 'Recommender'
… doesn't change anything.
Ron
Here is part of my application log and the Cassandra log from the relevant time:
2013-02-07 17:58:02,012 [pool-10-thread-26] INFO publisher.GraphPublisher - RelationProcessor started for items [47070, 34334, 34334, 34334, 42297, 34334, 34334]
2013-02-07 1…
"-Dcassandra.replace_token="
when bringing up the new node, this problem wasn't exhibited. Everything
worked smoothly.
Ron
On Oct 10, 2012, at 12:38 PM, Ron Siemens wrote:
I witnessed the same behavior as reported by Edward and James.
Removing the host from its own seed list does not solve the problem. Removing
it from the config of all nodes and restarting each, then restarting the failed
node, worked.
Ron
On Sep 12, 2012, at 4:42 PM, Edward Sargisson wrote:
…service side did.
Unless I see the error again, I'm guessing there was some data left over between
trials.
Ron
On May 18, 2012, at 3:38 PM, Ron Siemens wrote:
>
> We have some production Solaris boxes so I can't use SnappyCompressor (no
> library included for Solaris), so I …
…250)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
at java.lang.Thread.run(Thread.java:722)
It sure seems like my pass-through compressor triggered this. Any thoughts?
Ron
>>
>> Does the updated reporting in 1.1.0 include the replicated data and before
>> it didn't?
>
> Yes.
>
Thanks for verifying that.
Ron
…compression - Solaris not included. So I had to update all my column family
creation code to explicitly set compression to the previous JavaDeflate
default. That new default was annoying.
Ron
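For reference, a minimal sketch of pinning a column family to Deflate compression at creation time with the raw Thrift API of that era; the keyspace and column family names are placeholders, and the thread doesn't show what Ron's own creation code looks like:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.cassandra.thrift.Cassandra;
    import org.apache.cassandra.thrift.CfDef;
    import org.apache.thrift.protocol.TBinaryProtocol;
    import org.apache.thrift.transport.TFramedTransport;
    import org.apache.thrift.transport.TSocket;
    import org.apache.thrift.transport.TTransport;

    public class CreateCfWithDeflate
    {
        public static void main(String[] args) throws Exception
        {
            // Thrift connection to a single node; host and port are placeholders.
            TTransport transport = new TFramedTransport(new TSocket("localhost", 9160));
            Cassandra.Client client = new Cassandra.Client(new TBinaryProtocol(transport));
            transport.open();
            client.set_keyspace("Recommender");

            // Ask for Deflate (java.util.zip) compression explicitly instead of
            // relying on the SnappyCompressor default, which has no Solaris library.
            Map<String, String> compression = new HashMap<String, String>();
            compression.put("sstable_compression", "DeflateCompressor");
            compression.put("chunk_length_kb", "64");

            CfDef cfDef = new CfDef("Recommender", "ItemRelations"); // CF name is illustrative
            cfDef.setCompression_options(compression);

            client.system_add_column_family(cfDef);
            transport.close();
        }
    }

The short "DeflateCompressor" name should resolve against org.apache.cassandra.io.compress; a fully qualified class name works as well.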
… I can just create a ColumnFamily per field being indexed. I can now easily
access and update the indexes for a particular field.
I'm wondering if anyone has also contemplated this
Column-Family-per-Index-Field option or is using it, and has any thoughts or
critique regarding it.
Ron
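A rough sketch of that Column-Family-per-Index-Field layout with a Hector client, purely illustrative: one index CF per field, with the indexed value as the row key and the item key as the column name. The cluster, keyspace, CF, and key names below are assumptions, not taken from Ron's schema:

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.ColumnSlice;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;
    import me.prettyprint.hector.api.query.SliceQuery;

    public class PerFieldIndexExample
    {
        private static final StringSerializer SE = StringSerializer.get();

        public static void main(String[] args)
        {
            Cluster cluster = HFactory.getOrCreateCluster("test-cluster", "localhost:9160");
            Keyspace ks = HFactory.createKeyspace("Recommender", cluster);

            // Update the index CF for the "category" field:
            // row key = indexed value, column name = item key, empty column value.
            Mutator<String> mutator = HFactory.createMutator(ks, SE);
            mutator.addInsertion("electronics", "Index_category",
                    HFactory.createStringColumn("item-47070", ""));
            mutator.execute();

            // Read back every item indexed under that value for this field.
            SliceQuery<String, String, String> query = HFactory.createSliceQuery(ks, SE, SE, SE);
            query.setColumnFamily("Index_category");
            query.setKey("electronics");
            query.setRange("", "", false, 1000);
            ColumnSlice<String, String> slice = query.execute().get();
            System.out.println(slice.getColumns().size() + " items indexed under 'electronics'");
        }
    }

Updating a single field's index then touches exactly one column family, which is the access pattern the per-field layout is meant to make easy.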
…Cassandra in your setup and teardown.
Cheers, Ron
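If the truncated line above refers to running Cassandra from test setup and teardown, one common approach in that era was an in-process server via cassandra-all's EmbeddedCassandraService. This is only a sketch under that assumption; the config path is a placeholder and JUnit is assumed as the test framework:

    import org.apache.cassandra.service.EmbeddedCassandraService;
    import org.junit.AfterClass;
    import org.junit.BeforeClass;
    import org.junit.Test;

    public class EmbeddedCassandraTest
    {
        private static EmbeddedCassandraService cassandra;

        @BeforeClass
        public static void setUp() throws Exception
        {
            // Point the in-process server at a test config; the path is a placeholder.
            System.setProperty("cassandra.config", "file:src/test/resources/cassandra.yaml");
            cassandra = new EmbeddedCassandraService();
            cassandra.start();
        }

        @AfterClass
        public static void tearDown()
        {
            // Drop or truncate the test keyspace with your client here; the embedded
            // server itself lives until the JVM exits.
        }

        @Test
        public void connects() throws Exception
        {
            // Exercise client code against localhost:9160 here.
        }
    }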
From: Yang <tedd...@gmail.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Thu, 9 Jun 2011 21:36:30 +0200
To: "…
… in 17 + 758 ms
DEBUG Retrieved LIT / 7291 rows, in 17 + 745 ms
On Feb 24, 2011, at 3:39 PM, Ron Siemens wrote:
I failed to mention: this is just doing repeated data retrievals using the
index.
> ...
>
> Sample run: Secondary index.
>
> DEBUG Retrieved THS / 7293 rows, in 2012 ms
> DEBUG Retrieved THS / 7293 rows, in 1956 ms
> DEBUG Retrieved THS / 7293 rows, in 1843 ms
...
…wns. Both implementations are using the same column-processing/deserialization
code, so that doesn't seem to be to blame. What gives?
Ron
Sample run: Secondary index.
DEBUG Retrieved THS / 7293 rows, in 2012 ms
DEBUG Retrieved THS / 7293 rows, in 1956 ms
DEBUG Retrieved THS / 7293 rows, in 1843 ms
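For context, "retrieval using the index" with a Hector-style client typically goes through get_indexed_slices; below is a minimal sketch of timing such a query in the spirit of the DEBUG lines above. The client library, column family, indexed column name, and row count are assumptions, not what Ron's code actually does:

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.beans.OrderedRows;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.query.IndexedSlicesQuery;

    public class IndexedRetrievalTiming
    {
        public static void main(String[] args)
        {
            StringSerializer se = StringSerializer.get();
            Cluster cluster = HFactory.getOrCreateCluster("test-cluster", "localhost:9160");
            Keyspace ks = HFactory.createKeyspace("Recommender", cluster);

            // Fetch every row whose indexed "source" column equals "THS" and time it.
            long start = System.currentTimeMillis();
            IndexedSlicesQuery<String, String, String> query =
                    HFactory.createIndexedSlicesQuery(ks, se, se, se);
            query.setColumnFamily("Documents");
            query.addEqualsExpression("source", "THS");
            query.setRowCount(10000);          // real code would page instead of one big fetch
            query.setRange("", "", false, 1000); // column slice per row
            OrderedRows<String, String, String> rows = query.execute().get();
            long elapsed = System.currentTimeMillis() - start;

            System.out.println("Retrieved THS / " + rows.getCount() + " rows, in " + elapsed + " ms");
        }
    }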
…range predicates.)
At this point I'm left with no other alternatives (since I want to delete a
whole row and not specific columns/supercolumns within a row).
Using the remove command in a loop has serious performance implications.
Is there any solution to this problem?
Thanks,
Ron
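One mitigation that fits the whole-row-delete constraint is to queue the deletions client-side and ship them as a single batch_mutate rather than one remove round trip per row. A minimal Hector-flavoured sketch follows; the client library is an assumption, and the keys and column family name are placeholders:

    import java.util.Arrays;
    import java.util.List;

    import me.prettyprint.cassandra.serializers.StringSerializer;
    import me.prettyprint.hector.api.Cluster;
    import me.prettyprint.hector.api.Keyspace;
    import me.prettyprint.hector.api.factory.HFactory;
    import me.prettyprint.hector.api.mutation.Mutator;

    public class BatchRowDelete
    {
        public static void main(String[] args)
        {
            StringSerializer se = StringSerializer.get();
            Cluster cluster = HFactory.getOrCreateCluster("test-cluster", "localhost:9160");
            Keyspace ks = HFactory.createKeyspace("Recommender", cluster);

            // Row keys to drop; in practice these come from whatever scan or index
            // identified them (placeholder values here).
            List<String> doomedKeys = Arrays.asList("row-1", "row-2", "row-3");

            // Queue one whole-row deletion per key, then send them in a single
            // batch instead of issuing a remove() round trip per row.
            Mutator<String> mutator = HFactory.createMutator(ks, se);
            for (String key : doomedKeys)
            {
                mutator.addDeletion(key, "Documents"); // no column name => delete the whole row
            }
            mutator.execute();
        }
    }

Batching removes the per-row network round trips, though each deleted row still leaves a tombstone to be compacted away later.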