plications).
2. What is the recommended upgrade process? Currently we have a 3-node 1.0.6
cluster in production. Can we upgrade node by node? If we do, will the
other 1.0.6 nodes recognize the 1.1.X nodes without any issue?
I would appreciate the experts' comments on this. Many thanks.
/Roshan
--
Thanks Aaron. My major concern is the node-by-node upgrade, because we are
currently running 1.0.6 in production and the plan is to upgrade a single
node to 1.1.2 at a time.
Any comments?
Thanks.
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/Concerns-about
Hi Brian
This is wonderful news for me, because we use a lot of Spring support in
the project. Good luck, and keep posting.
Cheers
/Roshan.
--
r);
Other than the above 2 statements, I am not passing any configuration to
Hector to build up the connections.
What I noticed is that Hector always uses the zeroth element of the server
list URL and keeps trying to connect to that same server even if it fails.
Could someone help me to solve this Hector fa
Hi
I got the exception below in the Cassandra log while doing a *drain* via
*nodetool* before shutting down one node in a 3-node development
Cassandra 1.1.2 cluster.
2012-07-30 09:37:45,347 ERROR [CustomTThreadPoolServer] Thrift error
occurred during processing of message.
org.apache.thrift
should I need to do the drain without
these exceptions?
Thanks
/Roshan
--
(ThreadPoolExecutor.java:886)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
at java.lang.Thread.run(Thread.java:662)
Any thoughts?
/Roshan
--
Thanks for pointing me to the solution. So that means I should upgrade the
1.0.6 cluster to 1.0.11 first, then upgrade to 1.1.2. Am I right?
Thanks
/Roshan
--
safely ignore?
Thanks
/Roshan
--
for token: 113427455640312814857969558651062452224 with IP: /10.50.50.111
2012-08-08 08:56:54,383 INFO [HintedHandOffManager] Finished hinted handoff
of 0 rows to endpoint /10.50.50.111
How can I remove these hints from the node?
Thanks
/Roshan
--
I managed to delete the hints from JConsole by using the HintedHandOffManager
MBean.
Thanks.
--
Hi
In our production environment we have 3 Cassandra 1.0.11 nodes.
For a particular reason, I want to move the seed role from the current seed
node to another node, and once the seed has changed, remove the previous
node from the cluster.
How can I do that?
Thanks.
--
Hi
Cassandra - 2.0.8
DataStax driver - 2.0.2
I have created a keyspace and a table with indexes as below.
CREATE TABLE services.messagepayload (
    partition_id uuid,
    messageid bigint,
    senttime timestamp,
    PRIMARY KEY (partition_id)
) WITH compression =
    { 'sstable_compression' : 'LZ4Com
Hi All
From time to time I see the warning below in the Cassandra logs.
WARN [Memtable] setting live ratio to minimum of 1.0 instead of
0.21084217381985554
I am not sure what the exact cause is, or how to eliminate it.
Any help is appreciated. Thanks.
--
Exactly, I am also getting this when the server moves from idle to high load.
Maybe the Cassandra experts can help us.
--
Thanks for the explanation.
--
It happens to me like this.
I have a 2-node Cassandra cluster with one column family. No super columns.
I start the server fresh (no commit logs, no SSTables, no saved caches;
basically nothing). Then the write process starts (not much write load). Here
is what my log says:
2012-02-06 13:06:13,598 INFO [Gossi
Hi All
I have 2 Cassandra data center (DC1=>1 node, DC2=>2 nodes).
XXX.XXX.XXX.XXX  DC2  RAC1  Up  Normal  44.3 KB   33.33%  0
YYY.YYY.YY.YYY   DC1  RAC1  Up  Normal  48.71 KB  33.33%  567137278201564074289847793
I have created the keyspace as below, but the result is the same: all the data
is getting replicated across the whole cluster instead of only to DC1.
create keyspace WSDC
with placement_strategy =
'org.apache.cassandra.locator.NetworkTopologyStrategy' and strategy_options
= {DC1:1,DC2:0};
--
Thanks for the reply. But I can see the data inserted in DC1 also appearing in
DC2, which means the data is getting replicated to DC2 as well.
How can I restrict this?
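For intuition, here is a minimal pure-Java sketch (not Cassandra's actual code; the DC and node names are made up) of what strategy_options like {DC1:1, DC2:0} are meant to tell NetworkTopologyStrategy: each data center independently contributes the replica count listed for it, so a count of 0 should place no replicas in DC2.

```java
import java.util.*;

public class NtsPlacementSketch {
    // Conceptual sketch of per-DC replica placement: each DC contributes
    // exactly the number of replicas its strategy_options entry requests.
    static List<String> place(Map<String, List<String>> nodesByDc,
                              Map<String, Integer> replicasPerDc) {
        List<String> replicas = new ArrayList<>();
        replicasPerDc.forEach((dc, n) ->
            replicas.addAll(nodesByDc.get(dc).subList(0, n)));
        return replicas;
    }

    public static void main(String[] args) {
        Map<String, List<String>> dcs = Map.of(
            "DC1", List.of("node-dc1-a"),
            "DC2", List.of("node-dc2-a", "node-dc2-b"));
        // {DC1:1, DC2:0} => one replica in DC1, none in DC2
        System.out.println(place(dcs, Map.of("DC1", 1, "DC2", 0)));
        // prints [node-dc1-a]
    }
}
```

If data still shows up on DC2 nodes with such options, it is worth checking which snitch is configured and that nodetool ring really reports the nodes in the data centers you expect.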
--
I have deployed a 2-node Cassandra 1.0.6 cluster in production and it has been
running for almost t weeks without any issue. But I can see lots of (more than
90) 0-byte tmp data and index files in the data directory.
So far this is not an issue for me, but I want to know why this happens. It
seems like these data/index tm
Hi
I have deployed Cassandra 1.0.6 across 2 data centers, with one data center
(DC1) having one node and the other (DC2) having two nodes. But when I do a
nodetool ring against one IP, the output says the DC1 node owns 0%. Please
see the output below.
# sh nodetool -h 10.XXX.XXX.XX ring
Addre
Hi
I got the exception below in the system.log after upgrading from 1.0.6 to
1.0.7. I am using the same configuration files that I used with 1.0.6.
2012-02-14 10:48:12,379 ERROR [AbstractCassandraDaemon] Fatal exception in
thread Thread[OptionalTasks:1,5,main]
java.lang.NullPointerEx
The issue seems related to https://issues.apache.org/jira/browse/CASSANDRA-3677,
or is exactly the same.
I am happy to create another ticket if this is different. Please confirm.
--
Hi
I am using Cassandra 1.0.6 and have one column family in my keyspace.
create column family TestCF
with comparator = UTF8Type
and column_metadata = [
{column_name : userid,
validation_class : BytesType,
index_name : userid_idx,
index_type : KEYS
Thanks Aaron for the information.
I increased the VM size from 1.4G to 2.4G. Please see my current CF
below.
Keyspace: WCache:
Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
Durable Writes: true
Options: [replication_factor:3]
Column Families:
ColumnFamily: W
Hi Experts
After getting an OOM error in production, I reduced
-XX:CMSInitiatingOccupancyFraction to .45 (from .75) and
flush_largest_memtables_at to .45 (from .75). But I am still getting a warning
message in production for the same Cassandra node regarding OOM. I also
reduced the concurrent compact
Due to a configuration issue, I haven't enabled the heap dump directory.
Is there another way to find the cause of this and identify possible
configuration changes?
Thanks.
--
Hi
Currently I am taking a daily snapshot of my keyspace in production and have
enabled incremental backups as well.
According to the documentation, the incremental backup option will create a
hard link in the backups folder when a new SSTable is flushed. A snapshot will
copy all the data/index/e
Tamar
Please don't jump into other users' discussions. If you want to ask about an
issue, please create a new thread.
Thanks.
--
the same keys with values?
Appreciate your reply on this.
Kind Regards
/Roshan
--
View this message in context:
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/deleted-tp7508823p7512499.html
Sent from the cassandra-u...@incubator.apache.org mailing list archive at
Nabble.com.
Many Thanks Aaron.
According to the DataStax restore documentation, they ask you to remove the
commitlogs before restoring ("Clear all files in /var/lib/cassandra/commitlog
(by default)").
In that case it is better not to follow this step in a server crash situation.
Thanks
/Roshan
--
http://www.datastax.com/docs/1.0/operations/backup_restore
--
Many thanks Aaron. I will post a support issue with them, but I will keep the
snapshot + incremental backups + commitlogs to recover from any failure
situation.
--
read operation on the production cluster, will that read operation go to the
DR as well? If so, can I disable that call?
My primary purpose is to keep the DR up to date without having production
communicate with the DR for reads.
Thanks.
/Roshan
--
Hi
Hope this helps:
http://www.datastax.com/docs/1.0/install/upgrading
http://www.datastax.com/docs/1.1/install/upgrading
Thanks.
--
_mb: 200
in_memory_compaction_limit_in_mb: 16 (from 64MB)
Key cache = 1
Row cache = 0
Could someone please help me with this.
Thanks
/Roshan
--
Hello
We are using Hector and it matches our use case perfectly.
https://github.com/hector-client/hector
--
Hi
I haven't visited this forum for a couple of months, and I want to upgrade our
current production Cassandra cluster (4 nodes, 1.0.11) to the latest 1.2.X
version.
Is this a straightforward upgrade, or something different?
Thanks & Regards
/Roshan
--
org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:581)
at
org.apache.cassandra.service.StorageProxy$5.runMayThrow(StorageProxy.java:555)
at
org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1643)
... 6 more
Any reason?
/Roshan
--
Thanks for your short answer. It explains a lot.
--
Thanks Aaron for the reply. I will need to upgrade 1.0.X to 1.1.X first.
--
I found this bug; it seems it is fixed. But in my situation I can still see
the decommissioned node in the JMX console's LoadMap attribute.
Might this be the reason why Hector says there are not enough replicas?
Experts, any thoughts?
Thanks.
--
Thanks. This is exactly the kind of expert advice I needed.
--
Hello
First, get some understanding of secondary indexes:
http://www.datastax.com/docs/1.1/ddl/indexes
Thanks.
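As a mental model only (a plain-Java sketch, not Cassandra's implementation; the column name and values are made up): a KEYS secondary index behaves roughly like a hidden map from each indexed value to the row keys that contain it, so an equality query can read the index instead of scanning every row.

```java
import java.util.*;

public class KeysIndexSketch {
    // rowKey -> (column -> value): the base data
    static Map<String, Map<String, String>> rows = new HashMap<>();
    // indexed value -> row keys holding it: what a KEYS index maintains
    static Map<String, Set<String>> useridIndex = new HashMap<>();

    static void insert(String rowKey, String userid) {
        rows.computeIfAbsent(rowKey, k -> new HashMap<>()).put("userid", userid);
        useridIndex.computeIfAbsent(userid, k -> new TreeSet<>()).add(rowKey);
    }

    // an indexed EQ query consults the index instead of every row
    static Set<String> whereUseridEquals(String userid) {
        return useridIndex.getOrDefault(userid, Collections.emptySet());
    }

    public static void main(String[] args) {
        insert("row1", "u42");
        insert("row2", "u42");
        insert("row3", "u7");
        System.out.println(whereUseridEquals("u42")); // prints [row1, row2]
    }
}
```

This also hints at the usual caveat from the docs above: the index is maintained on every write, and low-cardinality values make each index entry very wide.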
--
Is it possible to update the column-metadata of a column family definition
programmatically? If yes, can someone please point me to the right classes
to use?
Thanks.
On Thu, Nov 29, 2012 at 3:58 PM, Roshan Dawrani wrote:
> Hi,
>
> I have an existing column family that is cre
from cassandra.yaml. Is
it not possible to do the same with Cassandra / Hector 0.8.x?
Can someone shed some light, please?
Thanks.
--
Roshan
Blog: http://roshandawrani.wordpress.com/
Twitter: @roshandawrani <http://twitter.com/roshandawrani>
Skype: roshandawrani
ties"
Ours is a Grails application, and its log4j configuration is initialized in a
different way; I do not want to feed the embedded server a dummy
log4j.properties file just to satisfy the chain above. Is there any way I
can avoid it?
--
Roshan
Hi,
Quick check: is there a tentative date for release of Cassandra 0.8.5?
Thanks.
--
Roshan
On Wed, Sep 7, 2011 at 9:15 PM, Jeremy Hanna wrote:
> The voting started on Monday and is a 72 hour vote. So if there aren't any
> problems that people find, it should be released sometime Thursday (7
> September).
>
Great, thanks for the quick info. Looking forward to it.
--
tp://goo.gl/A5YmF (CHANGES.txt)
> [2]: http://goo.gl/J5Iix (NEWS.txt)
> [3]: https://issues.apache.org/jira/browse/CASSANDRA
>
--
Roshan
now[3] if you were to
> encounter
> any problem.
>
> Have fun!
>
>
> [1]: http://goo.gl/A5YmF (CHANGES.txt)
> [2]: http://goo.gl/J5Iix (NEWS.txt)
> [3]: https://issues.apache.org/jira/browse/CASSANDRA
>
--
Roshan
ll :-(
rgds,
Roshan
7 -ea not found" preventing cassandra from run the process README
>> file says it is suppose to start.
>>
>> Any help would be very appreciated.
>>
>> Thnx!
>>
>>
--
Roshan
Hi,
Do you have JAVA_HOME exported? If not, can you export it and retry?
Cheers.
On Tue, Sep 13, 2011 at 8:59 AM, Hernán Quevedo
wrote:
> Hi, Roshan. This is great support, amazing support; not used to it :)
>
> Thanks for the reply.
>
> Well I think java is installed correct
at
org.apache.cassandra.service.AbstractCassandraDaemon.activate(AbstractCassandraDaemon.java:341)
at
org.apache.cassandra.thrift.CassandraDaemon.main(CassandraDaemon.java:97)
-
--
Roshan
han Ellis wrote:
> Just remove the row cache files.
>
> On Tue, Sep 13, 2011 at 8:23 AM, Roshan Dawrani
> wrote:
> > Hi,
> > I am in the process of upgrading Cassandra to the recently released
> v0.8.5
> > and facing an issue.
> > We had two Cassandra en
On Tue, Sep 13, 2011 at 7:03 PM, Jonathan Ellis wrote:
> Just remove the row cache files.
>
Thanks a lot. The 0.8.5 Cassandra started just fine after getting rid of
those *KeyCache files.
--
Roshan
ently did a migration of a simple Cassandra DB from 0.7.0 to 0.8.5 and
found quite a few differences in the structure of "cassandra.yaml" - the
biggest one that affected us was that "cassandra.yaml" could no longer hold
the definition of a keyspace, which we used for embedded cassandr
n, is anyone aware of any Cassandra 0.8.5
configuration that can be tweaked to at least get the performance we were
getting with 0.7.2? Exactly after the upgrade, our test execution times have
gone up by at least 60-70%.
Some pointers please?
Thanks.
--
Roshan
. I am not sure what I will gain
there in terms of performance. I was hoping that truncating the data while
leaving the schema in place would be faster than that.
--
Roshan
l blog post here
about it: "Grails, Cassandra: Giving each test a clean DB to work
with<http://roshandawrani.wordpress.com/2011/09/30/grails-cassandra-giving-each-test-a-clean-db-to-work-with/>"
For
someone in a similar situation, it may present an alternative.
Cheers.
On Fri, Sep
/group/usergrid-user
>
> It's still a work-in-progress and there's a lot in there that still
> needs to be documented.
>
> Hope you'll check it out and find it interesting. Even if it's not
> something you'd have use of yourself, please forward this on to
ll the keys and then deleting them is the
> fastest way
Exactly what we also found out (and hence ditched "truncate" for DB
cleanup between tests):
http://roshandawrani.wordpress.com/2011/09/30/grails-cassandra-giving-each-test-a-clean-db-to-work-with/
It worked much better for us
se let automatic compaction do
> it's thing.
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 25/01/2012, at 12:47 PM, Roshan wrote:
>
> Thanks for the reply. Is the major compa
Thanks Peter for the replies.
Previously it was a typing mistake and it should be "getting". I checked
DC2 (which has replica count 0) and noticed that no SSTables were
created.
I use a Java Hector sample program to insert data into the keyspace. After I
insert a data item, I
1) Login to one of n
Hi Experts
Under massive write load, what would be the best value for Cassandra's
*flush_largest_memtables_at* setting? Yesterday I got an OOM exception on
one of our production Cassandra nodes under heavy write load, within a
5-minute window.
I changed the above setting to .45 and also changed t
ype: KEYS
Does anyone see why this must be happening? I have created many such column
families before and never run into this issue.
--
Roshan
http://roshandawrani.wordpress.com/
:32 PM, "Roshan Dawrani" wrote:
>
>> Hi,
>>
>> I use Cassandra 0.8.5 and am suddenly noticing some strange behavior. I
>> run a "create column family" command with some column meta-data and it runs
>> fine, but when I do "describe keyspace", i
he Cassandra schema changes done.
Why am I seeing the error only now with this particular CF?
Cheers,
Roshan
On Tue, Jun 12, 2012 at 8:56 PM, Jayesh Thakrar wrote:
> Subscribe
>
Attempt unsuccessful,
** Was expecting a voice-command in mp3 format **
if I have got it correctly, I can only
use ByteBufferSerializer in the Hector API slice query call and then do
further data-type specific conversion myself at the app level?
--
Roshan
ou suggest be used with the 2ndry index filtering? I
> would appreciate very much seeing an example.
> >
> > Does it make any performance difference whether that conversion is done
> by Hector/Cass or by the app?
> >
> > Thanks.
> > Roshan
> > --
- newest first?
I am using RangeSuperSlicesQuery to query the super columns and setting a
range on it with *reverse = true*, but that only sorts the data by super
column names.
How can I tell RangeSuperSlicesQuery to get the sub-columns also in reverse
order?
Thanks.
--
Roshan
ays come in the same order - oldest to newest
RangeSuperSlicesQuery#setRange (null, null, reverse, Integer.MAX_VALUE) //
reverse = true | false
====
Anything I am doing wrong here?
--
Roshan
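A rough illustration of the behavior being asked about (a plain-Java sketch, not Hector or Cassandra internals): sub-columns inside a single super column are stored in comparator order, and a reversed slice reads that same stored order back-to-front rather than re-sorting anything; across super columns, reverse only flips the order of the super column names themselves.

```java
import java.util.*;

public class ReversedSliceSketch {
    public static void main(String[] args) {
        // Sub-columns live in comparator (ascending) order inside a super column.
        TreeMap<Long, String> subColumns = new TreeMap<>();
        subColumns.put(1L, "oldest");
        subColumns.put(2L, "middle");
        subColumns.put(3L, "newest");

        // forward slice: comparator order
        System.out.println(new ArrayList<>(subColumns.keySet()));          // [1, 2, 3]
        // reversed slice: same stored data, read from the other end
        System.out.println(new ArrayList<>(subColumns.descendingKeySet())); // [3, 2, 1]
    }
}
```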
Hi,
Is it correct that mutations that delete subcolumns of a super column can't
be batched - unlike inserts and deletes of normal columns?
If yes, could someone share why that is so?
Thanks.
--
Roshan
" : "Val2", "Col3" : "Val3"]
TimeUUIDKeyB: ["Col1" : "Val1", "Col2" : "Val2", "Col3" : "Val3"]
TimeUUIDKeyY: ["Col1" : "Val1", "Col2" : "Val2", "
you are slicing within a single supercolumn,
> the reverse parameter will affect the order of subcolumns.
>
> On Sun, Dec 26, 2010 at 6:11 AM, Roshan Dawrani
> wrote:
> > Hi Ran,
> > I am not doing it the YAML way.
Which "No"?
1) No, it is "not" correct that they can't be batched, or
2) No, they can't be batched
:-)
On Mon, Dec 27, 2010 at 10:04 AM, Jonathan Ellis wrote:
> On Sun, Dec 26, 2010 at 9:14 AM, Roshan Dawrani
> wrote:
> > Is it correct
, Dec 27, 2010 at 10:07 AM, Roshan Dawrani wrote:
> Which "No"?
>
> 1) No, it is "not" correct correct that they can't be batched, or
>
> 1) No, they can't be batched
>
> :-)
>
>
>
> On Mon, Dec 27, 2010 at 10:04 AM, Jonathan Ellis w
This silly question is withdrawn with apologies. There couldn't be
anything easier to handle at the application level.
rgds,
Roshan
On Mon, Dec 27, 2010 at 9:04 AM, Roshan Dawrani wrote:
> Hi,
> I have the following 2 column families - one being used to store full rows
> for
ren
>
>
> On Mon, Dec 27, 2010 at 6:12 AM, Roshan Dawrani
> wrote:
>
>> This silly question is retrieved back with apology. There couldn't be
>> anything easier to handle at the application level.
>>
>> rgds,
>> Roshan
>>
>>
>&g
ed to use OPP to perform range scans. Look for Range Queries on
>> http://wiki.apache.org/cassandra/DataModel
>>
>> Look at this to understand why range queries are not supported for
>> RamdomPartitioner (https://issues.apache.org/jira/browse/CASSANDRA-1750)
>>
>> T
Hi,
Is there a GUI client for a Cassandra database for a Windows-based setup?
I tried the one available at http://code.google.com/p/cassandra-gui/, but it
always fails to connect with "error: Cannot read. Remote site has closed.
Tried to read 4 bytes, but only got 0 bytes."
--
R
ank you for bringing
> this up.
>
> On Sun, Dec 26, 2010 at 10:47 PM, Roshan Dawrani
> wrote:
> > There doesn't really seem to be an inherent limitation in batching
> > sub-column deletes.
> >
> > Pelops seem to be doing it -
> >
> http://pelops.google
t; Thanks,
> Naren
>
>
> On Mon, Dec 27, 2010 at 7:37 PM, Roshan Dawrani
> wrote:
>
>> Hi,
>>
>> Is there a GUI client for a Cassandra database for a Windows based setup?
>>
>> I tried the one available at http://code.google.com/p/cassandra-gui/, but
&
that has not helped.
Can anyone please suggest where I should change the cassandra configuration
to avoid the above error?
--
Roshan
Yes, that was it. Thanks a lot.
I changed the JMX port in "cassandra.bat", which is where I hadn't looked
earlier.
On Fri, Dec 31, 2010 at 5:53 AM, Jonathan Ellis wrote:
> probably the JMX port which defaults to 8080.
>
> On Thu, Dec 30, 2010 at 6:16 PM, Roshan Dawrani
>
are different */
System.out.println(otherUUID);
===
--
Roshan
equivalent of the previous one in terms
of its timestamp portion - i.e., I should be able to give this U2 and filter
the data from a column family - and it should be the same as if I had used the
original UUID U1.
Does it make any more sense than before? Is there any way I can do that?
rgds,
Roshan
On Tue, Jan
long" timestamp
than comparing UUIDs. Then for the "long" timestamp chosen by the client, I
need to re-create the equivalent time UUID and go and filter the data from
Cassandra database.
--
Roshan
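The round trip described above (a 60-bit, 100ns-epoch timestamp back into an equivalent version-1 UUID) can be sketched with plain java.util.UUID, independent of Hector's TimeUUIDUtils; the clock-sequence/node value below is an arbitrary placeholder, so the result is "equivalent" only in its timestamp portion.

```java
import java.util.UUID;

public class TimeUuidSketch {
    // Pack a 60-bit timestamp into the version-1 UUID field layout:
    // time_low (32) | time_mid (16) | version (4) | time_hi (12) in the MSBs.
    static UUID fromTimestamp(long ts100ns, long clockSeqAndNode) {
        long msb = ((ts100ns & 0xFFFFFFFFL) << 32)       // time_low
                 | (((ts100ns >>> 32) & 0xFFFFL) << 16)  // time_mid
                 | ((ts100ns >>> 48) & 0x0FFFL)          // time_hi
                 | 0x1000L;                              // version 1
        return new UUID(msb, clockSeqAndNode);
    }

    public static void main(String[] args) {
        long ts = 0x1ABCDEF12345678L;            // arbitrary 60-bit example value
        // 0x8000... sets the IETF variant bits; the rest is a placeholder
        UUID u = fromTimestamp(ts, 0x8000000000000000L);
        System.out.println(u.version() + " " + (u.timestamp() == ts)); // 1 true
    }
}
```

This supports the approach discussed in the thread: the client can choose and compare plain long timestamps, and only materialize an equivalent time UUID when it needs to filter data.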
Any suggestion on how I can achieve the equivalent using Hector library's
TimeUUIDUtils?
On Wed, Jan 5, 2011 at 7:21 AM, Roshan Dawrani wrote:
> Hi Victor / Patricio,
>
> I have been using Hector library's TimeUUIDUtils. I also just looked at
> TimeUUIDUtilsTest but didn
== t1);
}
}
On Wed, Jan 5, 2011 at 8:15 AM, Roshan Dawrani wrote:
> If I use *com.eaio.uuid.UUID* directly, then I am able to do what I need
> (attached a Java program for the same), but unfortunately I need to
Hi Patricio,
Thanks for your comment. Replying inline.
2011/1/5 Patricio Echagüe
> Roshan, just a comment in your solution. The time returned is not a simple
> long. It also contains some bits indicating the version.
I don't think so. The version bits from the most significant 64
Hi Patricio,
Some thoughts inline.
2011/1/6 Patricio Echagüe
> Roshan, the first 64 bits does contain the version. The method
> UUID.timestamp() indeed takes it out before returning. You are right in that
> point. I based my comment on the UUID spec.
>
I know 64 bits have the
On Fri, Jan 7, 2011 at 11:39 AM, Arijit Mukherjee wrote:
> Hi
>
> I've a quick question about supercolumns.
> EventRecord = {
>eventKey2: {
>e2-ts1: {set of columns},
>e2-ts2: {set of columns},
>...
>e2-tsn: {set of columns}
>}
>
> }
>
> If I want to
On Fri, Jan 7, 2011 at 12:12 PM, Arijit Mukherjee wrote:
> Thank you. And is it similar if I want to search a subcolumn within a
> given supercolumn? I mean I have the supercolumn key and the subcolumn
> key - can I fetch the particular subcolumn?
>
> Can you share a small piece of example code fo
ID. Is this something to do with the piece of code in the FAQ?
>
> Arijit
>
>
> --
> "And when the night is cloudy,
> There is still a light that shines on me,
> Shine on until tomorrow, let it be."
>
--
Roshan