What consistency level were the writes?
-----Original Message-----
From: "Robert Wille"
Sent: 8/20/2015 18:25
To: "user@cassandra.apache.org"
Subject: Written data is lost and no exception thrown back to the client
I wrote a data migration application which I was testing, and I pushed it to
Hi Eric, I agree with you.
On Wed, Aug 1, 2018 at 11:15 PM, Eric Evans wrote:
> On Tue, Jul 31, 2018 at 11:42 PM James Tobin wrote:
>> Hello, I'm working with an employer that is looking to hire (for their
>> Montreal office) a permanent development manager that has extensive
>> hands-on Java co
maybe print out the value into the logfile, and that should lead to some
clue about where the problem might be?
On Tue, May 7, 2019 at 4:58 PM Paul Chandler wrote:
>
> Roy, We spent a long time trying to fix it, but didn’t find a solution, it was
> a test cluster, so we ended up rebuilding the cluster, r
My reading of the tick-tock cycle is that we've moved from a stable train that
receives mostly bug fixes until the next major stable, to one where every odd
minor version is bug-fix-only... likely mostly for the previous even one. The
goal being a relatively continuously stable code base in odd mi
> Regards,
>
> Carlos Juzarte Rolo
> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>
> Pythian - Love your data
>
> rolo@pythian | Twitter: @cjrolo | Linkedin: linkedin.com/in/carlosjuzarterolo
> Mobile: +351 91 891 81 00 | Tel: +1 613 565 8696 x1
nt to be caught because
replication/repair is silently failing. I noticed that there is always a "some
repair failed" amongst the repair output, but that is so completely unhelpful
and has always been present.
Thanks,
Jason
compact error.
Thanks,
Jason
From: Romain Hardouin
To: "user@cassandra.apache.org" ; Jason Kania
Sent: Wednesday, June 8, 2016 8:30 AM
Subject: Re: Nodetool repair inconsistencies
Hi Jason,
It's difficult for the community to help you if you don't share the error
letely eliminate the sstables in a directory on one machine, run 'nodetool
repair' followed by 'nodetool compact', that directory remains empty. My
understanding has been that these equivalently named directories should contain
roughly the same amount of content.
Thanks,
ill
persist; I wiped the node and it happened again, then I changed the hardware
(disk and memory). Things went well.
hth
jason
On Fri, Aug 12, 2016 at 9:20 AM, Alaa Zubaidi (PDF)
wrote:
> Hi,
>
> I have a 16 Node cluster, Cassandra 2.2.1 on Windows, local installation
> (NOT on the cloud)
>
be removed.
Thanks,
Jason
these unused directories?
Thanks,
Jason Kania
From: Vladimir Yudovin
To: user@cassandra.apache.org; Jason Kania
Sent: Saturday, October 8, 2016 2:05 PM
Subject: Re: Understanding cassandra data directory contents
Each table has a unique id (suffix). If you drop and then recreate
this command:
SELECT keyspace_name, table_name, id FROM system_schema.tables ;
Can someone indicate why some would have suffixes and others not?
Thanks,
Jason
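The directory-per-table layout being discussed can be poked at with plain shell. Below is an illustrative sketch; the keyspace name, table name, and id suffixes are all invented for the example, not taken from the thread:

```shell
# Simulate the data layout: each table gets a directory named <table>-<id>.
# The names and ids below are made up for illustration.
mkdir -p data/sensordb/readings-1a2b3c data/sensordb/readings-9f8e7d
for d in data/sensordb/*/; do
  base=$(basename "$d")
  echo "table=${base%-*} id=${base##*-}"
done
```

After a DROP plus CREATE the table gets a new id, so a `<table>-<oldid>` directory can linger on disk; comparing these suffixes against the ids returned by the `system_schema.tables` query above shows which directories are orphaned.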
on the node 192.168.2.100, did you run repair after its status is UN?
On Wed, Jun 24, 2015 at 2:46 AM, Jean Tremblay <
jean.tremb...@zen-innovations.com> wrote:
> Dear Alain,
>
> Thank you for your reply.
>
> Ok, yes I did not drain. The cluster was loaded with tons of records,
> and no new re
Same here too, on branch 1.1, and I have not seen any high CPU usage.
On Wed, Jul 1, 2015 at 2:52 PM, John Wong wrote:
> Which version are you running and what's your kernel version? We are still
> running on 1.2 branch but we have not seen any high cpu usage yet...
>
> On Tue, Jun 30, 2015 at 11:1
nodetool cfstats?
On Wed, Jul 1, 2015 at 8:08 PM, Neha Trivedi wrote:
> Hey..
> nodetool compactionstats
> pending tasks: 0
>
> no pending tasks.
>
> Dont have opscenter. how do I monitor sstables?
>
>
> On Wed, Jul 1, 2015 at 4:28 PM, Alain RODRIGUEZ
> wrote:
>
>> You also might want to check
you should check the network connectivity for this node and also its system
load average. Is that a typo or literally what it is: Cassandra 1.2.15.1 and
Java 6 update 85?
On Thu, Jul 2, 2015 at 12:59 AM, Shashi Yachavaram
wrote:
> We have a 28 node cluster, out of which only one node is expe
3. How do we rebuild System keyspace?
wipe this node and start it all over.
hth
jason
On Tue, Jul 7, 2015 at 12:16 AM, Shashi Yachavaram
wrote:
> When we reboot the problematic node, we see the following errors in
> system.log.
>
> 1. Does this mean hints column family is corrupt
just a guess, gc?
On Mon, Jul 20, 2015 at 3:15 PM, Marcin Pietraszek
wrote:
> Hello!
>
> I've noticed a strange CPU utilisation patterns on machines in our
> cluster. After C* daemon restart it behaves in a normal way, after a
> few weeks since a restart CPU usage starts to raise. Currently on o
Subject: Validation of Data after data migration from RDBMS to Cassandra
>
>
>
> Hi,
>
>
>
> We have to migrate the data from Oracle/mysql to Cassandra.
>
> I wanted to understand if we have any tool/utility which can help in
> validating the data after the data migration to Cassandra.
>
>
>
> Thanks
> Surbhi
>
--
Jason Kushmaul | 517.899.7852
Engineering Manager
I'm trying to run nodetool from one node, connecting to another. I
can successfully connect to the majority of nodes in my ring, but two
nodes throw the following error.
nodetool: Failed to connect to ':7199' NoSuchObjectException: 'no
such object in table'.
Any idea why this is happening? Misc
un.management.jmxremote.password.file=/etc/cassandra/jmxremote.password"
fi
On Tue, Aug 25, 2015 at 4:35 PM, Michael Shuler wrote:
> On 08/25/2015 02:19 PM, Jason Lewis wrote:
>>
>> I'm trying to run nodetool from one node, connecting to another. I
>> can successful
After enabling that option, I'm seeing errors like this on the node I
can't connect to.
Sep 04, 2015 2:35:48 AM sun.rmi.server.UnicastServerRef logCallException
FINE: RMI TCP Connection(4)-127.0.0.1: [127.0.0.1] exception:
javax.management.InstanceNotFoundException:
org.apache.cassandra.metrics:t
I figured this one out. As it turns out, the nodes that I couldn't
connect to, had the hostname set to 127.0.1.1. The listen IP is *not*
that IP.
Thanks for the logging tip, it helped track it down.
On Thu, Sep 3, 2015 at 10:43 PM, Jason Lewis wrote:
> After enabling that option, I&
I should probably add.. /etc/hosts had the hostname set to 127.0.1.1.
On Thu, Sep 3, 2015 at 11:00 PM, Jason Lewis wrote:
> I figured this one out. As it turns out, the nodes that I couldn't
> connect to, had the hostname set to 127.0.1.1. The listen IP is *not*
> that IP.
>
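The failure mode described here (the hostname mapped to 127.0.1.1 in /etc/hosts, a common Debian-style install default) can be spotted with a quick check; this is a minimal sketch, assuming only that a loopback result means the mapping is wrong:

```shell
# Check whether this host's name resolves to a loopback address rather than
# the interface Cassandra actually listens on (e.g. a 127.0.1.1 mapping).
host_ip=$(getent hosts "$(hostname)" | awk '{print $1}')
case "$host_ip" in
  127.*) echo "hostname resolves to loopback ($host_ip); check /etc/hosts" ;;
  *)     echo "hostname resolves to $host_ip" ;;
esac
```

RMI (which nodetool uses under the hood) binds according to what the hostname resolves to, which is why the connection failed even though Cassandra itself was listening on the right address.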
it.
> I have to load data from SQL Server into Cassandra and I am completely new
> to Cassandra and all of the posts I seem to be able to find are demos
> about loading from MySQL into Cassandra.
> Any, any help would be extremely appreciated!
> Thank you very much!
> Raluca
>
>
>
>
>
>
server
and if all nodes need to be Snapshotted, and have the snapshots and tokens
backed up.
Can anyone share their DR setups and maybe an overview of how you would recover
if you lost your entire cluster?
Thanks!
Jason Turner
HostedOps Engineer | Hosted Operations | 503.416.5080 (d
Hi Guys,
I've configured internode SSL and set it to be used between datacenters only.
Is there a way in the logs to verify SSL is operating between nodes in
different DCs or do I need to break out tcpdump?
Thank you in advance.
-J
Sent via iPhone
10.129.1.112 | 726
Executing seq scan across 3 sstables for [min(-9223372036854775808), min(-9223372036854775808)] [SharedPool-Worker-2] | 2015-10-22 16:06:56.045000 | 10.129.1.112 | 1423
Read 0 live and 0 tombstone cells [SharedPool-Worker-2] | 2015-10-22 16:06:56.045000 | 10.129.1.112 | 1779
Read 1 live and 0 tombstone cells [SharedPool-Worker-2] | 2015-10-22 16:06:56.045000 | 10.129.1.112 | 1850
Scanned 2 rows and matched 2 [SharedPool-Worker-2] | 2015-10-22 16:06:56.045000 | 10.129.1.112 | 1881
Submitted 1 concurrent range requests covering 257 ranges [SharedPool-Worker-1] | 2015-10-22 16:06:56.046000 | 10.129.1.112 | 5390
Request complete | 2015-10-22 16:06:56.045807 | 10.129.1.112 | 6807
Can anyone suggest why my data isn't being returned or where to continue
digging?
Thank you!
Jason
Because when you use keytool it stores the generated private key in the
keystore and tags it waiting for the certificate. Then when you import the
issued certificate it is paired in the same record with the key. It's a real
pain to get OpenSSL encoded private keys into a keytool keystore. Don't
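One workable route for the pain described above is to wrap the OpenSSL-generated key and certificate into a PKCS#12 bundle, which keytool can then import as a single keystore entry. A sketch with throwaway self-signed material; all filenames, the alias, the password, and the CN are invented for illustration:

```shell
# Generate a throwaway key + self-signed cert, then bundle them as PKCS#12.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=node1.example.com" -keyout node.key -out node.crt 2>/dev/null
openssl pkcs12 -export -in node.crt -inkey node.key \
    -name node1 -passout pass:changeit -out node.p12
# keytool (from the JDK) can then import the whole bundle in one step:
# keytool -importkeystore -srckeystore node.p12 -srcstoretype PKCS12 \
#         -srcstorepass changeit -destkeystore keystore.jks \
#         -deststorepass changeit
ls -l node.p12
```

Going through PKCS#12 keeps the key and its certificate paired in one record, which is exactly what keytool expects and what is hard to achieve by importing a bare PEM key.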
I'm getting too many open files errors and I'm wondering what the
cause may be.
lsof -n | grep java shows 1.4M files:
~90k are inodes
~70k are pipes
~500k are cassandra services in /usr
~700k are the data files.
What might be causing so many files to be open?
jas
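When attributing a count like this, the process's own fd table is usually more trustworthy than raw lsof totals, since lsof output can multiply entries across threads and memory-mapped regions on a big Java process. A minimal Linux sketch, using the current shell's PID as a stand-in for the Cassandra PID:

```shell
# Count open file descriptors for one process via /proc (Linux).
pid=$$   # substitute Cassandra's PID here, e.g. from pgrep
echo "open fds: $(ls /proc/$pid/fd | wc -l)"
# The per-process limit to compare against:
ulimit -n
```

If the /proc count is far below what lsof reports, the "1.4M files" is mostly duplicate accounting rather than real descriptor exhaustion; if it is genuinely near the ulimit, the usual suspects are un-compacted tiny sstables or leaked sockets.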
> On Fri, Nov 6, 2015 at 12:49 PM, Bryan Cheng
>> wrote:
>>
>>> Is your compaction progressing as expected? If not, this may cause an
>>> excessive number of tiny db files. Had a node refuse to start recently
>>> because of this, had to temporarily
keyspace
replication.
hth,
jason
On Fri, Nov 13, 2015 at 2:35 PM, Shuo Chen wrote:
> Hi,
>
> We have a small cassandra cluster with 4 nodes for production. All the
> nodes have similar hardware configuration and similar data load. The C*
> version is 1.0.7 (pretty old)
>
>
any tools to actually repair the data rather than copy it from a
replica elsewhere because with the JVM error, the database JVMs are not staying
up.
Suggestions would be appreciated.
Thanks,
Jason
Thanks for the tool reference. That will help. The second part of my question
was whether there is a way to actually perform data repair aside from copying
data from a replica.
Thanks,
Jason
From: Carlos Alonso
To: user@cassandra.apache.org; Jason Kania
Sent: Wednesday, February 24
.
Thanks,
Jason
indicates that the replication factor is 1:
root@bull:~# nodetool repair
[2016-02-27 18:04:55,083] Nothing to repair for keyspace 'sensordb'
Thanks,
Jason
Hi,
I just reran the command and collected following. Any suggestions would be
appreciated.
Thanks,
Jason
from 192.168.10.8
ERROR [STREAM-IN-/192.168.10.10] 2016-02-27 20:37:53,857 StreamSession.java:635
- [Stream #c9868f90-ddbb-11e5-80c0-89f591237aca] Remote peer 192.168.10.10
failed stream
I raised
https://issues.apache.org/jira/browse/CASSANDRA-11273 with these details and
the workaround that I found.
From: Paulo Motta
To: "user@cassandra.apache.org" ; Jason Kania
Sent: Sunday, February 28, 2016 10:01 PM
Subject: Re: How to complete bootstrap with except
en looking
around, but haven't found any references beyond the initial suggestion to add
some sort of shard id to the partition key to handle wide rows.
Thanks,
Jason
ld be appreciated.
Thanks,
Jason
From: Jonathan Haddad
To: user@cassandra.apache.org; Jason Kania
Sent: Thursday, March 10, 2016 11:21 AM
Subject: Re: Strategy for dividing wide rows beyond just adding to the
partition key
Have you considered making the date (or week, or whatever, some
list of partition keys for the
table because we cannot reduce the scope with a where clause.
If there is a recommended pattern that solves this, we haven't come across it.
I hope makes the problem clearer.
Thanks,
Jason
From: Jack Krupansky
To: user@cassandra.apache.org; Jason Kan
rectly supply the timeShard portion of our partition
key.
I appreciate your input,
Thanks,
Jason
From: Jack Krupansky
To: "user@cassandra.apache.org"
Sent: Friday, March 11, 2016 4:45 PM
Subject: Re: Strategy for dividing wide rows beyond just adding to the
partition key
IMIT 5000
Splitting the bulk content out of the main table is something we considered too
but we didn't find any detail on whether that would solve our timeout problem.
If there is a reference for using this approach, it would be of interest to us
to avoid any assumptions on how we would app
en we don't know where to start and end.
Thanks,
Jason
From: Carlos Alonso
To: "user@cassandra.apache.org"
Sent: Friday, March 11, 2016 7:24 PM
Subject: Re: Strategy for dividing wide rows beyond just adding to the
partition key
Hi Jason,
If I understand correctly you h
, these queries focus on raw, bulk retrieval of sensor data readings, but
do you have reading-based queries, such as range of an actual sensor reading?
-- Jack Krupansky
On Fri, Mar 11, 2016 at 7:08 PM, Jason Kania wrote:
The 5000 readings mentioned would be against a single sensor
on 192.168.10.9 Unfortunately, attempts to
compact on 192.168.10.9 only give the following error without any stack trace
detail and are not fixed with repair.
root@cutthroat:/usr/local/bin/analyzer/bin# nodetool compact
error: null
-- StackTrace --
java.lang.ArrayIndexOutOfBoundsException
Thanks for the response.
All nodes are using NTP.
Thanks,
Jason
From: Kai Wang
To: user@cassandra.apache.org; Jason Kania
Sent: Wednesday, March 30, 2016 10:59 AM
Subject: Re: Inconsistent query results and node state
Do you have NTP setup on all nodes?
On Tue, Mar 29, 2016 at
result is the epoch 0 value.
Thoughts on how to proceed?
Thanks,
Jason
From: Tyler Hobbs
To: user@cassandra.apache.org
Sent: Wednesday, March 30, 2016 11:31 AM
Subject: Re: Inconsistent query results and node state
org.apache.cassandra.service.DigestMismatchException: Mismatch for
connections.
> See https://issues.apache.org/jira/browse/CASSANDRA-9590
>
>> On Wed, Apr 20, 2016 at 8:51 AM, Jason J. W. Williams
>> wrote:
>> Hi Ben,
>>
>> Thanks for confirming what I saw occur. The Datastax drivers don't play very
>> nicely with Twisted Pyth
Hello friends,
I'm getting a:
ERROR 22:50:29,695 Fatal exception in thread Thread[SSTableBatchOpen:2,5,main]
java.lang.OutOfMemoryError: Java heap space
error when I start Cassandra. This node was running fine and after
some server work/upgrades it started throwing this error when I start
the Ca
what I am looking for that would be interesting. Can you help
me out with that?
Jason
On Sat, Jul 7, 2012 at 8:20 PM, Tyler Hobbs wrote:
> The heap dump is only 47mb, so something strange is going on. Is there
> anything interesting in the heap dump?
>
>
Hi
I encounter the high CPU problem on Cassandra 1.0.3; it happens with both
size-tiered and leveled compaction, 6G heap, 64-bit Oracle Java. For normal
traffic, Cassandra will use 15% CPU.
But every half hour, Cassandra will use almost 100% of total CPU (SUSE,
12 cores).
And here is the top inform
Thanks Jonathan that did the trick. I deleted the Statistics.db files
for the offending column family and was able to get Cassandra to
start.
Thank you,
Jason
Worker.run() @bci=28, line=908
(Compiled frame)
- java.lang.Thread.run() @bci=11, line=662 (Interpreted frame)
BRs
//Jason
2012/7/11 Jason Tang
> Hi
>
> I encounter the High CPU problem, Cassandra 1.0.3, happened on both
> sized and leveled compaction, 6G heap, 64bit Oracle java
Hi
I have a 4-node Cassandra cluster, the replication factor is 3, and the write
consistency level is ALL, so each write is supposed to write to at least 3
nodes, right?
I checked the schema and found the parameter "Replicate on write: false";
what does this parameter mean?
How does it impact the writ
Hi
I have not been using Cassandra for long, and I also have problems
with consistency.
Here is some thinking.
If you have Write:ANY / Read:ONE, you will have consistency problems; if
you want repair, check your schema and check the parameter "Read repair
chance: "
http://wiki.apache.o
g of
> QUORUM.
>
>
> *From:* Jason Tang [mailto:ares.t...@gmail.com]
> *Sent:* Tuesday, July 17, 2012 8:24 PM
> *To:* user@cassandra.apache.org
> *Subject:* Re: Replication factor - Consistency Questions
>
>
> Hi
>
>
> I a
Hi
For some consistency problem, we cannot use delete directly to delete
one row, so instead we use a TTL for each column of the row.
We are using Cassandra as the central storage of a stateful system.
All requests will be stored in Cassandra and marked with status NEW, and then
we change it
setMaxCompactionThreshold(0)
setMinCompactionThreshold(0)
2012/7/27 Илья Шипицин
> Hello!
>
> if we are dealing with append-only data model, so what if I disable
> compaction on certain CF ?
> any side effect ?
>
> can I do it with
>
> "update column family with compaction_strategy = null "
ay to avoid the hex I'm using for the key?
I tried the following
ASSUME KEYS ARE text;
but it gave this error:
Improper assume command.
I'm thinking I've missed something here and hope a kind soul would
point me to a solution.
Cheers,
Jason
node
* My main question is how to get the nodes to share all the data since
I want a replication factor of 2 (so both nodes have all the data) but
that won't work while there is only one server. Should I bring up 2
extra servers instead of just one?
Thanks,
Jason
it back up, but I'm worried it might
not come back if it gets the same errors.
Also as a random question: is there any way to 'merge' historical schema
changes together?
Thanks,
Jason
is the case here, as getVersion is blank. Don't
all nodes bootstrap with a blank schema version? Why would the Migration
logic expect the lastVersion to match the bootstrapping nodes getVersion?
On Wednesday, September 5, 2012 4:29:34 AM UTC-7, Jason Harvey wrote:
>
> Hey folks,
>
I attempted to manually load the Schema sstables onto the new node and
bootstrap it. Unfortunately when doing so, the new node believed it was
already bootstrapped, and just joined the ring with zero data.
To fix (read: hack) that, I removed the following logic from
StorageService.java:523:
andra-server-throws-java-lang-assertionerror-decoratedkey-decorated
Jason
On Mon, Sep 10, 2012 at 11:29 PM, André Cruz wrote:
> I'm also having "AssertionError"s.
>
> ERROR [ReadStage:51687] 2012-09-10 14:33:54,211 AbstractCassandraDaemon.java
> (line 134) Exception in
for any comments and insight
Regards,
Jason
java
(line 373) Finished hinted handoff of 0 rows to endpoint /node-3
DEBUG [RPC-Thread:577363] 2012-09-14 12:52:12,484 StorageProxy.java (line
212) Write timeout java.util.concurrent.TimeoutException for one (or more)
of:
DEBUG [RPC-Thread:577363] 2012-09-14 12:52:12,484 CassandraServer.java
(line 648) ... timed out
> Hop
Which version is that? In version 1.1.2, nodetool does take the column
family:
setcachecapacity
- Set the key and row cache capacities of a given column family
On Wed, Sep 19, 2012 at 2:15 AM, rohit reddy wrote:
> Hi,
>
> Is it possible to enable row cache per column family after the colum
>
> I forgot to ask, what consistency level are you using for writes ?
> Have you checked the disk health on node-3 ?
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 19/09/2012, at 1:10 AM, J
Hi, when heap usage goes above 70%, you should be able to see in the log
many flushes, or the row cache size being reduced. Did you restart the
cassandra daemon on the node that threw the OOM?
On Thu, Sep 20, 2012 at 9:11 PM, Vanger wrote:
> Hello,
> We are trying to add new nodes to
what jvm version?
On Thu, Oct 11, 2012 at 2:04 PM, Daniel Woo wrote:
> Hi guys,
>
> I am running a mini cluster with 6 nodes; recently we see very frequent
> ParNewGC on two nodes. It takes 200 - 800 ms on average, sometimes it takes
> 5 seconds. You know, the ParNewGC is a stop-the-world GC and ou
ine included
for context
What is this telling me? Is my network dropping for less than a
second? Are my nodes really dead and then up? Can someone shed some
light on this for me?
cheers,
Jason
check the system load on 10.50.10.21.
On Tue, Oct 23, 2012 at 10:41 AM, Jason Hill wrote:
> Hello,
>
> I'm on version 1.0.11.
>
> I'm seeing this in my system log with occasional frequency:
>
> INFO [GossipTasks:1] 2012-10-23 02:26:34,449 Gossipe
ginal post, there are logs
from 2 different nodes: 10.21 and 10.25. They are each reporting that
the other is DOWN/UP at the same time. Would that still point me to
the suggestions you made? I don't see errors in the logs, but I do see
a lot of dropped mutations and reads. Any correlation?
than
> Cheers
>
> -
> Aaron Morton
> Freelance Developer
> @aaronmorton
> http://www.thelastpickle.com
>
> On 24/10/2012, at 1:27 PM, Jason Hill wrote:
>
> thanks for the replies.
>
> I'll check the load on the node that is reported as DOWN/U
maybe enable debug in log4j-server.properties and go through the log
to see what actually happens?
On Tue, Oct 30, 2012 at 7:31 PM, Alain RODRIGUEZ wrote:
> Hi,
>
> I have an issue with counters, yesterday I had a lot of ununderstandable
> reads/sec on one server. I finally restart Cassand
should it be --cql3 ?
http://www.datastax.com/docs/1.1/dml/using_cql#start-cql3
On Wed, Nov 7, 2012 at 11:16 PM, Tamar Fraenkel wrote:
> Hi!
> I installed new cluster using DataStax AMI with --release 1.0.11, so I
> have cassandra 1.0.11 installed.
> Nodes have python-cql 1.0.10-1 and python2.6
option -XX:+UseLargePages ?
On Sat, Nov 10, 2012 at 2:34 AM, Morantus, James (PCLN-NW) <
james.moran...@priceline.com> wrote:
> Hi,
>
>
> Does anyone know if DataStax/Cassandra recommends using HugeTLB on a
> cluster?
>
>
> Thank you
>
>
> *James Morantus*
>
Will the existence of sstable X have an impact on the system or cluster?
When the compaction threshold is reached, sstable X and sstable Y will be
compacted; it's more the system's responsibility than human intervention.
On Mon, Nov 12, 2012 at 12:09 PM, B. Todd Burruss wrote:
> if i stop
If you have something like 10k rows and get 100 columns per row, this is going
to choke the cluster... been there. If you really still have to use
multiget_slice, try slicing your data before calling multiget_slice, and check
whether your cluster's pending read requests increase... try to slow down the
client sending request t
It should be in the trunk, check it
https://github.com/apache/cassandra/blob/trunk/bin/cassandra-shuffle
On Thu, Jan 10, 2013 at 1:18 AM, Manu Zhang wrote:
> Is cassandra-shuffle command in the trunk? Or it is only included in the
> Debian package? I don't find it in the trunk.
>
>
> On Sat, No
always check NEWS.txt for instance for cassandra 1.1.3 you need to
run nodetool upgradesstables if your cf has counter.
On Wed, Jan 16, 2013 at 11:58 PM, Mike wrote:
> Hello,
>
> We are looking to upgrade our Cassandra cluster from 1.1.2 -> 1.1.8 (or
> possibly 1.1.9 depending on timing). It i
equests for 1 second before reading up to the moment that
> the request was received.
>
> In either of these approaches you can tune the time offset based on how
> closely synchronized you believe you can keep your clocks. The tradeoff of
> course, will be increased latency.
>
>
hat latter case,
> it's a convenience and you can force a timestamp client side if you really
> wish. In other words, Cassandra dependency on time synchronization is not a
> strong one even in that case. But again, that doesn't seem at all to be the
> problem you are trying to solve.
The reason for multiple keys (and, by extension, multiple columns) is to better
distribute the write/read load across the cluster as keys will (hopefully) be
distributed on different nodes. This helps to avoid hot spots.
Hope this helps,
-Jason Brown
Netflix
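The load-spreading effect described here can be illustrated with a toy hash ring. Real Cassandra applies Murmur3 to the partition key; `cksum` below is purely a stand-in, and the four-node ring and key names are invented for the sketch:

```shell
# Toy illustration: different keys hash to different tokens, so writes land
# on different nodes. cksum stands in for the Murmur3 partitioner; 4 nodes.
for key in sensor-1 sensor-2 sensor-3 sensor-4; do
  token=$(printf '%s' "$key" | cksum | awk '{print $1}')
  echo "$key -> node $((token % 4))"
done
```

If every write reused the same key, all traffic would land on one replica set, which is exactly the hot spot the reply warns about; distinct keys scatter across token ranges.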
cqlsh> CREATE KEYSPACE demodb WITH replication = {'class':
'SimpleStrategy', 'replication_factor': 3};
cqlsh> use demodb;
cqlsh:demodb>
On Tue, Jan 22, 2013 at 7:04 PM, Paul van Hoven <
paul.van.ho...@googlemail.com> wrote:
> CREATE KEYSPACE demodb WITH strategy_class = 'SimpleStrategy'
> AND st
PM, Paul van Hoven <
paul.van.ho...@googlemail.com> wrote:
> Okay, that worked. Why is the statement from the tutorial wrong. I
> mean, why would a company like datastax post something like this?
>
> 2013/1/22 Jason Wee :
> > cqlsh> CREATE KEYSPACE demodb WITH replication
There is a limit option, find it in the doc.
On Fri, Feb 22, 2013 at 3:41 AM, Sri Ramya wrote:
> hi,
> Cassandra can display a maximum of 100 rows in a column family. Can I
> increase it? If it is possible, please mention here.
> Thank you
>
You need an equality operator in your query. For instance: SELECT * FROM
users WHERE country = 'malaysia' AND age > 20
On Thu, Feb 28, 2013 at 10:04 PM, Everton Lima wrote:
> Hello,
> I was using cql 2. I have the following query:
>SELECT * FROM users WHERE age > 20 AND age < 25;
>
> The table wa
This happened some time ago, but for the sake of helping others if they
encounter it:
each column family has a row cache provider; you can see it in the schema,
for example:
...
and row_cache_provider = 'SerializingCacheProvider'
...
it cannot start the cache provider for a reason and as a result,
version 1.0.8
Just curious, what is the mechanism for off heap in 1.1?
Thank you.
/Jason
On Mon, Mar 4, 2013 at 11:49 PM, aaron morton wrote:
> What version are you using ?
>
> As of 1.1 off heap caches no longer require JNA
> https://github.com/apache/cassandra/blob/trunk/N
ra let you use that: you can provide your own timestamp (using unix
> timestamp is just the default). The point being, unix timestamp is the
> better approximation we have in practice.
>
> --
> Sylvain
>
>
> On Mon, Mar 4, 2013 at 9:26 AM, Jason Tang wrote:
>
>> Hi
>>
&
try assassinate from JMX?
http://nartax.com/2012/09/assassinate-cassandra-node/
or try cassandra -Dcassandra.load_ring_state=false
http://www.datastax.com/docs/1.0/references/cassandra#options
On Tue, Mar 5, 2013 at 6:54 PM, Alain RODRIGUEZ wrote:
> Any clue on this ?
>
>
> 2013/2/25 Alain
Varun,
This a message better for the user@ ML.
Thanks,
-Jason
On Tue, May 16, 2017 at 3:41 AM, varun saluja wrote:
> Hi Experts,
>
> We are facing an issue on the production cluster. Compaction on the
> system.hints table has been running for the last 2 days.
>
>
> pending tasks: 1
removing dev@ from this conversation, as the thread is more appropriately
for user@
On Mon, Jun 12, 2017 at 4:51 AM, Eduardo Alonso
wrote:
> -Virtual tokens are not recommended when using SOLR or
> cassandra-lucene-index.
>
> If you use your table schema you will not have any problem with partit
Hi Andrew,
This question is best for the user@ list, included here.
Thanks,
-Jason
On Wed, Aug 30, 2017 at 10:00 AM, Andrew Whang
wrote:
> In evaluating 3.x, we found that hints are unable to be replayed between
> 2.x and 3.x nodes. This introduces a risk during the upgrade path fo
leads somewhere positive, that benefits everyone,
-Jason
On Wed, Feb 21, 2018 at 2:53 PM, Kenneth Brotman <
kenbrot...@yahoo.com.invalid> wrote:
> Hi Akash,
>
> I get the part about outside work which is why in replying to Jeff Jirsa I
> was suggesting the big companies could jus
I can't find any info related to dates anywhere.
jas
higher. You can also use nodetool cfstats to read the write
latency.
Thanks.
Jason
On Mon, Oct 27, 2014 at 8:45 PM, Or Sher wrote:
> Hi all,
>
> We're using Hector in one of our older use cases with C* 1.0.9.
> We suspect it increases our total round trip write latency