some other nodes
continued to be high, and we finally had to give up on the join. Is it that the
process of a new node joining is itself very slow, or is our usage improper (too
much data per node), causing this problem? Is there any good way to speed up the
process
of adding new nodes?
Thanks,
Kevin
Currently I'm using a client (Pelops) to insert UUIDs (both lexical and
time) into Cassandra. I haven't yet implemented a facility to remove them
with Pelops; I'm testing and refining the insertion mechanism.
As such, I would like to use the CLI to delete test UUID values. It seems,
however, t
Subject: Re: How to delete UUIDs from the CLI?
If you're not using 0.8.0 the cli deals poorly with non-string row keys.
On Sat, Jun 4, 2011 at 7:48 PM, Kevin wrote:
> Currently I'm using a client (Pelops) to insert UUIDs (both lexical
> and
> time) in to Cassandra. I
TimeUUIDs should be used for data that is time-based and requires
uniqueness.
TimeUUID comparisons compare the time-based portion of the UUID. So no, you
do not need to know the MAC addresses. In fact, for languages that cannot
get to that low of a level to access a MAC address (like Java), the
Correction. TimeUUID comparisons FIRST compare the time-based portion, then
go on to the other portion.
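That comparison rule can be sketched with Python's standard uuid module (a toy illustration, not Cassandra's comparator code):

```python
import uuid

def timeuuid_sort_key(u):
    # Order first by the embedded 60-bit timestamp, then fall back to
    # the remaining bytes as a tie-breaker -- the rule described above.
    return (u.time, u.bytes)

earlier = uuid.uuid1()  # time-based (version 1) UUID
later = uuid.uuid1()
assert sorted([later, earlier], key=timeuuid_sort_key) == [earlier, later]
```

Note that Python's uuid1 also works without raw MAC access; the node field just falls back to another 48-bit value, and ordering still holds because the timestamp dominates.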
From: Sameer Farooqui [mailto:cassandral...@gmail.com]
Sent: Tuesday, June 14, 2011 8:16 PM
To: user@cassandra.apache.org
Subject: When does it make sense to use TimeUUID?
I would like
When dealing with large SliceRanges, is it better to read all the results into
memory (by setting "count" to the largest value possible), or is it better
to divide the query into smaller SliceRange queries? Large in this case
being on the order of millions of rows.
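For what it's worth, the chunked approach can be sketched generically; fetch_page below is a hypothetical stand-in for whatever slice call the client exposes:

```python
def paged_slices(fetch_page, start, end, page_size=1000):
    """Split one huge slice into bounded pages.

    fetch_page(start, end, count) is assumed to return at most `count`
    items in ascending key order within [start, end].
    """
    last = start
    while True:
        page = fetch_page(last, end, page_size)
        if not page:
            return
        yield from page
        if len(page) < page_size:
            return
        last = page[-1] + 1  # resume just past the last key seen

# Toy demo over integers standing in for column keys:
rows = list(range(10_000))
fetch = lambda s, e, c: [x for x in rows if s <= x <= e][:c]
assert list(paged_slices(fetch, 0, 9_999, 1_000)) == rows
```

The `+ 1` resume step assumes integer keys; with real column names you would resume from the last key with an exclusive-start flag instead.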
There's a footnote concerning
There's pretty limited information on Cassandra's built-in secondary index
facility as is, but trying to find out why the secondary index has to have
low cardinality has been like finding a needle in a haystack... one that is
floating somewhere in the Atlantic.
Can someone explain why low cardinality
Now that OpsCenter doesn't work with open source installs, are there any
attempts at an open source equivalent? I'd be more interested in looking at
metrics of a running cluster and doing other tasks like managing
repairs/rolling restarts more so than historical data.
We're just doing a CAS operation now to read the existing value, then
increment it.
I think it might have been better to implement this as a counter. Would
that be inherently faster or would a CAS be about the same?
I can't really test it without deploying it so I figured I would just ask
here.
On Wed, Jul 20, 2016 at 11:53 AM, Jeff Jirsa
wrote:
> Can you tolerate the value being “close, but not perfectly accurate”? If
> not, don’t use a counter.
>
>
>
yeah.. agreed.. this is a problem I was already considering. I
guess it depends on whether they are 10x faster..
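For reference, the two shapes under discussion look roughly like this in CQL (table and column names hypothetical):

```sql
-- Counter column: increments are commutative and cheap, but the value
-- is "close, not perfectly accurate" under retries after timeouts.
CREATE TABLE hits (id text PRIMARY KEY, n counter);
UPDATE hits SET n = n + 1 WHERE id = 'page1';

-- CAS read-then-increment (lightweight transaction): exact, but every
-- write pays Paxos round-trips.
CREATE TABLE tallies (id text PRIMARY KEY, n bigint);
UPDATE tallies SET n = 5 WHERE id = 'page1' IF n = 4;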
We have a 60 node CS cluster running 2.2.7 and about 20GB of RAM allocated
to each C* node. We're aware of the recommended 8GB limit to keep GCs low
but our memory has been creeping up (probably) related to this bug.
Here's what we're seeing... if we do a low level of writes we think
everything g
index/content_legacy_2016_08_02:1470154500099 (106107128 bytes)
On Tue, Aug 2, 2016 at 6:43 PM, Kevin Burton wrote:
> We have a 60 node CS cluster running 2.2.7 and about 20GB of RAM allocated
> to each C* node. We're aware of the recommended 8GB limit to keep GCs low
> but our memory has been cr
to make your partitions smaller (like
> 1/10th of the size).
>
> Cheers
> Ben
> <https://issues.apache.org/jira/browse/CASSANDRA-11206>
>
> On Wed, 3 Aug 2016 at 12:35 Kevin Burton wrote:
>
>> I have a theory as to what I think is happening here.
>>
>
nt, your best
>> solution would be to find a way to make your partitions smaller (like
>> 1/10th of the size).
>>
>> Cheers
>> Ben
>> <https://issues.apache.org/jira/browse/CASSANDRA-11206>
>>
>> On Wed, 3 Aug 2016 at 12:35 Kevin Burton wrote:
We usually use 100 per every 5 minutes.. but you're right. We might
actually move this use case over to using Elasticsearch in the next couple
of weeks.
On Wed, Aug 3, 2016 at 11:09 AM, Jonathan Haddad wrote:
> Kevin,
>
> "Our scheme uses large buckets of content where we
nt or what CQL is causing the
large mutation.
Any thoughts on how to mitigate this?
Kevin
--
We’re hiring if you know of any awesome Java Devops or Linux Operations
Engineers!
Founder/CEO Spinn3r.com
Location: *San Francisco, CA*
blog: http://burtonator.wordpress.com
… or check out my Google+ profile
(but other drivers should
> have a similar exception):
> https://github.com/datastax/python-driver/blob/master/cassandra/protocol.py#L288
>
> On Wed, Aug 3, 2016 at 1:59 PM Ryan Svihla wrote:
>
>> Made a Jira about it already
>> https://issues.apache.org/jira/plugi
BTW. we think we tracked this down to using large partitions to implement
inverted indexes. C* just doesn't do a reasonable job at all with large
partitions, so we're going to migrate this use case to Elasticsearch.
On Wed, Aug 3, 2016 at 1:54 PM, Ben Slater
wrote:
> Yep, that was what I w
s pretty surprising. Why would this explicit request to compact an
sstable not remove tombstones?
Thanks!
Kevin
On Fri, Sep 2, 2016 at 9:33 AM, Mark Rose wrote:
> Hi Kevin,
>
> The tombstones will live in an sstable until it gets compacted. Do you
> have a lot of pending compactions? If so, increasing the number of
> parallel compactors may help.
Nope, we are pretty well managed on co
nodetool cfstats has some valuable data but what I would like is a 1 minute
delta.
Similar to iostat...
It's easy to parse this but has anyone done it?
I want to see IO throughput and load on C* for each table.
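The delta computation itself is trivial once the parsing is done; a minimal sketch, where prev/curr are assumed to be {table: cumulative counter} snapshots taken from two cfstats runs:

```python
def deltas(prev, curr, interval_s=60):
    # prev/curr: {table: cumulative_counter} snapshots parsed from two
    # `nodetool cfstats` runs taken `interval_s` seconds apart;
    # returns per-second rates, iostat-style.
    return {t: (curr[t] - prev.get(t, 0)) / interval_s for t in curr}

assert deltas({"foo": 100}, {"foo": 160, "bar": 60}) == {"foo": 1.0, "bar": 1.0}
```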
We get lots of write timeouts when we decommission a node. About 80% of
them are write timeouts and about 20% are read timeouts.
We’ve tried to adjust streamthroughput (and compaction throughput) for that
matter and that doesn’t resolve the issue.
We’ve increased write_request_timeout
in failures of CAS?
This is Cassandra 2.0.9 btw.
On Wed, Jul 1, 2015 at 2:22 PM, Kevin Burton wrote:
> We get lots of write timeouts when we decommission a node. About 80% of
> them are write timeout and just about 20% of them are read timeout.
>
> We’ve tried to adjust streamthrou
WOW.. nice. you rock!!
On Wed, Jul 1, 2015 at 3:18 PM, Robert Coli wrote:
> On Wed, Jul 1, 2015 at 2:58 PM, Kevin Burton wrote:
>
>> Looks like all of this is happening because we’re using CAS operations
>> and the driver is going to SERIAL consistency level.
>> ...
I can’t seem to find a decent resource to really explain this…
Our app seems to fail some write requests, a VERY low percentage. I’d like
to retry the write requests that fail due to number of replicas not being
correct.
http://docs.datastax.com/en/developer/java-driver/2.0/common/drivers/refere
I have a table which just has primary keys.
basically:
create table foo (
sequence bigint,
signature text,
primary key( sequence, signature )
)
I need these to eventually get GC'd; however, it doesn't seem to work.
If I then run:
select ttl(sequence) from foo;
I get:
Cannot use sel
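The error message is cut off above, but it is presumably CQL refusing ttl() on a primary key column: since every column in this table is part of the primary key, there is no regular cell to carry a TTL. One workaround sketch (the touched column is hypothetical, and this is not necessarily what the thread settled on):

```sql
CREATE TABLE foo (
    sequence bigint,
    signature text,
    touched boolean,   -- hypothetical non-key column to carry the TTL
    PRIMARY KEY (sequence, signature)
);

INSERT INTO foo (sequence, signature, touched)
VALUES (1, 'abc', true) USING TTL 86400;   -- row expires after one day

SELECT ttl(touched) FROM foo;   -- ttl() is only valid on non-key columns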
RA-9312.
>
> On Tue, Aug 4, 2015 at 9:22 PM, Kevin Burton wrote:
>
>> I have a table which just has primary keys.
>>
>> basically:
>>
>> create table foo (
>>
>> sequence bigint,
>> signature text,
>> primary key( sequ
Mildly off topic but we are looking to hire someone with Cassandra
experience..
I don’t necessarily want to spam the list though. We’d like someone from
the community who contributes to Open Source, etc.
Are there forums for Apache / Cassandra, etc for jobs? I couldn't find one.
Is there any advantage to using say 40 columns per row vs using 2 columns
(one for the pk and the other for data) and then shoving the data into a
BLOB as a JSON object?
To date, we’ve been just adding new columns. I profiled Cassandra and
about 50% of the CPU time is spent on CPU doing compactio
ile=nodes Averages from the middle 80% of
> values:interval_op_rate : 23489
>
> From: on behalf of Kevin Burton
> Reply-To: "user@cassandra.apache.org"
> Date: Sunday, August 23, 2015 at 1:02 PM
> To: "user@cassandra.apache.org"
> Subject: Practical limitation
Hey.
I’m considering migrating my DB from using multiple columns to just 2
columns, with the second one being a JSON object. Is there going to be any
real difference between TEXT and a UTF-8 encoded BLOB?
I guess it would probably be easier to get tools like spark to parse the
object as JSON if it’
shows a ton of
> different examples, but they’re not scientific, and at this point they’re
> old versions (and performance varies version to version).
>
> - Jeff
>
> From: on behalf of Kevin Burton
> Reply-To: "user@cassandra.apache.org"
> Date:
ge this, but
> it's good to have it on the radar.
>
>
> On Sun, Aug 23, 2015 at 10:31 PM Kevin Burton wrote:
>
>> Agreed. We’re going to run a benchmark. Just realized we grew to 144
>> columns. Fun. Kind of disappointing that Cassandra is so slow in this
Check out KairosDB for a time series db on Cassandra.
On Aug 31, 2015 7:12 AM, "Peter Lin" wrote:
>
> I didn't realize they had added max and min as stock functions.
>
> to get the sample time. you'll probably need to write a custom function.
> google for it and you'll find people that have done i
should post to general@
…
On Fri, Sep 11, 2015 at 5:34 PM, Otis Gospodnetić <
otis.gospodne...@gmail.com> wrote:
> Hey Kevin - I think there is j...@apache.org
>
> Otis
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch S
I’m trying to benchmark two scenarios…
10 columns with 150 bytes each
vs
150 columns with 10 bytes each.
The total row “size” would be 1500 bytes (ignoring overhead).
Our app uses 150 columns so I’m trying to see if packing it into a JSON
structure using one column would improve performance.
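A rough sketch of the packing being benchmarked (column names and values are hypothetical placeholders):

```python
import json

# 150 columns of 10 bytes each, packed into one JSON text cell.
row = {f"col_{i:03d}": "0123456789" for i in range(150)}
blob = json.dumps(row, separators=(",", ":"))

# The payload is ~1500 bytes either way; what changes is the number of
# cells Cassandra must store, merge, and compact per row.
assert len(row) == 150
assert json.loads(blob) == row
```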
Any issues with running Cassandra 2.0.16 on Java 8? I remember there is
long-standing advice about not changing the GC, but nothing about the
underlying version of Java.
Thoughts?
I wanted to share this with the community in the hopes that it might help
someone with their schema design.
I didn't get any red flags early on to limit the number of columns we use.
If anything the community pushes for dynamic schema because Cassandra has
super nice online ALTER TABLE.
However,
k JDK9 will be the one.
>
> On Sep 25, 2015, at 7:14 PM, Stefano Ortolani wrote:
>
> I think those were referring to Java7 and G1GC (early versions were buggy).
>
> Cheers,
> Stefano
>
>
> On Fri, Sep 25, 2015 at 5:08 PM, Kevin Burton wrote:
>
>> Any issu
How many can we decommission?
I remember reading docs for this but hell if I can find it now :-P
I know what the answer is theoretically. I just want to make sure we do
everything properly.
Kevin
rote:
> On Tue, Oct 6, 2015 at 12:32 PM, Kevin Burton wrote:
>
>> How many nodes can we bootstrap at once? How many can we decommission?
>>
>
> short answer : 1 node can join or part at a time
>
> longer answer : https://issues.apache.org/jira/browse/CASSANDRA-2
TCP tuning,
>
> On Tue, Oct 6, 2015 at 1:29 PM, Kevin Burton wrote:
>
>> I'm not sure which is faster/easier. Just joining one box at a time and
>> then decommissioning or using replace_address.
>>
>> this stuff is always something you do rarely and then more comple
I find it really frustrating that nodetool status doesn't include a hostname.
Makes it harder to track down problems.
I realize it PRIMARILY uses the IP, but perhaps cassandra.yaml could include an
optional 'hostname' parameter that can be set by the user. OR have the box
itself include the hostname
Let's say I have 10 nodes, I add 5 more, if I fail to run nodetool cleanup,
is excessive data transferred when I add the 6th node? I.e., do the existing
nodes send more data to the 6th node?
the documentation is unclear. It sounds like the biggest problem is that
the existing data causes things to
e technology,
> delivering Apache Cassandra to the world’s most innovative enterprises.
> Datastax is built to be agile, always-on, and predictably scalable to any
> size. With more than 500 customers in 45 countries, DataStax is the
> database technology and transactional backbone of cho
ess. However, it was easily 20-30x faster.
This probably saved me about 5 hours of sleep!
In hindsight, I'm not sure what we would have done differently. Maybe
bought more boxes. Maybe upgraded to Cassandra 2.2 and probably java 8 as
well.
Setting up datacenter migration might have worked out
auto bootstrapped themselves EVEN though
auto_bootstrap=false.
We don't have any errors. Everything seems functional. I'm just worried
about data loss.
Thoughts?
Kevin
My advice is to not even consider anything else or make any other changes
to your architecture until you get onto a modern and maintained filesystem.
VERY VERY VERY few people are deploying anything on ReiserFS so you're
going to be the first group encountering any problems.
On Thu, Oct 15, 2015
Ah shit.. I think we're seeing corruption.. missing records :-/
On Sat, Oct 17, 2015 at 10:45 AM, Kevin Burton wrote:
> We just migrated from a 30 node cluster to a 45 node cluster. (so 15 new
> nodes)
>
> By default we have auto_bootstrap = false
>
> so we just push ou
I'm doing a big nodetool repair right now and I'm pretty sure the added
overhead is impacting our performance.
Shouldn't you be able to throttle repair so that normal compactions can use
most of the resources?
if done on a single
> node, is typically correctable with `nodetool repair`.
>
> If you do it on many nodes at once, it’s possible that the new nodes
> could represent all 3 replicas of the data, but don’t physically have any
> of that data, leading to missing records.
>
>
>
this would resolve this problem.
IF anyone else thinks this is an issue I'll create a JIRA.
On Mon, Oct 19, 2015 at 3:38 PM, Robert Coli wrote:
> On Mon, Oct 19, 2015 at 9:30 AM, Kevin Burton wrote:
>
>> I think the point I was trying to make is that on highly loaded boxes,
>&g
Have you tried restarting? It's possible there's open file handles to
sstables that have been compacted away. You can verify by doing lsof and
grepping for DEL or deleted.
If it's not that, you can run nodetool cleanup on each node to scan all of
the sstables on disk and remove anything that it's
Internally we have the need for a blob store for web content. It's MOSTLY
key/value based but we'd like to have lookups by coarse grained tags.
This needs to store normal web content like HTML , CSS, JPEG, SVG, etc.
Highly doubt that anything over 5MB would need to be stored.
We also need the
18, 2016 at 6:52 PM, Kevin Burton wrote:
>
>> Internally we have the need for a blob store for web content. It's
>> MOSTLY key, ,value based but we'd like to have lookups by coarse grained
>> tags.
>>
>
> I know you know how to operate and scale MySQ
There's also the 'support' issue.. C* is hard enough as it is... maybe you
can bring in another system like ES or HDFS but the more you bring in the
more your complexity REALLY goes through the roof.
Better to keep things simple.
I really like the chunking idea for C*... seems like an easy way to
I think there are two strategies for running upgradesstables after a release.
We're doing a 2.0 to 2.1 upgrade (been procrastinating here).
I think we can go with B below... Would you agree?
Strategy A:
- foreach server
- upgrade to 2.1
- nodetool upgradesstables
Strategy B:
-
Not sure if this is a bug or not or kind of a *fuzzy* area.
In 2.0 this worked fine.
We have a bunch of automated scripts that go through and create tables...
one per day.
at midnight UTC our entire CQL went offline... took down our whole app. ;-/
The resolution was a full CQL shut down and th
47 PM, Jonathan Haddad wrote:
> Instead of using ZK, why not solve your concurrency problem by removing
> it? By that, I mean simply have 1 process that creates all your tables
> instead of creating a race condition intentionally?
>
> On Fri, Jan 22, 2016 at 6:16 PM Kevin Burton wrote:
fic Jira assigned, and the antipattern doc doesn't appear to
> reference this scenario. Maybe a committer can shed some more light.
>
> -- Jack Krupansky
>
> On Fri, Jan 22, 2016 at 10:29 PM, Kevin Burton wrote:
>
>> I sort of agree.. but we are also considering migrating t
Is there a faster way to get the output of 'nodetool status' ?
I want us to more aggressively monitor for 'nodetool status' and boxes
being DN...
I was thinking something like jolokia and REST but I'm not sure if there
are variables exported by jolokia for nodetool status.
Thoughts?
On behalf of the development community, I am pleased to announce the
release of YCSB 0.7.0.
Highlights:
* GemFire binding replaced with Apache Geode (incubating) binding
* Apache Solr binding was added
* OrientDB binding improvements
* HBase Kerberos support and use single connection
* Accumulo i
I have a paging model whereby we stream data from CS by fetching 'pages'
thereby reading (sequentially) entire datasets.
We're using the bucket approach where we write data for 5 minutes, then we
can just fetch the bucket for that range.
Our app now has TONS of data and we have a piece of middlew
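The bucket lookup in that scheme reduces to simple arithmetic; a sketch (the app's real key scheme is not shown in the thread):

```python
def bucket_for(ts_ms, width_ms=5 * 60 * 1000):
    # Map a millisecond timestamp to the start of its 5-minute bucket;
    # a sequential read then walks bucket keys (partitions) in order.
    return ts_ms - (ts_ms % width_ms)

assert bucket_for(1470154500099) == 1470154500000
assert bucket_for(0) == 0
```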
Ha.. Yes... C*... I guess I need something like coprocessors in bigtable.
On Fri, Apr 8, 2016 at 1:49 AM, vincent gromakowski <
vincent.gromakow...@gmail.com> wrote:
> c* I suppose
>
> 2016-04-07 19:30 GMT+02:00 Jonathan Haddad :
>
>> What is CS?
>>
>> O
Are you in VPC or EC2 Classic? Are you using enhanced networking?
On Tue, Apr 12, 2016 at 9:52 AM, Alessandro Pieri wrote:
> Hi Jack,
>
> As mentioned before I've used m3.xlarge instance types together with two
> ephemeral disks in raid 0 and, according to Amazon, they have "high"
> network perf
specify localhost or 127.0.0.1 and change it to the IP
address of the machine/server where it is running. I am assuming that I have
hit all the right configuration points. Ideas?
Thank you.
Kevin
Subject: RE: Connecting to cassandra.
Importance: Low
The first thing to check is the log files under /var/log/cassandra, should
give you some hint.
Thanks.
-Wei
Sent from my Samsung smartphone on AT&T
Original message
Subject: Connecting to cassandra.
From:
sstable2json
3) If you add a built-in secondary index the type information is needed;
strings sort differently than integers
4) columns in rows are sorted by the column name; strings sort differently
than integers
On Sat, Nov 10, 2012 at 11:55 PM, Kevin Burton
wrote:
> I am sure this has been as
Subject: RE: Connecting to cassandra.
From: Kevin Burton
To: user@cassandra.apache.org
CC:
Thank you. In the output.log I see the line:
INFO 13:36:59,110 This node will not auto bootstrap because it is configured to
be a seed node.
A
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 12/11/2012, at 8:06 AM, Kevin Burton wrote:
Thank you this helps with my understanding.
So the goal here is to supply as many name/type pairs as can reasonably be
foreseen when the column fami
I am sorry if this is an FAQ. But I was wondering what the syntax is for
describing an array? I have gotten as far as feeling a need to understand a
'super-column' but I fail after that. Once I have the metadata in place to
describe an array how do I insert data into the array? Get data from the
arra
>
> On 12/11/2012, at 8:35 PM, Kevin Burton wrote:
>
>> I am sorry if this is an FAQ. But I was wondering what the syntax for
>> describing an array? I have gotten as far as feeli
On 13/11/2012, at 9:46 AM, Kevin Burton wrote:
While this solves the problem for an array of 'primitive' types. What if I
want an array or collection of an arbitrary type like list<foo>, where foo
is a user defined type? I am guessing that this cannot be done with
'collecti
good starting point
http://www.datastax.com/docs/1.1/references/cql/index
On 14/11/2012, at 2:42 AM, Kevin Burton wrote:
Sorry to be so slow but I am just learning CQL. Would this synt
as "variable names" to
identify a particular vector or list. They are the storage engine "row key".
On 14/11/2012, at 5:31 PM, Kevin
On 15/11/2012, at 10:38 AM, Kevin Burton wrote:
> An array would be a list of groups of items. In my case I want a
list/array of line items. An order has certain characteristics and one of
them is a list of the items that are being ordered. Say
y uses composite key, which gives you
additional capabilities like order by in the where clause
On Wed, Nov 14, 2012 at 5:27 PM, Kevin Burton
wrote:
I hope I am not bugging you but now what is the purpose of PRIMARY KEY(id,
item_id)? By expressing the KEY as two values this basically gives the
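A sketch of what the two-part key gives you (the qty column is hypothetical): the first component becomes the partition key and the second a clustering column, so rows inside a partition are stored sorted by item_id and can be range-scanned or reverse-ordered:

```sql
CREATE TABLE orders (
    id bigint,
    item_id bigint,
    qty int,              -- hypothetical payload column
    PRIMARY KEY (id, item_id)
);

-- All items for one order, highest item_id first:
SELECT * FROM orders WHERE id = 42 ORDER BY item_id DESC;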
Is there an IDE for a Cassandra database? Similar to SQL Server
Management Studio for SQL Server. I mainly want to execute queries and see
the results. Preferably one that runs under a Windows OS.
Thank you.
Great post Akhil! Thanks for explaining that.
On Mon, May 29, 2017 at 5:43 PM, Akhil Mehra wrote:
> Hi Preetika,
>
> After thinking about your scenario I believe your small SSTable size might
> be due to data compression. By default, all tables enable SSTable
> compression.
>
> Let go through yo
This might be an interesting question - but is there a way to truncate data
from just a single node or two as a test instead of truncating from the
entire cluster? We have time series data we don't really care if we're
missing gaps in, but it's taking up a huge amount of space and we're
looking to
Thanks for the suggestions! Could altering the RF from 2 to 1 cause any
issues, or will it basically just be changing the coordinator's write paths
and also guiding future repairs/cleans?
On Wed, Jul 12, 2017 at 22:29 Jeff Jirsa wrote:
>
>
> On 2017-07-11 20:09 (-0700), &
Are you saying if a node had double the hardware capacity in every way it
would be a bad idea to up num_tokens? I thought that was the whole idea of
that setting though?
On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo wrote:
> No.
>
> If you would double all the hardware on that node vs the others
I'm tracking down a weird bug and was wondering if you guys had any
feedback.
I'm trying to create ten tables programmatically...
The first one I create, for some reason, isn't created.
The other 9 are created without a problem.
I'm doing this with the datastax driver's session.execute().
No ex
uyHai Doan wrote:
> Can you just give the C* version and the complete DDL script to reproduce
> the issue ?
>
>
> On Wed, Aug 13, 2014 at 10:08 PM, Kevin Burton wrote:
>
>> I'm tracking down a weird bug and was wondering if you guys had any
>> feedback.
>>
and I'm certain that the CQL is executing… because I get a ResultSet back
and verified that the CQL is correct.
On Wed, Aug 13, 2014 at 1:26 PM, Kevin Burton wrote:
> 2.0.5… I'm upgrading to 2.0.9 now just to rule this out….
>
> I can give you the full CQL for the table,
yeah… problem still exists on 2.0.9
On Wed, Aug 13, 2014 at 1:26 PM, Kevin Burton wrote:
> and I'm certain that the CQL is executing… because I get a ResultSet back
> and verified that the CQL is correct.
>
>
> On Wed, Aug 13, 2014 at 1:26 PM, Kevin Burton wrote:
>
>
ah.. good idea. I'll try that now.
On Wed, Aug 13, 2014 at 1:36 PM, DuyHai Doan wrote:
> Maybe tracing the requests ? (just the one creating the schema of course)
>
>
> On Wed, Aug 13, 2014 at 10:30 PM, Kevin Burton wrote:
>
>> yeah… problem still exists on 2.0.9
>
ug 13, 2014 at 1:38 PM, Kevin Burton wrote:
> ah.. good idea. I'll try that now.
>
>
> On Wed, Aug 13, 2014 at 1:36 PM, DuyHai Doan wrote:
>
>> Maybe tracing the requests ? (just the one creating the schema of course)
>>
>>
>> On Wed, Aug 13, 2014
the tables back out, or run a SELECT against it, it will
fail.
Hm…
On Wed, Aug 13, 2014 at 1:52 PM, Kevin Burton wrote:
> It still failed. Tracing shows that the query is being executed. Just
> that the table isn't created. I did a diff against the two table names and
> the only
the table? This feels
> like code error rather than a database bug.
>
>
> On Wed, Aug 13, 2014 at 1:26 PM, Kevin Burton wrote:
>
>> 2.0.5… I'm upgrading to 2.0.9 now just to rule this out….
>>
>> I can give you the full CQL for the table, but I can't seem t
e,
or another system table which defines them…
(just thinking out loud)
Kevin
The DataStax java driver has a Row object with getInt, getLong, etc. methods…
However, getString only works on string columns.
That's probably reasonable… but if I have a raw Row, how the heck do I
easily print it?
I need a handy way to dump a ResultSet …
make a lot of sense.
We're VERY IO bound… so for us SSD is a no brainer.
We were actually all memory before because of this and just finished a big
SSD migration … (though on MySQL)…
But our Cassandra deploy will be on SSD on Softlayer.
It's a no brainer really..
Kevin
On Tue, A
I agree that it belongs on that mailing list but it's set up weird... I
can't subscribe to it in Google Groups… I am not sure what exactly is wrong
with it. I mailed the admins but it hasn't been resolved.
On Tue, Aug 19, 2014 at 1:49 AM, Sylvain Lebresne
wrote:
> This kind of question belong to
can't delete data or truncate the table either.
So when your cluster fills up, it's just dead.
Kevin
>
>
>> +1, though because you can't drop the snapshots those two commands
> automatically create (if the snapshot-before-DROP even works with disk
> full, which it probably doesn't...) you still need access to the machines
> to reclaim your disk space.
>
>
True.. I actually disabled the snapshot f
How do I watch the progress of nodetool repair?
Looks like the folklore from the list says to just use
nodetool compactionstats
nodetool netstats
… but the repair seems locked/stalled and neither of these are showing any
progress..
granted , this is a lot of data, but it would be nice to at lea
Say I want to do a rolling restart of Cassandra…
I can’t just restart all of them because they need some time to gossip and
for that gossip to get to all nodes.
What is the best strategy for this?
It would be something like:
/etc/init.d/cassandra restart && wait-for-cassandra.sh
… or something
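One way to structure the "restart && wait" step is a generic retry loop; the probe below is a hypothetical callable that would shell out to nodetool status and look for the restarted node in UN (Up/Normal) state before moving to the next box:

```python
import time

def wait_until(probe, tries=60, delay=5.0):
    # Retry a zero-arg boolean probe until it succeeds or we give up;
    # pausing between nodes gives gossip time to settle cluster-wide.
    for _ in range(tries):
        if probe():
            return True
        time.sleep(delay)
    return False

# Demo with a trivial probe; a real one would run `nodetool status`.
assert wait_until(lambda: True, tries=1, delay=0)
```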