Thanks,
Mike
--
properties, you’ll compact
> away most of the other data in those old sstables (but not the partition
> that’s been manually updated)
>
> Also table level TTLs help catch this type of manual manipulation -
> consider adding it if appropriate.
>
> --
> Jeff Jirsa
>
>
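A table-level default TTL, as Jeff suggests above, is a one-line schema change. A minimal sketch, assuming a hypothetical keyspace/table and a 30-day retention (pick whatever matches your data):
```
-- Any cell written without an explicit TTL now inherits this default,
-- so manually re-inserted data cannot linger indefinitely.
ALTER TABLE my_ks.my_cf WITH default_time_to_live = 2592000;  -- 30 days
```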
> effectively blocks all other expiring cells from being purged.
>
> --
> Jeff Jirsa
>
>
> On May 3, 2019, at 7:57 PM, Nick Hatfield
> wrote:
>
> Hi Mike,
>
>
>
> If you will, share your compaction settings. More than likely, your issue
> is from 1 of
Thx for the help Paul - there are definitely some details here I still
don't fully understand, but this helped me resolve the problem and know
what to look for in the future :)
On Fri, May 3, 2019 at 12:44 PM Paul Chandler wrote:
> Hi Mike,
>
> For TWCS the sstable can only be de
be isolated the way it is (ie only one CF even
though I have a few others that share a very similar schema, and only some
nodes) seems like it will help me prevent it.
On Thu, May 2, 2019 at 1:00 PM Paul Chandler wrote:
> Hi Mike,
>
> It sounds like that record may have been deleted, i
etion_info" : {
"local_delete_time" : "2019-01-22T17:59:35Z" }
}
]
}
]
}
```
As expected, almost all of the data except this one suspicious partition
has a ttl and is already expired. But if a partition isn't expired and I
see it in t
have any other ideas? Why does the row show in `sstabledump`
but not when I query for it?
I appreciate any help or suggestions!
- Mike
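For anyone hitting the same confusion: a deletion like the one recorded in `local_delete_time` above writes a tombstone that shadows the row at query time, while the original cells stay on disk (and visible to `sstabledump`) until compaction purges them. A minimal sketch with hypothetical names:
```
-- After this delete, SELECT no longer returns the row, but sstabledump
-- still shows the old cells plus the tombstone until compaction removes
-- them (once gc_grace_seconds has elapsed).
DELETE FROM my_ks.my_cf WHERE id = 'suspicious-partition-key';
```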
the problem:
```
pooling: {
  coreConnectionsPerHost: {
    [distance.local]: 2,
    [distance.remote]: 0
  }
}
```
Any suggestions?
- Mike
8 at 2:24 PM, Mike Torra wrote:
>
>> Hi There -
>>
>> I have noticed an issue where I consistently see high p999 read latency
>> on a node for a few hours after replacing the node. Before replacing the
>> node, the p999 read latency is ~30ms, but after it increa
EBS volumes, but it seems too consistent to be actually caused by
that. The problem is consistent across multiple replacements, and multiple
EC2 regions.
I appreciate any suggestions!
- Mike
Then could it be that calling `nodetool drain` after calling `nodetool
disablegossip` is what causes the problem?
On Mon, Feb 12, 2018 at 6:12 PM, kurt greaves wrote:
>
> Actually, it's not really clear to me why disablebinary and thrift are
> necessary prior to drain, because they happen in th
s that I moved
`nodetool disablegossip` to after `nodetool drain`. This is pretty
anecdotal, but is there any explanation for why this might happen? I'll be
monitoring my cluster closely to see if this change does indeed fix the
problem.
On Mon, Feb 12, 2018 at 9:33 AM, Mike Torra wrote:
>
Any other ideas? If I simply stop the node, there is no latency problem,
but once I start the node the problem appears. This happens consistently
for all nodes in the cluster
On Wed, Feb 7, 2018 at 11:36 AM, Mike Torra wrote:
> No, I am not
>
> On Wed, Feb 7, 2018 at 11:35 AM, Jeff Jir
No, I am not
On Wed, Feb 7, 2018 at 11:35 AM, Jeff Jirsa wrote:
> Are you using internode ssl?
>
>
> --
> Jeff Jirsa
>
>
> On Feb 7, 2018, at 8:24 AM, Mike Torra wrote:
>
> Thanks for the feedback guys. That example data model was indeed
> abbreviated - the re
g nodes easier (or rather, we need to make drain do
> the right thing), but in this case, your data model looks like the biggest
> culprit (unless it's an incomplete recreation).
>
> - Jeff
>
>
> On Tue, Feb 6, 2018 at 10:58 AM, Mike Torra wrote:
>
>> Hi -
>
ng else I should be doing to
gracefully restart the cluster? It could be something to do with the nodejs
driver, but I can't find anything there to try.
I appreciate any suggestions or advice.
- Mike
overall data storage and create another
table and periodically transfer data from the main to the child table. But I
believe I'll get the same problem because Cassandra simply doesn't sort the way an
RDBMS does. So there must be an idea behind the philosophy of Cassandra.
Can anyone help me out?
Best regards
Mike Wenzel
(1)https://www.datastax.com/dev/blog/we-shall-have-order
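The linked post (1) is about the fact that row ordering in Cassandra is declared at table-design time, per partition, through clustering columns rather than at query time. A minimal sketch, with hypothetical names:
```
-- Within each 'day' partition, rows are stored (and returned) sorted by
-- created_at descending; ORDER BY in queries can only follow this order
-- or its exact reverse.
CREATE TABLE IF NOT EXISTS my_ks.events_by_day (
    day        date,
    created_at timestamp,
    event_id   uuid,
    payload    text,
    PRIMARY KEY ((day), created_at, event_id)
) WITH CLUSTERING ORDER BY (created_at DESC, event_id ASC);
```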
I'm trying to use sstableloader to bulk load some data to my 4 DC cluster,
and I can't quite get it to work. Here is how I'm trying to run it:
sstableloader -d 127.0.0.1 -i {csv list of private ips of nodes in cluster}
myks/mttest
At first this seems to work, with a steady stream of logging like
a way to tell when/if the local node has successfully updated the
compaction strategy? Looking at the sstable files, it seems like they are
still based on STCS but I don't know how to be sure.
Appreciate any tips or suggestions!
On Mon, Mar 13, 2017 at 5:30 PM, Mike Torra wrote:
>
I'm trying to change compaction strategy one node at a time. I'm using
jmxterm like this:
`echo 'set -b
org.apache.cassandra.db:type=ColumnFamilies,keyspace=my_ks,columnfamily=my_cf
CompactionParametersJson
\{"class":"TimeWindowCompactionStrategy","compaction_window_unit":"HOURS","compaction_windo
I can't say that I have tried that while the issue is going on, but I have
done such rolling restarts for sure, and the timeouts still occur every
day. What would a rolling restart do to fix the issue?
In fact, as I write this, I am restarting each node one by one in the
eu-west-1 datacenter, and
the value of phi_convict_threshold to 12, as suggested
here:
https://docs.datastax.com/en/cassandra/3.0/cassandra/architecture/archDataDistributeFailDetect.html.
This does not seem to have changed anything on the nodes that I've changed
it on.
I appreciate any suggestions on what else to try in order to track down
these timeouts.
- Mike
g>"
mailto:user@cassandra.apache.org>>
Date: Saturday, January 14, 2017 at 1:25 PM
To: "user@cassandra.apache.org<mailto:user@cassandra.apache.org>"
mailto:user@cassandra.apache.org>>
Subject: Re: implementing a 'sorted set' on top of cassandra
Mik
We currently use redis to store sorted sets that we increment many, many times
more than we read. For example, only about 5% of these sets are ever read. We
are getting to the point where redis is becoming difficult to scale (currently
at >20 nodes).
We've started using cassandra for other thin
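One minimal CQL sketch for this kind of increment-heavy, rarely-read workload (hypothetical names, and only one of several possible models): keep a counter per (set, member) and sort by score at read time, since only ~5% of sets are ever read.
```
-- Counter table: a counter column may only share a table with the primary key.
CREATE TABLE IF NOT EXISTS my_ks.scored_sets (
    set_id text,
    member text,
    score  counter,
    PRIMARY KEY (set_id, member)
);

-- Increments are cheap single-partition counter updates
-- (note: counter writes are not idempotent on retry):
UPDATE my_ks.scored_sets SET score = score + 1
WHERE set_id = 'top-articles' AND member = 'article-42';

-- The rare reads fetch the whole partition and sort client-side:
SELECT member, score FROM my_ks.scored_sets WHERE set_id = 'top-articles';
```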
Just bumping - has anyone seen this before?
http://stackoverflow.com/questions/41446352/cassandra-3-9-jvm-metrics-have-bad-name
From: Mike Torra <mto...@demandware.com>
Reply-To: "user@cassandra.apache.org"
e, too. The rest of the jvm.* metrics have this extra '.'
character that causes them to not show up in graphite.
Am I missing something silly here? Appreciate any help or suggestions.
- Mike
Date: Wednesday, November 2, 2016 at 1:07 PM
To: "user@cassandra.apache.org"
Subject: Re: failing bootstraps with OOM
On Wed, Nov 2, 2016 at 3:
'm still having issues
I'd appreciate any suggestions on what else I can try to track down the cause
of these OOM exceptions.
- Mike
Garo,
No, we didn't notice any change in system load, just the expected spike in
packet counts.
Mike
On Wed, Jul 20, 2016 at 3:49 PM, Juho Mäkinen
wrote:
> Just to pick this up: Did you see any system load spikes? I'm tracing a
> problem on 2.2.7 where my cluster sees load sp
urbed by the initial timeout
spike which leads to dropping all / high-percentage of all subsequent
traffic.
We are planning to continue production use with msg coalescing disabled for
now and may run tests in our staging environments to identify where the
coalescing is breaking this.
Mike
On Tue,
Jeff,
Thanks, yeah we updated to the 2.16.4 driver version from source. I don't
believe we've hit the bugs mentioned in earlier driver versions.
Mike
On Mon, Jul 4, 2016 at 11:16 PM, Jeff Jirsa
wrote:
> AWS ubuntu 14.04 AMI ships with buggy enhanced networking driver –
> d
Jens,
We haven't noticed any particular large GC operations or even persistently
high GC times.
Mike
On Thu, Jun 30, 2016 at 3:20 AM, Jens Rantil wrote:
> Hi,
>
> Could it be garbage collection occurring on nodes that are more heavily
> loaded?
>
> Cheers,
> Jens
>
One thing to add, if we do a rolling restart of the ring the timeouts
disappear entirely for several hours and performance returns to normal.
It's as if something is leaking over time, but we haven't seen any
noticeable change in heap.
On Thu, Jun 23, 2016 at 10:38 AM, Mike Heffner wr
hts on what to look for? Can we increase thread count/pool sizes
for the messaging service?
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
Take a look at the records in system.compaction_history:
select * from system.compaction_history;
Regards,
Mike Yeap
On Tue, May 31, 2016 at 5:21 PM, Paul Dunkler wrote:
> And - as an addition:
>
> Shoudln't that be documented that even snapshot files can change?
>
> I guess
memtable_offheap_space_in_mb
Regards,
Mike Yeap
On Sun, May 29, 2016 at 6:18 PM, Bhuvan Rawal wrote:
> Hi,
>
> We are running a 6 Node cluster in 2 DC on DSC 3.0.3, with 3 Node each.
> One of the node was showing UNREACHABLE on other nodes in nodetool
> describecluster and on that node
Hi Paolo,
a) was there any large insertion done?
b) are there a lot of files in the saved_caches directory?
c) would you consider to increase the HEAP_NEWSIZE to, say, 1200M?
Regards,
Mike Yeap
On Fri, May 27, 2016 at 12:39 AM, Paolo Crosato <
paolo.cros...@targaubiest.com> wrote:
> H
Hi George, are you using NetworkTopologyStrategy as the replication
strategy for your keyspace? If yes, can you check the
cassandra-rackdc.properties of this new node?
https://issues.apache.org/jira/browse/CASSANDRA-8279
Regards,
Mike Yeap
On Wed, May 25, 2016 at 2:31 PM, George Sigletos
Repair is the default
for Cassandra 2.2 and later.
Regards,
Mike Yeap
On Wed, May 25, 2016 at 8:01 AM, Bryan Cheng wrote:
> Hi Luke,
>
> I've never found nodetool status' load to be useful beyond a general
> indicator.
>
> You should expect some small skew, as this
ch is just a Docker image where you need to manage that
all yourself (painfully)
--
--mike
cause I didn't use the -full option of the "nodetool rebuild".
Thanks!
Regards,
Mike Yeap
On Thu, May 19, 2016 at 4:03 PM, Ben Slater
wrote:
> Use nodetool listsnapshots to check if you have a snapshot - in default
> configuration, Cassandra takes snapshots for operations
Hi all, I would like to know, is there any way to rebuild a particular
column family when all the SSTable files for this column family are
missing? Say we do not have any backup of it.
Thank you.
Regards,
Mike Yeap
Emils,
We believe we've tracked it down to the following issue:
https://issues.apache.org/jira/browse/CASSANDRA-11302, introduced in 2.1.5.
We are running a build of 2.2.5 with that patch and so far have not seen
any more timeouts.
Mike
On Fri, Mar 4, 2016 at 3:14 AM, Emīls Šolmanis
Emils,
I realize this may be a big downgrade, but are you timeouts reproducible
under Cassandra 2.1.4?
Mike
On Thu, Feb 25, 2016 at 10:34 AM, Emīls Šolmanis
wrote:
> Having had a read through the archives, I missed this at first, but this
> seems to be *exactly* like what we're e
sting:
memtable_allocation_type: offheap_objects
memtable_flush_writers: 8
Cheers,
Mike
On Fri, Feb 19, 2016 at 1:46 PM, Nate McCall wrote:
> The biggest change which *might* explain your behavior has to do with the
> changes in memtable flushing between 2.0 and 2.1:
> https://issues.a
rites, batching (via Thrift
mostly) to 5 tables, between 6-1500 rows per batch.
Mike
On Thu, Feb 18, 2016 at 12:22 PM, Anuj Wadehra
wrote:
> Whats the GC overhead? Can you your share your GC collector and settings ?
>
>
> Whats your query pattern? Do you use secondary indexes, batches
ther reply that we've tracked it to something between
2.0.x and 2.1.x, so we are focusing on narrowing which point release it was
introduced in.
Cheers,
Mike
On Thu, Feb 18, 2016 at 3:33 AM, Alain RODRIGUEZ wrote:
> Hi Mike,
>
> What about the output of tpstats ? I imagine y
st on that earlier.
Thanks,
Mike
On Wed, Feb 10, 2016 at 2:51 PM, Mike Heffner wrote:
> Hi all,
>
> We've recently embarked on a project to update our Cassandra
> infrastructure running on EC2. We are long time users of 2.0.x and are
> testing out a move to version 2.2.5 running o
Jaydeep,
No, we don't use any light weight transactions.
Mike
On Wed, Feb 17, 2016 at 6:44 PM, Jaydeep Chovatia <
chovatia.jayd...@gmail.com> wrote:
> Are you guys using light weight transactions in your write path?
>
> On Thu, Feb 11, 2016 at 12:36 AM, Fabrice Faco
Jeff,
We have both commitlog and data on a 4TB EBS with 10k IOPS.
Mike
On Wed, Feb 10, 2016 at 5:28 PM, Jeff Jirsa
wrote:
> What disk size are you using?
>
>
>
> From: Mike Heffner
> Reply-To: "user@cassandra.apache.org"
> Date: Wednesday, February
Paulo,
Thanks for the suggestion, we ran some tests against CMS and saw the same
timeouts. On that note though, we are going to try doubling the instance
sizes and testing with double the heap (even though current usage is low).
Mike
On Wed, Feb 10, 2016 at 3:40 PM, Paulo Motta
wrote:
>
't see any msg that pointed to something obvious. Happy to provide any
more information that may help.
We are pretty much at the point of sprinkling debug around the code to
track down what could be blocking.
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
we are not using DTCS, but it
matches since the upgrade appeared to only drop fully expired sstables.
Mike
On Sat, Jul 18, 2015 at 3:40 PM, Nate McCall wrote:
> Perhaps https://issues.apache.org/jira/browse/CASSANDRA-9592 got
> compactions moving forward for you? This would explain th
'org.apache.cassandra.io.compress.LZ4Compressor'}
AND dclocal_read_repair_chance = 0.1
AND default_time_to_live = 0
AND gc_grace_seconds = 0
AND max_index_interval = 2048
AND memtable_flush_period_in_ms = 0
AND min_index_interval = 128
AND read_repair_chance = 0.0
AND speculative_retry = '99.0PERCENTILE';
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
Although I'm not sure what that means in practice. Will the counters be
99.99% accurate? How often will they be over- or under-counted?
Thanks, Mike.
a node is at full CPU (all other
cores are idling), so I assume I'm CPU bound on the node side. But why? What is the
node doing? Why does it take so long?
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
=Cache,scope=KeyCache,name=Capacity
The type of the attribute (Value) is java.lang.Object.
Is it possible to expose the gauge as a numeric type
instead of Object, or to work around this some other way, for example using a metrics
reporter, etc.?
Thanks a lot for any suggestion!
Best Regard!
Mike
to stick with
Thrift?
Mike
On Thu, Jul 17, 2014 at 8:27 PM, Tyler Hobbs wrote:
> For this type of query, you really want the tuple notation introduced in
> 2.0.6 (https://issues.apache.org/jira/browse/CASSANDRA-4851):
>
> SELECT * FROM CF WHERE key='X' AND (column1, col
column1>=1 AND column2>=3 AND column3>4
AND column1<=2;
but that is rejected with:
Bad Request: PRIMARY KEY part column2 cannot be restricted (preceding part
column1 is either not restricted or by a non-EQ relation)
Mike
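Spelled out, the tuple notation from CASSANDRA-4851 (2.0.6+) that Tyler refers to compares the clustering columns lexicographically as a single relation, which sidesteps the non-EQ restriction above. A minimal sketch using the column names from the failing query:
```
-- One multi-column relation instead of per-column inequalities.
SELECT * FROM my_ks.my_cf
WHERE key = 'X'
  AND (column1, column2, column3) >= (1, 3, 4);
```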
On Thu, Jul 17, 2014 at 6:37 PM, Michael Dykman wrote:
>
column1=1 AND column2=3 AND column3>4
AND column1<=2;
fails with:
DoGetMeasures: column1 cannot be restricted by both an equal and an inequal
relation
This is against Cassandra 1.2.16.
What is the proper way to perform this query?
Cheers,
Mike
--
Mike Heffner
Librato, Inc.
un 24, 2014 3:09 AM, "Mike Carter" wrote:
>
>> Hello!
>>
>>
>> I'm a beginner in C* and I'm quite struggling with it.
>>
>> I’d like to measure the performance of some Cassandra-Range-Queries. The
>> idea is to execute multidimensional
29 | 28 | 17 | 42 | 42 | 4 | 6 | 61 | 93
62693 | 1 | 26 | 48 | 15 | 22 | 73 | 94 | 86 | 4 | 66 | 63
488360 | 1 | 8 | 57 | 86 | 31 | 51 | 9 | 40 | 52 | 91 | 45
Mike
om streaming it seems that simply
restarting will inevitably hit this problem again.
Cheers,
Mike
--
Mike Heffner
Librato, Inc.
Hi Boole,
Have you tried chef? There is this cookbook for deploying cassandra:
http://community.opscode.com/cookbooks/cassandra
MikeA
On 21 November 2013 01:33, Boole.Z.Guo (mis.cnsh04.Newegg) 41442 <
boole.z@newegg.com> wrote:
> Hi all,
>
> Is there any open source software for automati
I am investigating Java Out of memory heap errors. So I created an .hprof
file and loaded it into Eclipse Memory Analyzer Tool which gave some
"Problem Suspects".
First one looks like:
One instance of "org.apache.cassandra.db.ColumnFamilyStore" loaded by
"sun.misc.Launcher$AppClassLoader
Thanks for the response Rob,
And yes, the relevel helped the bloom filter issue quite a bit, although it
took a couple of days for the relevel to complete on a single node (so if
anyone tried this, be prepared)
-Mike
Sent from my iPhone
On Sep 23, 2013, at 6:34 PM, Robert Coli wrote:
>
cause this fix in 1.0.11:
* fix 1.0.x node join to mixed version cluster, other nodes >= 1.1
(CASSANDRA-4195)
-Jeremiah
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
Is there anything that you can link that describes the pitfalls you mention? I'd
like a bit more information. Just for clarity's sake, are you recommending 1.0.9
-> 1.0.12 -> 1.1.12 -> 1.2.x? Or would 1.0.9 -> 1.1.12 -> 1.2.x suffice?
Regarding the placement strategy mentioned in a different p
.
MN
On 08/30/2013 12:15 PM, Robert Coli wrote:
On Fri, Aug 30, 2013 at 8:57 AM, Mike Neir <m...@liquidweb.com> wrote:
I'm faced with the need to update a 36 node cluster with roughly 25T of data
on disk to a version of cassandra in the 1.2.x series. While it seem
are no schema changes going on, the node should be able to just hop back into
the cluster without error and without transitioning through the "Joining" state.
--
Mike Neir
Liquid Web, Inc.
Infrastructure Administrator
s:comment-tabpanel#comment-13748998
Cheers,
Mike
On Sun, Aug 25, 2013 at 4:06 AM, Janne Jalkanen wrote:
> This on cass 1.2.8
>
> Ring state before decommission
>
> -- Address Load Owns Host ID
> TokenRack
> UN 10.0.0.1 3
What is the reason for this time difference? For both operations,
> what is time-consuming is the data streaming from (or to) other nodes, right?
> Thanks in advance.
>
> Att.
>
> *Rodrigo Felix de Almeida*
> LSBD - Universidade Federal do Ceará
> Project Manager
> MBA, CSM, CSPO, SCJP
>
--
Mike Heffner
Librato, Inc.
Aiman,
I believe that is one of the cases we added a check for:
https://github.com/librato/tablesnap/blob/master/tablesnap#L203-L207
Mike
On Thu, Jul 11, 2013 at 1:54 PM, Aiman Parvaiz wrote:
> Thanks for the info Mike, we ran in to a race condition which was killing
> table snap, I w
ur backups. We're using a slightly
modified version [1]. We currently backup every sst as soon as they hit
disk (tablesnap's inotify), but we're considering moving to a periodic
snapshot approach as the sst churn after going from 24 nodes -> 6 nodes is
quite high.
Mike
[1]: https://
db is >
TTL, ensure you remove all foo-hf-123-*.
I recommend taking a snapshot beforehand to be safe. ;-)
Mike
On Wed, Jul 10, 2013 at 8:09 AM, Theo Hultberg wrote:
> Hi,
>
> I think I remember reading that if you have sstables that you know contain
> only data that whose ttl has e
I'm curious because we are experimenting with a very similar configuration,
what basis did you use for expanding the index_interval to that value? Do
you have before and after numbers or was it simply reduction of the heap
pressure warnings that you looked for?
thanks,
Mike
On Tue, Jul 9,
indicate that the sending node is
limiting our streaming rate.
Mike
On Tue, Jul 2, 2013 at 3:00 PM, Mike Heffner wrote:
> Sankalp,
>
> Parallel sstableloader streaming would definitely be valuable.
>
> However, this ring is currently using vnodes and I was surprised to see
> th
from
multiple replicas across the az/rack configuration?
Mike
On Tue, Jul 2, 2013 at 1:53 PM, sankalp kohli wrote:
> This was a problem pre vnodes. I had several JIRA for that but some of
> them were voted down saying the performance will improve with vnodes.
> The main problem is that i
On Mon, Jul 1, 2013 at 10:06 PM, Mike Heffner wrote:
>
> The only changes we've made to the config (aside from dirs/hosts) are:
>
Forgot to include we've changed this as well:
-partitioner: org.apache.cassandra.dht.Murmur3Partitioner
+partitioner: org.apache.cassandra.dh
ansfers with rsync.
Any suggestions for what to adjust to see better streaming performance? 5%
of what a single rsync can do seems somewhat limited.
Thanks,
Mike
--
Mike Heffner
Librato, Inc.
eloper
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 28/02/2013, at 3:21 PM, Mike Koh wrote:
It has been suggested to me that we could save a fair amount of time and money
by taking a snapshot of only 1 replica (so every third node for most column
families). Assuming that we are oka
It has been suggested to me that we could save a fair amount of time and
money by taking a snapshot of only 1 replica (so every third node for
most column families). Assuming that we are okay with not having the
absolute latest data, does this have any possibility of working? I feel
like it s
h out-of-order within an sstable.)"
We recently upgraded from 1.1.2 to 1.1.9.
Does anyone know if an offline scrub is recommended to be performed when
switching from STCS->LCS after upgrading from 1.1.2?
Any insight would be appreciated,
Thanks,
-Mike
On 2/17/2013 8:57 PM, Wei Zhu wrote:
Hello Wei,
First thanks for this response.
Out of curiosity, what SSTable size did you choose for your usecase, and
what made you decide on that number?
Thanks,
-Mike
On 2/14/2013 3:51 PM, Wei Zhu wrote:
I haven't tried to switch compaction strategy. We started with LCS.
For us,
Another piece of information that would be useful is advice on how to
properly set the SSTable size for your usecase. I understand the
default is 5MB, a lot of examples show the use of 10MB, and I've seen
cases where people have set it as high as 200MB.
Any information is appreciated,
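For reference, the size is a compaction option on the table; a minimal sketch with a hypothetical table and a purely illustrative value:
```
-- sstable_size_in_mb is the LCS target sstable size (160 here is only an example).
ALTER TABLE my_ks.my_cf WITH compaction = {
  'class': 'LeveledCompactionStrategy',
  'sstable_size_in_mb': '160'
};
```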
cause updates to be mistakenly dropped for being old.
Also, make sure you are running with a gc_grace period that is high
enough. The default is 10 days.
Hope this helps,
-Mike
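A minimal sketch of setting it explicitly (hypothetical table name; 864000 seconds is the 10-day default mentioned above):
```
-- Tombstones must survive at least this long so every replica sees the
-- delete before it is purged.
ALTER TABLE my_ks.my_cf WITH gc_grace_seconds = 864000;
```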
On 2/15/2013 1:13 PM, Víctor Hugo Oliveira Molinar wrote:
hello everyone!
I have a column family filled with event
[d558])
10.0.8.24   us-east   1c   Up   Normal   90.72 GB   0.04%   Token(bytes[eaa8])
Any help would be appreciated, as if something is going drastically
wrong we need to go back to backups and revert back to 1.1.2.
Thanks,
-Mike
On 2/14/2013 8:32 AM
ing.
Given we use an RF of 3, and LOCAL_QUORUM consistency for everything,
and we are not seeing errors, something seems to be working correctly.
Any idea what is going on above? Should I be alarmed?
-Mike
nd, but I haven't
found too much on what else needs to be done after the schema change.
I did these tests with Cassandra 1.1.9.
Thanks,
-Mike
t for several days?
Thanks,
-Mike
On 2/10/2013 3:27 PM, aaron morton wrote:
I would do #1.
You can play with nodetool setcompactionthroughput to speed things up,
but beware nothing comes for free.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
n a staggered upgrade of the sstables over a number of days.
2) Upgrade one node at a time, running the cluster in a mixed
1.1.2->1.1.9 configuration for a number of days.
I would prefer #1, as with #2, streaming will not work until all the
nodes are upgraded.
I appreciate your thoughts,
-Mike
have started at the start of the second session.
Any hints?
-Mike
Thanks Sylvain. I should have scanned Jira first. Glad to see it's on the
todo list.
On Wed, Feb 6, 2013 at 12:24 AM, Sylvain Lebresne wrote:
> Not yet: https://issues.apache.org/jira/browse/CASSANDRA-4450
>
> --
> Sylvain
>
>
> On Wed, Feb 6, 2013 at 9:06 AM, M
a question mark in for timestamp failed as expected and I don't see
a method on the DataStax java driver BoundStatement for setting it.
Thanks in advance.
/Mike Sample
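For context, the shape of the statement being attempted, i.e. binding the write timestamp like any other value; it was rejected in the version discussed here, and CASSANDRA-4450 tracks adding support for it (hypothetical table/columns):
```
-- Bind marker for the write timestamp; not supported until CASSANDRA-4450 landed.
INSERT INTO my_ks.my_cf (id, value) VALUES (?, ?) USING TIMESTAMP ?;
```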
55,247 |
> 127.0.0.1 | 3644
>
>
> It reads from the secondary index and discards keys that are outside of
> the token range.
>
> Cheers
>
>
> -
> Aaron Morton
> Freelance Cassandra Developer
> New Zealand
>
> @aaronmorton
>
families (the former
makes sense, I'm just making sure).
-Mike
On 1/16/2013 11:08 AM, Jason Wee wrote:
Always check NEWS.txt; for instance, for Cassandra 1.1.3 you need to
run nodetool upgradesstables if your CF has counters.
On Wed, Jan 16, 2013 at 11:58 PM, Mike <mthero...@y
Has anyone had any gotchas recently that I should be aware of before
performing this upgrade?
In order to upgrade, is the only thing that needs to change are the JAR
files? Can everything remain as-is?
Thanks,
-Mike