Really wild guess : do you monitor I/O performance, and are you positive it
has stayed the same over the past year (network becoming a little busier,
hard drive a bit slower and so on) ?
Wild guess 2 : was any new 'monitoring' software (a log shipping agent for
instance) added on the box in the meantime ?
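For the first wild guess, if nothing is in place yet, a quick spot check with
iostat (from the sysstat package; the interval and count below are arbitrary)
already tells a lot about disk latency and utilisation :
  iostat -x 10 6    # extended per-device stats, every 10 seconds, 6 samples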
On 11 June 2
Hi,
I believe this rpm was built by DataStax, right ?
https://rpm.datastax.com/community/noarch/ is what you seem to be looking
for.
Otherwise the newest RPMs are here :
https://www.apache.org/dist/cassandra/redhat/22x/
On 5 June 2018 at 16:21, ukevg...@gmail.com wrote:
> Hi everybody,
>
> I am not ab
Hi,
You need to manually force a major compaction if you do not mind ending up
with one big SSTable (nodetool compact)
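For instance (keyspace and table names below are just placeholders) :
  nodetool compact my_keyspace my_table    # major compaction on one table
Without arguments it compacts every keyspace, which is usually not what you
want.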
On 31 May 2018 at 11:07, onmstester onmstester wrote:
> Hi,
> I've deleted 50% of my data row by row; now disk usage of Cassandra data is
> more than 80%.
> The gc_grace of the table was de
Hi Mikhail,
Could you please provide :
- your cluster version/topology (number of nodes, CPU, RAM available, etc.)
- what kind of underlying storage you are using
- cfstats output using the -H option, because I'm never sure I'm converting bytes => GB correctly
You are storing 1 TB per node, so a long running compaction is not r
Technology Co., Ltd.
>
> Virtue Intelligent Network Ltd, co.
>
> Add: 2003,20F No.35 Luojia creative city,Luoyu Road,Wuhan,HuBei
>
> Mob: +86 13797007811|Tel: + 86 27 5024 2516
>
>
>
> *From:* Nicolas Guyomar
> *Sent:* 20 April 2018 15:44
> *To:* user@cassandra.apache.org
> *Subject:*
Hi,
You can have a look at
https://github.com/apache/cassandra/blob/trunk/NEWS.txt which lists every
modification / piece of advice for upgrading between each C* version
On 20 April 2018 at 09:25, Xiangfei Ni wrote:
> By the way, is there official documentation for online upgrade of Cassandra
> from 3.9 to
Hi,
I receive similar messages from time to time, and I'm using Gmail ;) I
believe I have never missed a mail on the ML, so you can safely ignore this
message
On 13 April 2018 at 15:06, Jacques-Henri Berthemet <
jacques-henri.berthe...@genesys.com> wrote:
> Hi,
>
>
>
> I’m getting bounce messag
..can you share that section?
>
>
> On Wednesday, April 11, 2018, Abdul Patel wrote:
>
>> Thanks, this is perfect
>>
>> On Wednesday, April 11, 2018, Nicolas Guyomar
>> wrote:
>>
>>> Sorry, I should have given you this link instead :
>>> h
Sorry, I should have given you this link instead :
https://github.com/apache/cassandra/blob/trunk/NEWS.txt
You'll find everything you need IMHO
On 11 April 2018 at 17:05, Abdul Patel wrote:
> Thanks.
>
> Is the upgrade process straightforward? Do we have any documentation to
> upgrade?
>
>
> On
Hi,
New features can be found here :
https://github.com/apache/cassandra/blob/cassandra-3.11/CHANGES.txt
On 11 April 2018 at 16:51, Jonathan Haddad wrote:
> Move to the latest 3.0, or if you're feeling a little more adventurous,
> 3.11.2.
>
> 4.0 discussion is happening now, nothing is decided
Hi Lucas,
There are usually some logs in system.log at node startup regarding JMX
initialization; are those OK ?
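A quick way to pull them out, assuming the default package install log
location :
  grep -i jmx /var/log/cassandra/system.log | head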
On 5 April 2018 at 22:13, Lucas Benevides
wrote:
> Dear community members,
>
> I have just upgraded my Cassandra from version 3.11.1 to 3.11.2. I kept my
> previous configuration fil
Hi Shalom,
You might want to compress on the application side before inserting into
Cassandra, using the algorithm of your choice, based on the compression
ratio and speed you find acceptable for your use case
On 4 April 2018 at 14:38, shalom sagges wrote:
> Thanks DuyHai!
>
> I'm using the defau
Hi,
You also have 62 pending compactions at the same time, which is odd for
such a small dataset IMHO. Are you triggering 'nodetool compact' with some
kind of cron you may have forgotten after a test, or something else ?
Do you have any monitoring in place ? If not, you could let some 'dstat
-tnrvl 10
Hi,
This might be a compaction which is running, have you checked that ?
On 9 March 2018 at 11:29, Yasir Saleem wrote:
> Hi Team,
>
> we are facing an issue of uneven data usage across the Cassandra disks,
> specifically disk03 in our case; however, all the disks are consuming
> around 60% of space b
I once got this kind of problem and it was exactly what Chris explained.
Could you double check that you do not have 2 versions of Cassandra on this
remote host, with nodetool pointing to the old one ?
On 6 March 2018 at 17:17, onmstester onmstester wrote:
> On my PC I've the exact same vers
.20.10
/10.1.21.10
/10.1.22.10
localhost/127.0.0.1
This might be considered a "bug", or a nice thing to fix by just ignoring
empty hosts, don't you think ?
Have a nice day
On 2 March 2018 at 11:14, Nicolas Guyomar wrote:
> Hi Marco,
>
> Could that be because your seed list h
Hi Marco,
Could that be because your seed list has an extra comma at the end of the
line, thus being interpreted by default as localhost by Cassandra ? And
because you are listening on the node IP, localhost is not reachable (need
to check the code to be sure)
Here => seeds: '10.1.20.10,10.1.21.10,
Hi,
With
org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency and
OneMinuteRate you can get such a metric.
As for the state of the request with regard to the other nodes, I do not
think you can get that over JMX IMHO (this is available using TRACING per
request)
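For instance with jmxterm (jar name, host and port are placeholders for your
own setup) :
  echo "get -b org.apache.cassandra.metrics:type=ClientRequest,scope=Read,name=Latency OneMinuteRate" \
    | java -jar jmxterm-uber.jar -l localhost:7199 -n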
On 1 March 2018
Works for me : https://issues.apache.org/jira/browse/CASSANDRA-14128
On 1 March 2018 at 10:36, Kenneth Brotman
wrote:
> The link for 14128 doesn’t work and I can’t find it anywhere.
>
>
>
> Kenneth Brotman
>
>
>
> *From:* kurt greaves [mailto:k...@instaclustr.com]
> *Sent:* Wednesday, February 2
Hi everyone,
I'm trying to find information on the Chunk Cache, and the "only" thing I
found so far is the Jira :
https://issues.apache.org/jira/browse/CASSANDRA-5863 where this
functionality was added.
I'm wondering if this is something to be adjusted when it's full ? Are
there some rule of thum
Was Jon's blog post
https://academy.datastax.com/planet-cassandra/blog/cassandra-summit-recap-diagnosing-problems-in-production
relocated somewhere ?
On 27 February 2018 at 16:34, Kenneth Brotman
wrote:
> One presentation that I hope can get updated is Jon Haddad’s very thorough
> presentation
Hi,
Adding the java-driver ML for further questions, because this does look like
a bug.
Are you able to reproduce it in a clean environment using the same C*
version and driver version ?
On 27 February 2018 at 10:05, Abhishek Kumar Maheshwari <
abhishek.maheshw...@timesinternet.in> wrote:
> Hi Al
I find DIFFERENCE to be working for dropped mutations, which IMHO work the
same way as the Hint metrics.
*select difference(sum("Dropped_Count")) FROM "cassandraDroppedMessage"
group by host* is valid when I check against nodetool over a period of time.
Not sure what is not working on your side
On 26 Feb
Hi,
Before looking for a sizing, have you tried application-side compression
before inserting your data ? This paper is really interesting :
https://aaltodoc.aalto.fi/bitstream/handle/123456789/29099/master_Burman_Michael_2017.pdf?sequence=1
For time series use cases this is a major stor
en care about
> the cleanup afterwards one by one.
>
> regards,
> Jürgen
>
> 2018-02-20 13:56 GMT+01:00 Nicolas Guyomar :
>
>> Hi Jurgen,
>>
>> stream_throughput_outbound_megabits_per_sec is the "given total
>> throughput in Mbps", so
Hi Jurgen,
stream_throughput_outbound_megabits_per_sec is the "given total throughput
in Mbps", so it does limit the "concurrent throughput" IMHO; is it not what
you are looking for?
The only limits I can think of are :
- the number of connections between every node and the one bootstrapping
- the number o
Hi,
Such a quick failure might indicate that you are trying to start Cassandra
with the latest JDK, which is not yet supported.
You should have a look at /var/log/system.log
On 13 February 2018 at 16:03, Irtiza Ali wrote:
> Hello everyone:
>
> Whenever I try to run the Cassandra it stop wit
make sure that throwing away a person's data
> encryption key will make it impossible to restore personal data and
> impossible to resolve any pseudonyms associated with that person.
>
>
> On 09.02.18 17:10, Nicolas Guyomar wrote:
>
> Hi everyone,
>
> Because of GDPR w
Hi everyone,
Because of GDPR we really face the need to support “Right to Be Forgotten”
requests => https://gdpr-info.eu/art-17-gdpr/ stating that *"the
controller shall have the obligation to erase personal data without undue
delay"*
Because I usually meet customers that do not have that much c
Hi,
There is no piece of code in Cassandra that would remove this folder. You
should start looking elsewhere, like other people mentioned (Chef, Ansible
and so on). Good luck
On 8 February 2018 at 22:54, test user wrote:
> Does anyone have more inputs on the missing hints folder, rather why i
urrent compaction throughput: 0 MB/s
>
> # nodetool getconcurrentcompactors
> Current concurrent compactors in the system is:
> 16
>
>
> Which memtable_allocation_type are you using ?
>
>
> # grep memtable_allocation_type /etc/cassandra/conf/cassandra.yaml
> memtabl
Hi Jurgen,
It does feel like some OOM during bootstrap from a previous C* v2, but that
should be fixed in your version.
Do you know how many SSTables this new node is supposed to receive ?
Just a wild guess : it may have something to do with compaction not
keeping up, because every other node is st
February 2018 at 11:40, milenko markovic
wrote:
> qb3@qb3-Latitude-E6430-3:~$ cassandra -v
> 3.11.1
> qb3@qb3-Latitude-E6430-3:~$ ps -ef | grep cassandra
> qb3 7017 6859 0 11:40 pts/200:00:00 grep --color=auto cassandra
>
>
>
> On Wednesday, 7 February 201
Hi Milenko,
Can you check the JMX configuration in the jvm.options file and make sure
you can log in without user/pwd ?
Also, the node might be listening on a specific network interface; can you
output 'netstat -an | grep 7199' for us ? (assuming your JMX port config
is 7199)
On 7 February 2018 at 1
Your row hit rate is 0.971, which is already very high; IMHO there is
"nothing" left to do here if you can afford to store your entire dataset in
memory.
Local read latency: 0.030 ms already seems good to me. What makes you think
that you can achieve more with the relatively "small" box you are using
Hi,
Could you explain a bit more what you are trying to achieve, please ?
Performance tuning is by far the most complex problem we have to deal with,
and there are a lot of configuration changes that can be made on a C*
cluster.
When you do performance tuning, do not forget that you need to warm up
Hi,
You have an invalid YAML file: /etc/cassandra/cassandra.yaml. I suppose
what you just changed is not YAML compatible; pay attention to the space
between the colon and the value.
You can use any tool like https://jsonformatter.org/yaml-formatter to help
fix this issue
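If Python with PyYAML happens to be installed on the box (just an assumption
about what is available there), you can also check it locally :
  python -c "import yaml; yaml.safe_load(open('/etc/cassandra/cassandra.yaml'))" && echo "valid yaml"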
On 5 February 2018 at 09:28, mile
Hi,
Not sure if StorageService should be accessed, but you can check node
movement here :
'org.apache.cassandra.db:type=StorageService/LeavingNodes',
'org.apache.cassandra.db:type=StorageService/LiveNodes',
'org.apache.cassandra.db:type=StorageService/UnreachableNodes',
'org.apache.cassandra.db:ty
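For instance, to read one of those attributes with jmxterm (jar name and
port are placeholders) :
  echo "get -b org.apache.cassandra.db:type=StorageService LiveNodes" \
    | java -jar jmxterm-uber.jar -l localhost:7199 -n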
2018 at 12:03, Joel Samuelsson
wrote:
> It was indeed created with C* 1.X
> Do you have any links or otherwise on how I would add the column4? I don't
> want to risk destroying my data.
>
> Best regards,
> Joel
>
> 2018-01-18 11:18 GMT+01:00 Nicolas Guyomar :
>
Hi Joel,
You cannot alter a table's primary key.
You can, however, alter your existing table to add only column4, using cqlsh
and CQL, even if this table was created back with C* 1.X for instance
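Something along these lines, where the keyspace/table names and the column
type are placeholders to adapt to your schema :
  cqlsh -e "ALTER TABLE my_keyspace.my_table ADD column4 text;"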
On 18 January 2018 at 11:14, Joel Samuelsson
wrote:
> So to rephrase that in CQL terms I have a table li
Thank you Thomas for starting this thread, I'm having exactly the same
issue on AWS EC2 RHEL-7.4_HVM-20180103-x86_64-2-Hourly2-GP2 (ami-dc13a4a1)
I was starting to bang my head on my desk !
So I'll try to downgrade back to 152 then !
On 18 January 2018 at 08:34, Steinmaurer, Thomas <
thomas.ste
Hi,
It might really be a long shot, but I thought a UserDefinedCompaction
triggered over JMX on a single SSTable might remove data the node does not
own (to answer the "Any other way to re-write SSTables with data a node
owns after a cluster scale out" part of your question).
I might be wrong though
On
Hi,
I believe you can use Logstash to parse C* logs, using some grok patterns
like these :
https://gist.github.com/ibspoof/917a888adb08a819eab7163b97e018cb so that
you gain some nice insight into what your cluster is really doing !
It feels more "native" than adding some jar to the C* lib directory in order to cha
> We are using this to store audit data of our primary SQL Server DB. Our
> primary key consists of the original table name, column name and the
> month+year combination.
>
>
> I just realized that a script had managed to sneak in more than 100
> million rows on the same
Hi Dipan,
This looks like a really unbalanced data model; you have some very wide
rows !
Can you share your model and explain a bit what you are storing in this
table ? Your partition key might not be appropriate
On 20 December 2017 at 09:43, Dipan Shah wrote:
> Hello Kurt,
>
>
> I think I m
Hi Amit,
This is way too much data per node. The official recommendation is to try to
stay below 2 TB per node; I have seen nodes up to 4 TB, but then maintenance
gets really complicated (backup, bootstrap, streaming for repair, etc.)
Nicolas
On 15 December 2017 at 15:01, Amit Agrawal
wrote:
> Hi,
Hi Mickael,
Partitions are related to the table they exist in, so in your case, you are
targeting 2 partitions in 2 different tables.
Therefore, IMHO, you will only get atomicity using your batch statement
On 11 December 2017 at 15:59, Mickael Delanoë wrote:
> Hello,
>
> I have a question regard
Quick follow up : triggering a repair did the trick, sstables are starting
to get compacted.
Thank you
On 13 November 2017 at 15:53, Nicolas Guyomar
wrote:
> Hi,
>
> I'm using default "nodetool repair" in 3.0.13 which I believe is full by
> default. I'm no
r.
>
> On Mon, Nov 13, 2017 at 3:28 PM Jeff Jirsa wrote:
>
>> Running incremental repair puts sstables into a “repaired” set (and an
>> unrepaired set), which results in something similar to what you’re
>> describing.
>>
>> Were you running / did you run incre
Hi everyone,
I'm facing quite a strange behavior of STCS on 3.0.13: the strategy seems
to have "forgotten" about old SSTables, and started a completely new cycle
from scratch, leaving the old SSTables on disk untouched.
Something happened on Nov 10 on every node, which resulted in all those
ssta
Hi Andrea,
Do you have the error using the builder ?
PoolingOptions poolingOptions = new PoolingOptions();
poolingOptions
    .setMaxRequestsPerConnection(HostDistance.LOCAL, 32768)
    .setMaxRequestsPerConnection(HostDistance.REMOTE, 1);
Builder builder = Cluster.builder();
builder.addCon
Hi,
OneMinuteRate is the mean rate of writes per second over a one-minute bucket
of data, AFAIK.
You can find latencies on every attribute whose name does not end with
"Rate"
On 2 November 2017 at 18:10, AI Rumman wrote:
> Hi,
>
> I am trying to calculate the Read/second and Write/Second in my Cassandra
>
Hi,
I believe that as long as you use the -local option in the other DC this
would be safe, but repairing a DC while bootstrapping a new node in it does
not seem OK AFAIK.
On 24 October 2017 at 14:08, Peng Xiao <2535...@qq.com> wrote:
> Hi there,
>
> Can we add a new node (bootstrap) and run repair on
Hi Thomas,
AFAIK, temporarily reading at LOCAL_QUORUM/QUORUM until nodetool repair is
finished is the way to go. You can still disable binary/thrift on the node
to "protect" it from acting as a coordinator, and let it complete its repair
quietly, but I'm not sure that would make such a huge difference in
Hi David,
Last time I did such an upgrade, I got stuck on 2.1.x because of OpsCenter
5.2 :
http://docs.datastax.com/en/landing_page/doc/landing_page/compatibility.html#compatibility__opsc-compatibility
This might have changed since, but I do not think C* 2.2 will work with
OpsCenter 5.2
If you a
Hi Paul,
This might be a long shot, but some repairs might fail to clear their
snapshots (not sure if it's still the case with C* 3.7, however; I had the
problem on the 2.X branch).
What does nodetool listsnapshots indicate ?
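For instance (the snapshot tag is hypothetical, use whatever listsnapshots
reports) :
  nodetool listsnapshots
  nodetool clearsnapshot -t <leftover_snapshot_tag>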
On 21 September 2017 at 05:49, kurt greaves wrote:
> repair does overstream
Wrong copy/paste !
Looking at the code, it should do nothing :
// look up the sstables now that we're on the compaction executor, so we
don't try to re-compact
// something that was already being compacted earlier.
On 4 September 2017 at 13:54, Nicolas Guyomar
wrote:
> You
>>>
>>> AND compaction = {'class': 'org.apache.cassandra.db.compa
>>> ction.SizeTieredCompactionStrategy', 'max_threshold': '32',
>>> 'min_threshold': '12', 'tombstone_threshold': '0.1',
9 Validation gps
> gpsfullwithstate 472.4 GiB 5.32 TiB bytes 8.67%
>
> what does it mean? the difference between Validation and Compaction
>
>
> On 1 September 2017, at 8:36 PM, Nicolas Guyomar wrote:
>
> Hi,
>
> Well, the command you are using works for me on 3.0.9, I do not have
[RMI TCP Connection(1714)-127.0.0.1] 2017-09-01 17:02:42,516
> CompactionManager.java:704 - Schema does not exist for file
> mc-151276-big-Data.db. Skipping.
>
>
> On 1 September 2017, at 4:54 PM, Nicolas Guyomar wrote:
>
> You should have a log coming from the CompactionManager (in cassandra
>
Whoops, sorry, I misled you with the Cassandra 2.1 behavior; you were right
giving your sstable full path. What kind of log do you get when you
trigger the compaction with the full path ?
On 1 September 2017 at 11:30, Nicolas Guyomar
wrote:
> Well, not sure why you reached a memory usage li
27.0.0.1] 2017-09-01 17:02:42,516
> CompactionManager.java:704 - Schema does not exist for file
> mc-151276-big-Data.db. Skipping.
>
>
> On 1 September 2017, at 4:54 PM, Nicolas Guyomar wrote:
>
> You should have a log coming from the CompactionManager (in cassandra
> system.log) when you try the comma
>run -b org.apache.cassandra.db:type=CompactionManager
> forceUserDefinedCompaction mc-100963-big-Data.db
> #calling operation forceUserDefinedCompaction of mbean
> org.apache.cassandra.db:type=CompactionManager
> #operation returns:
> null
>
>
>
>
> On 1 September 2017, at 3:
Hi,
Last time I used forceUserDefinedCompaction, I got myself a headache
because I was trying to use a full path like you're doing, but in fact it
just needs the SSTable file name as a parameter
Can you just try :
echo "run -b org.apache.cassandra.db:type=CompactionManager
forceUserDefinedCompaction mc-10096
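For reference, a full invocation looks something like this (the SSTable name
and the jmxterm jar name are placeholders, use your own) :
  echo "run -b org.apache.cassandra.db:type=CompactionManager forceUserDefinedCompaction mc-12345-big-Data.db" \
    | java -jar jmxterm-uber.jar -l localhost:7199 -n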
ok.com/LivePersonInc> We Create Meaningful Connections
>
>
>
> On Tue, Aug 29, 2017 at 2:41 PM, Nicolas Guyomar <
> nicolas.guyo...@gmail.com> wrote:
>
>> Hi Shalom,
>>
>> AFAIK, you are completely safe with prepared statement, there are no
>> caveat
Hi Shalom,
AFAIK, you are completely safe with prepared statements; there are no
caveats to using them, and you will get better performance.
Make sure to only prepare them once ;)
On 29 August 2017 at 13:41, Matija Gobec wrote:
> I don't see any disadvantages or warning signs. You will see a perf
>> Thanks for your reply..
>>
>> On Tue, May 23, 2017 at 7:40 PM, Nicolas Guyomar <
>> nicolas.guyo...@gmail.com> wrote:
>>
>>> Hi,
>>>
>>> If you were to know the batch size on client side to make sure it does
>>> not get above the 5k
Hi,
If you knew the batch size on the client side well enough to make sure it
does not get above the 5kb limit, so that you could "limit the number of
statements in a batch", I would suspect you do not need a batch at all,
right ? See
https://inoio.de/blog/2016/01/13/cassandra-to-batch-or-not-to-batch/
As for
Hi Xihui,
I was looking for this documentation as well, but I believe DataStax removed
it, and it is not available yet on the Apache website.
As far as I remember, an intermediate version was needed if the C* version
was < 2.1.7.
You should be safe starting from 2.2.6, but testing the upgrade on a
dedicated p
Hi Kunal,
Timeouts usually occur in the client (e.g. cqlsh); it does not mean that the
truncate operation was interrupted.
Have you checked that you have no old snapshots (automatic snapshots for
instance) that you could get rid of to reclaim some space ?
On 21 April 2017 at 11:27, benjamin roth
Hi Anuja,
What you are looking for is provided as part of DSE :
https://docs.datastax.com/en/datastax_enterprise/5.0/datastax_enterprise/sec/auditEnabling.html
On 1 April 2017 at 20:15, Vladimir Yudovin wrote:
> Hi anuja,
>
> I don't thinks there is a way to do this without creating custom Cas
?
>
> tia,
> rouble
>
> On Wed, Feb 15, 2017 at 4:53 AM, Nicolas Guyomar <
> nicolas.guyo...@gmail.com> wrote:
>
>> Hi Rouble,
>>
>> I usually have to read javadoc in java driver to get my ideas straight
>> regarding exception handling.
>&
Hi Rouble,
I usually have to read the javadoc in the Java driver to get my ideas
straight regarding exception handling.
You can find information by reading :
http://docs.datastax.com/en/drivers/java/3.1/com/datastax/driver/core/policies/RetryPolicy.html
and for instance
http://docs.datastax.com/en/drivers