Re: apache-cassandra 2.2.8 rpm

2018-06-06 Thread ukevgeni



On 2018/06/05 16:07:51, Michael Shuler  wrote: 
> There is no Apache Cassandra RPM for 2.2.8. If there were, it would be 
> basically identical to the Datastax package anyway. The differences 
> would be package name and a dependency on python 2.7 in the spec for 
> Apache Cassandra. (I used to maintain the Datastax Community packages 
> and currently build the Apache Cassandra ones)
> 
> The Apache Cassandra project started building RPMS with version 2.2.10 
> in the 2.2 series. The RPM packages pick up where Datastax left off 
> (although a version or two may have been missed). If you need to stay on 
> 2.2.8, just use the one you're running now, it's fine.
> 
> My suggestion would be the same, if you are planning on upgrading anyway 
> - install the latest 2.2 series RPM package, which is currently 2.2.12, 
> and follow the NEWS.txt notes as with any upgrade.
> 
> Very large warning: test your upgrade on dev/staging cluster. Back up 
> configs/data, set up Apache Cassandra RPM repo config, drop the Datastax 
> one, and upgrade to the latest in your series. Handling for the myriad of 
> possible package name conflicts was not added to the Apache Cassandra RPMs, 
> so upgrade with care - the Apache Cassandra package is not going to 
> automagically remove the old community packages, so you'd need to do the 
> removal yourself.
> 
> Hope that helps!
> 
> -- 
> Kind regards,
> Michael
> 
> 
> On 06/05/2018 10:00 AM, ukevg...@gmail.com wrote:
> > 
> > 
> > On 2018/06/05 14:56:24, Carlos Rolo  wrote:
> >> I would recommend migrating to a higher version of Apache Cassandra, since
> >> Datastax always pushes some extra patches into their distribution. So I would
> >> go from 2.2.8 to at least 2.2.9. Since it's a minor upgrade, I would read
> >> https://github.com/apache/cassandra/blob/cassandra-2.2/NEWS.txt and upgrade
> >> to 2.2.12.
> >>
> >> *Carlos Rolo* | Open Source Consultant
> >> *m* +351 918 918 100
> >> r...@pythian.com   *www.pythian.com*
> >>
> >> On Tue, Jun 5, 2018 at 3:49 PM, ukevg...@gmail.com 
> >> wrote:
> >>
> >>>
> >>>
> >>> On 2018/06/05 14:28:20, Nicolas Guyomar 
> >>> wrote:
>  Hi,
> 
>  I believe this RPM was built by Datastax, right?
>  https://rpm.datastax.com/community/noarch/ is what you seem to be looking for.
>  Otherwise, the newest RPMs are here:
>  https://www.apache.org/dist/cassandra/redhat/22x/
> 
>  On 5 June 2018 at 16:21, ukevg...@gmail.com  wrote:
> 
> > Hi everybody,
> >
> > I am not able to find an RPM package for apache-cassandra 2.2.8
> >
> > Is there anyone who can share a link? I really couldn't find it.
> >
> > Thank you
> >
> > Ev
> >
> > -
> > To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> > For additional commands, e-mail: user-h...@cassandra.apache.org
> >
> >
> >>> I am trying to migrate from Datastax to Apache Cassandra.
> >>> I already have Datastax 2.2.8 installed and am just trying to migrate to
> >>> Apache Cassandra 2.2.8.
> >>>
> >>> -
> >>> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> >>> For additional commands, e-mail: user-h...@cassandra.apache.org
> >>>
> >>>
> >>
> > I will do that if I can't find 2.2.8.
> > Does anybody have an apache-cassandra 2.2.8 package? It would save me 3 months.
> > 
> > Thank you,
> > 
> > -
> > To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> > For additional commands, e-mail: user-h...@cassandra.apache.org
> > 
> 
> 
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
> 
> 
Finally, can I run mixed Datastax and Apache nodes of the same version in the
same cluster?
Thank you for all your help.
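
For reference, here is a minimal sketch of the repo swap Michael describes
above. The repo file path and the Datastax community package names (dsc22,
cassandra22) are assumptions - check rpm -qa | grep -i cassandra and adjust:

  # back up configs/data first, then drop the Datastax repo (path is an assumption)
  sudo rm /etc/yum.repos.d/datastax.repo
  # add the Apache Cassandra 2.2.x repo (URL from earlier in this thread)
  cat <<'EOF' | sudo tee /etc/yum.repos.d/cassandra.repo
  [cassandra]
  name=Apache Cassandra
  baseurl=https://www.apache.org/dist/cassandra/redhat/22x/
  gpgcheck=1
  gpgkey=https://www.apache.org/dist/cassandra/KEYS
  EOF
  # remove the old community packages yourself, then install the Apache one
  sudo yum remove dsc22 cassandra22
  sudo yum install cassandra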


-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Repair slow, "Percent repaired" never updated

2018-06-06 Thread Martin Mačura
P.S.: Here's a corresponding log from the second node:

INFO  [AntiEntropyStage:1] 2018-06-04 13:37:16,409 Validator.java:281
- [repair #afc2ef90-67c0-11e8-b07c-c365701888e8] Sending completed
merkle tree to /14.0.53.234 for asm_log.event
INFO  [StreamReceiveTask:30] 2018-06-04 14:14:28,989
StreamResultFuture.java:187 - [Stream
#6244fd50-67ff-11e8-b07c-c365701888e8] Session with /14.0.53.234 is
complete
INFO  [StreamReceiveTask:30] 2018-06-04 14:14:28,990
StreamResultFuture.java:219 - [Stream
#6244fd50-67ff-11e8-b07c-c365701888e8] All sessions completed
INFO  [AntiEntropyStage:1] 2018-06-04 14:14:29,000
ActiveRepairService.java:452 - [repair
#af1aefc0-67c0-11e8-b07c-c365701888e8] Not a global repair, will not
do anticompaction


Why is there no anticompaction if it's an incremental repair?

We currently have two datacenters; this concerns the second one, which we
recently brought up (with nodetool rebuild). We cannot do a repair across
datacenters, because nodes in the old DC would run out of disk space.
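
For what it's worth, a minimal sketch of how to check whether SSTables
actually got marked repaired (the keyspace name and data path below are
placeholders - adjust to your setup):

  # per-table figure reported as "Percent repaired"
  nodetool tablestats <keyspace>.event | grep -i "percent repaired"
  # repaired-at timestamp of an individual SSTable (0 means unrepaired)
  sstablemetadata /var/lib/cassandra/data/<keyspace>/event-<id>/mc-<gen>-big-Data.db | grep -i "repaired at"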

Regards,

Martin



On Tue, Jun 5, 2018 at 6:06 PM, Martin Mačura  wrote:
> Hi,
> we're on cassandra 3.11.2, and we're having some issues with repairs.
> They take ages to complete, and some time ago the incremental repair
> stopped working - that is, SSTables are not being marked as repaired,
> even though the repair reports success.
>
> Running a full or incremental repair does not make any difference.
>
> Here's a log of a typical repair (omitted a lot of 'Maximum memory
> usage' messages):
>
> INFO  [Repair-Task-12] 2018-06-04 06:29:50,396 RepairRunnable.java:139
> - Starting repair command #11 (af1aefc0-67c0-11e8-b07c-c365701888e8),
> repairing keyspace prod with repair options (parallelism: parallel,
> primary range: false, incremental: true, job threads: 1,
> ColumnFamilies: [event], dataCenters: [DC1], hosts: [], # of ranges:
> 1280, pull repair: false)
> INFO  [Repair-Task-12] 2018-06-04 06:29:51,497 RepairSession.java:228
> - [repair #afc2ef90-67c0-11e8-b07c-c365701888e8] new session: will
> sync /14.0.53.234, /14.0.52.115 on range [...] for asm_log.[event]
> INFO  [Repair#11:1] 2018-06-04 06:29:51,776 RepairJob.java:169 -
> [repair #afc2ef90-67c0-11e8-b07c-c365701888e8] Requesting merkle trees
> for event (to [/14.0.52.115, /14.0.53.234])
> INFO  [ValidationExecutor:10] 2018-06-04 06:31:13,859
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB),
> cannot allocate chunk of 1.000MiB
> WARN  [PERIODIC-COMMIT-LOG-SYNCER] 2018-06-04 06:32:01,385
> NoSpamLogger.java:94 - Out of 14 commit log syncs over the past
> 134.02s with average duration of 34.90ms, 2 have exceeded the
> configured commit interval by an average of 60.66ms
> ...
> INFO  [ValidationExecutor:10] 2018-06-04 13:31:19,011
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB),
> cannot allocate chunk of 1.000MiB
> INFO  [AntiEntropyStage:1] 2018-06-04 13:37:17,357
> RepairSession.java:180 - [repair
> #afc2ef90-67c0-11e8-b07c-c365701888e8] Received merkle tree for event
> from /14.0.52.115
> INFO  [ValidationExecutor:10] 2018-06-04 13:46:19,281
> NoSpamLogger.java:91 - Maximum memory usage reached (512.000MiB),
> cannot allocate chunk of 1.000MiB
> INFO  [IndexSummaryManager:1] 2018-06-04 13:57:18,772
> IndexSummaryRedistribution.java:76 - Redistributing index summaries
> INFO  [AntiEntropyStage:1] 2018-06-04 13:58:21,971
> RepairSession.java:180 - [repair
> #afc2ef90-67c0-11e8-b07c-c365701888e8] Received merkle tree for event
> from /14.0.53.234
> INFO  [RepairJobTask:4] 2018-06-04 13:58:39,780 SyncTask.java:73 -
> [repair #afc2ef90-67c0-11e8-b07c-c365701888e8] Endpoints /14.0.52.115
> and /14.0.53.234 have 15406 range(s) out of sync for event
> INFO  [RepairJobTask:4] 2018-06-04 13:58:39,781 LocalSyncTask.java:71
> - [repair #afc2ef90-67c0-11e8-b07c-c365701888e8] Performing streaming
> repair of 15406 ranges with /14.0.52.115
> INFO  [RepairJobTask:4] 2018-06-04 13:59:49,075
> StreamResultFuture.java:90 - [Stream
> #6244fd50-67ff-11e8-b07c-c365701888e8] Executing streaming plan for
> Repair
> INFO  [StreamConnectionEstablisher:3] 2018-06-04 13:59:49,076
> StreamSession.java:266 - [Stream
> #6244fd50-67ff-11e8-b07c-c365701888e8] Starting streaming to
> /14.0.52.115
> INFO  [StreamConnectionEstablisher:3] 2018-06-04 13:59:49,089
> StreamCoordinator.java:264 - [Stream
> #6244fd50-67ff-11e8-b07c-c365701888e8, ID#0] Beginning stream session
> with /14.0.52.115
> INFO  [STREAM-IN-/14.0.52.115:7000] 2018-06-04 14:01:14,423
> StreamResultFuture.java:173 - [Stream
> #6244fd50-67ff-11e8-b07c-c365701888e8 ID#0] Prepare completed.
> Receiving 321 files(6.238GiB), sending 318 files(6.209GiB)
> WARN  [Service Thread] 2018-06-04 14:12:15,578 GCInspector.java:282 -
> ConcurrentMarkSweep GC in 4095ms.  CMS Old Gen: 4086661264 ->
> 1107272664; Par Eden Space: 503316480 -> 0; Par Survivor Space:
> 21541464 -> 0
> ...
> WARN  [GossipTasks:1] 2018-06-04 14:12:15,677 FailureDetector.java:288
> - Not marking nodes down due to local

Re: Single Host: Fix "Unknown CF" issue

2018-06-06 Thread Evelyn Smith
Hi Michael,

So I looked at the code, here are some stages of your error message:
1. at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:292) 
[apache-cassandra-3.11.0.jar:3.11.0]
At this step Cassandra is running through the keyspaces in its schema, 
turning off compactions for all tables before it starts replaying the commit 
log (so it isn't an issue with the commit log).
2. at org.apache.cassandra.db.Keyspace.open(Keyspace.java:127) 
~[apache-cassandra-3.11.0.jar:3.11.0]
Loading the keyspace related to the column family that is erroring out.
3. at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:324) 
~[apache-cassandra-3.11.0.jar:3.11.0]
Cassandra has initialised the column family and is reloading the view
4. at org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:204) 
~[apache-cassandra-3.11.0.jar:3.11.0]
At this point I haven’t had enough time to tell if Cassandra is requesting 
info on a column specifically or still requesting information on a column 
family. Regardless, given we have already ruled out issues with the SSTables 
and their directory, and Cassandra is yet to start processing the commit log, 
this suggests to me that something is wrong in one of the system keyspaces 
storing the schema information.
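
If it helps, here is a hedged way to poke at that offline (the node will not
start, so cqlsh is out): dump the schema SSTables and compare the table/view
ids with the data directory names. The paths below are placeholders:

  # ids the schema has recorded for tables and views
  sstabledump /var/lib/cassandra/data/system_schema/tables-<dir>/mc-<gen>-big-Data.db | less
  sstabledump /var/lib/cassandra/data/system_schema/views-<dir>/mc-<gen>-big-Data.db | less
  # the uuid suffix of each table directory on disk should match the id above
  ls /var/lib/cassandra/data/<keyspace>/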

There should definitely be a way to resolve this with zero data loss by either:
1. Fixing the issue in the system keyspace SSTables (hard)
2. Rerunning the commit log on a new Cassandra node that has been restored from 
the current one (I’m not sure if this is possible but I’ll figure it out 
tomorrow)

The alternative, if you are OK with losing the commitlog, is to back up the 
data and restore it to a new node (or the same node with everything blown 
away). This isn’t a trivial process, though I’ve done it a few times.
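
Roughly, that route looks like this - a sketch only, with placeholder paths,
and it assumes you recreate the schema on the target node before loading:

  # on the old node (if it still starts; otherwise just copy the data dirs)
  nodetool snapshot <keyspace>
  # copy the snapshot files into a <keyspace>/<table>/ directory layout, then
  # stream them to the new node
  sstableloader -d <new_node_ip> /path/to/staging/<keyspace>/<table>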

How important is the data?

Happy to come back to this tomorrow (need some sleep)

Regards,
Eevee.




> On 5 Jun 2018, at 7:32 pm, m...@vis.at wrote:
> 
> Keyspace.getColumnFamilyStore



Re: Single Host: Fix "Unknown CF" issue

2018-06-06 Thread mm

Hi Evelyn,

thanks a lot for your detailed response message.

The data is not important. We've already wiped the data and created a new 
Cassandra installation. The data re-import task is already running. We've 
lost a couple of months of data, but in this case that does not matter.


Nevertheless we will try what you told us - just to be smarter/faster if 
this happens in production (where we will set up a Cassandra cluster with 
multiple nodes anyway). I will drop you a note when we are done.


Hmmm... the problem is within a "View". Are these the materialized views?

I'm asking this because:
* Someone on the internet (Stack Overflow, if I recall correctly) mentioned 
that materialized views are to be deprecated.
* I was at a Datastax workshop in Zurich a couple of days ago where a 
Datastax employee told me that we should not use materialized views - it is 
better to create & fill all tables directly.


Would you also recommend not using materialized views? As this problem is 
related to a view, maybe we could avoid it simply by following this 
recommendation.


Thanks a lot again!

Greetings,
Michael



On 06.06.2018 16:48, Evelyn Smith wrote:

Hi Michael,

So I looked at the code, here are some stages of your error message:
1. at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:292)
[apache-cassandra-3.11.0.jar:3.11.0
 At this step Cassandra is running through the keyspaces in it’s
schema turning off compactions for all tables before it starts
rerunning the commit log (so it isn’t an issue with the commit log).
2. at org.apache.cassandra.db.Keyspace.open(Keyspace.java:127)
~[apache-cassandra-3.11.0.jar:3.11.0]
 Loading key space related to the column family that is erroring out
3. at org.apache.cassandra.db.Keyspace.(Keyspace.java:324)
~[apache-cassandra-3.11.0.jar:3.11.0]
 Cassandra has initialised the column family and is reloading the view
4. at
org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:204)
~[apache-cassandra-3.11.0.jar:3.11.0]
 At this point I haven’t had enough time to tell if Cassandra is
requesting info on a column specifically or still requesting
information on a column family. Regardless, given we already rule out
issues with the SSTables and their directory and Cassandra is yet to
start processing the commit log this to me suggests it’s something
wrong in one of the system keyspaces storing the schema information.

There should definitely be a way to resolve this with zero data loss
by either:
1. Fixing the issue in the system keyspace SSTables (hard)
2. Rerunning the commit log on a new Cassandra node that has been
restored from the current one (I’m not sure if this is possible but
I’ll figure it out tomorrow)

The alternative is if you are ok with losing the commitlog then you
can backup the data and restore it to a new node (or the same node but
with everything blown away). This isn’t a trivial process though
I’ve done it a few times.

How important is the data?

Happy to come back to this tomorrow (need some sleep)

Regards,
Eevee.


On 5 Jun 2018, at 7:32 pm, m...@vis.at wrote:
Keyspace.getColumnFamilyStore



-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Re: Single Host: Fix "Unknown CF" issue

2018-06-06 Thread Pradeep Chhetri
Hi Michael,

We have faced the same situation as yours in our production environment
where we suddenly got "Unknown CF Exception" for materialized views too. We
are using Lagom apps with cassandra for persistence. In our case, since
these views can be regenerated from the original events, we were able to
safely recover.

A few suggestions from my operations experience:

1) Upgrade your cassandra cluster to 3.11.2 because there are lots of bug
fixes specific to materialized views.
2) Never let your application create/update/delete Cassandra
tables/materialized views. Always create them manually to make sure that
only one connection is doing the operation (see the sketch below).
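
To illustrate point 2, a hedged example of creating a view by hand from
cqlsh (the keyspace, table, and column names here are made up):

  cqlsh <<'EOF'
  -- hypothetical schema, adjust to your own
  CREATE MATERIALIZED VIEW my_ks.events_by_user AS
    SELECT * FROM my_ks.events
    WHERE user_id IS NOT NULL AND event_id IS NOT NULL
    PRIMARY KEY (user_id, event_id);
  EOF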

Regards,
Pradeep



On Wed, Jun 6, 2018 at 9:44 PM,  wrote:

> Hi Evelyn,
>
> thanks a lot for your detailed response message.
>
> The data is not important. We've already wiped the data and created a new
> cassandra installation. The data re-import task is already running. We've
> lost the data for a couple of months but in this case this does not matter.
>
> Nevertheless we will try what you told us - just to be smarter/faster if
> this happens in production (where we will setup a cassandra cluster with
> multiple cassandra nodes anyway). I will drop you a note when we are done.
>
> Hmmm... the problem is within a "View". Are this the materialized views?
>
> I'm asking this because:
> * Someone on the internet (stackoverflow if a recall correctly) mentioned
> that using materialized views are to be deprecated.
> * I had been on a datastax workshop in Zurich a couple of days ago where a
> datastax employee told me that we should not use materialized views - it is
> better to create & fill all tables directly.
>
> Would you also recommend not to use materialized views? As this problem is
> related to a view - maybe we could avoid this problem simply by following
> this recommendation.
>
> Thanks a lot again!
>
> Greetings,
> Michael
>
>
>
>
> On 06.06.2018 16:48, Evelyn Smith wrote:
>
>> Hi Michael,
>>
>> So I looked at the code, here are some stages of your error message:
>> 1. at
>> org.apache.cassandra.service.CassandraDaemon.setup(Cassandra
>> Daemon.java:292)
>> [apache-cassandra-3.11.0.jar:3.11.0
>>  At this step Cassandra is running through the keyspaces in it’s
>> schema turning off compactions for all tables before it starts
>> rerunning the commit log (so it isn’t an issue with the commit log).
>> 2. at org.apache.cassandra.db.Keyspace.open(Keyspace.java:127)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>  Loading key space related to the column family that is erroring out
>> 3. at org.apache.cassandra.db.Keyspace.(Keyspace.java:324)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>  Cassandra has initialised the column family and is reloading the view
>> 4. at
>> org.apache.cassandra.db.Keyspace.getColumnFamilyStore(Keyspace.java:204)
>> ~[apache-cassandra-3.11.0.jar:3.11.0]
>>  At this point I haven’t had enough time to tell if Cassandra is
>> requesting info on a column specifically or still requesting
>> information on a column family. Regardless, given we already rule out
>> issues with the SSTables and their directory and Cassandra is yet to
>> start processing the commit log this to me suggests it’s something
>> wrong in one of the system keyspaces storing the schema information.
>>
>> There should definitely be a way to resolve this with zero data loss
>> by either:
>> 1. Fixing the issue in the system keyspace SSTables (hard)
>> 2. Rerunning the commit log on a new Cassandra node that has been
>> restored from the current one (I’m not sure if this is possible but
>> I’ll figure it out tomorrow)
>>
>> The alternative is if you are ok with losing the commitlog then you
>> can backup the data and restore it to a new node (or the same node but
>> with everything blown away). This isn’t a trivial process though
>> I’ve done it a few times.
>>
>> How important is the data?
>>
>> Happy to come back to this tomorrow (need some sleep)
>>
>> Regards,
>> Eevee.
>>
>> On 5 Jun 2018, at 7:32 pm, m...@vis.at wrote:
>>> Keyspace.getColumnFamilyStore
>>>
>>
>
> -
> To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
> For additional commands, e-mail: user-h...@cassandra.apache.org
>
>


FINAL REMINDER: Apache EU Roadshow 2018 in Berlin next week!

2018-06-06 Thread sharan

Hello Apache Supporters and Enthusiasts

This is a final reminder that our Apache EU Roadshow will be held in 
Berlin next week on 13th and 14th June 2018. We will have 28 different 
sessions running over 2 days that cover some great topics. So if you are 
interested in Microservices, Internet of Things (IoT), Cloud, Apache 
Tomcat or Apache Http Server then we have something for you.


https://foss-backstage.de/sessions/apache-roadshow

We will be co-located with FOSS Backstage, so if you are interested in 
topics such as incubator, the Apache Way, open source governance, legal, 
trademarks or simply open source communities then there will be 
something there for you too.  You can attend any of the talks, presentations 
and workshops from the Apache EU Roadshow or FOSS Backstage.


You can find details of the combined Apache EU Roadshow and FOSS 
Backstage conference schedule below:


https://foss-backstage.de/schedule?day=2018-06-13

Ticket prices go up on 8th June 2018 and we have a last minute discount 
code that anyone can use before the deadline:


15% discount code: ASF15_discount
valid until June 7, 23:55 CET

You can register at the following link:

https://foss-backstage.de/tickets

Our Apache booth and lounge will be open from 11th - 14th June for 
meetups, hacking or to simply relax between sessions. And we will be 
posting regular updates on social media throughout next week so please 
follow us on Twitter @ApacheCon


Thank you for your continued support and we look forward to seeing you 
in Berlin!


Thanks
Sharan Foga, VP Apache Community Development

http://apachecon.com/

PLEASE NOTE: You are receiving this message because you are subscribed 
to a user@ or dev@ list of one or more Apache Software Foundation projects.




-
To unsubscribe, e-mail: user-unsubscr...@cassandra.apache.org
For additional commands, e-mail: user-h...@cassandra.apache.org



Fwd: What is ChunkReader.readChunk used for?

2018-06-06 Thread John Wilson
Hi,

Cassandra uses the readChunk method, which is in SimpleChunkReader and
CompressedChunkReader. This readChunk method is called from ChunkCache and
BufferManagingRebufferer (which is used when the cache is not in use).

My question:

   1. What exactly is the readChunk method used for? Is it to read into the
   Key Cache (which maps partition keys to a specific SSTable) or to actually
   read rows from SSTables?
   2. What is the position parameter in readChunk(long position, ByteBuffer
   uncompressed)?


Thanks,
John