Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Anuj Wadehra
Hi,
Which DataStax Java Driver release is most stable (production ready) for 
Cassandra 2.1?
Thanks
Anuj




Warning message even for batches targeting single partition

2016-05-08 Thread Bhuvan Rawal
Hi,

I was testing unlogged batches of size 15 and encountered a bunch of these
messages, which are filling up my disk:
2016-05-08 20:22:58,338 [WARNING]  cassandra.protocol: Server warning:
*Unlogged
batch covering 1 partition detected* against tables [xyz.abc, xyz.123]. You
should use a logged batch for atomicity, or asynchronous writes for
performance.

If I have identified the partition and am writing a bunch of rows into it
using an unlogged batch, isn't that an ideal case?
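
For reference, a minimal sketch of the pattern in question - an unlogged batch whose
statements all target the same partition - written against the DataStax Java driver.
The column layout of xyz.abc below is invented purely for illustration:

import com.datastax.driver.core.BatchStatement;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class SinglePartitionBatchExample
{
    // Every statement in the batch shares the same partition key, so the whole
    // batch lands on one replica set as a single mutation.
    static BatchStatement singlePartitionBatch(Session session, String userId)
    {
        PreparedStatement insert = session.prepare(
                "INSERT INTO xyz.abc (user_id, seq, payload) VALUES (?, ?, ?)");
        BatchStatement batch = new BatchStatement(BatchStatement.Type.UNLOGGED);
        for (int seq = 0; seq < 15; seq++)
            batch.add(insert.bind(userId, seq, "row-" + seq)); // same partition key each time
        return batch;
    }
}

session.execute(singlePartitionBatch(session, "some-user")) then sends the 15 rows as one request.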

Regards,
Bhuvan


Re: Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Alex Popescu
Hi Anuj,

All released versions of the DataStax Java driver are production ready:

1. they all go through the complete QA cycle
2. we merge all bug fixes and improvements upstream.

Now, if you are asking which is currently the most deployed version, that's
2.1 (latest version 2.1.10.1 [1]).

If you want to be ready for future Cassandra upgrades and benefit from the
latest features of the Java driver, then that's the 3.0 branch (latest version
3.0.1 [2]).

Last but not least, you should also consider, when making the decision, that
our current focus and main development go into the 3.x branch and that 2.1 is
in maintenance mode (meaning that no new features will be added and it will
only see critical bug fixes).

Bottom line, if your application is not already developed against the 2.1
version of the Java driver, you should use
the latest 3.0 release.


[1]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/bYQSUvKQm5k/JduPTt7cGAAJ

[2]:
https://groups.google.com/a/lists.datastax.com/d/msg/java-driver-user/tOWZm4RVbm4/5E_aDAc8IAAJ




-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax



» DataStax Enterprise - the database for cloud applications. «


Re: Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Anuj Wadehra
Thanks Alex!!
We are starting to use CQL for the first time (using Thrift till now), so I 
think it makes sense to directly use Java driver 3.0.1 instead of 2.1.10.
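
As a point of reference, a minimal connect-and-query sketch against the 3.0.x Java driver
looks roughly like this; the contact point, keyspace, and table names are placeholders,
not a recommended setup:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class CqlQuickstart
{
    public static void main(String[] args) throws Exception
    {
        // Cluster and Session are both Closeable, so try-with-resources tears down the pool.
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect("my_keyspace"))
        {
            ResultSet rs = session.execute("SELECT id, value FROM my_table LIMIT 10");
            for (Row row : rs)
                System.out.println(row.getString("id") + " -> " + row.getString("value"));
        }
    }
}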

As the 3.x driver supports all Cassandra versions from 1.2 onwards, I would also like to 
better understand the motivation for having 2.1 releases simultaneously with the 3.x 
releases of the Java driver.
One obvious reason would be the breaking changes in 3.x: the 2.1.x bug-fix releases give 
existing 2.1 users some breathing room to accommodate those breaking changes in their 
code, instead of forcing them to make the changes at short notice and upgrade to 3.x 
immediately. Is that understanding correct?



Thanks
Anuj
Sent from Yahoo Mail on Android 
 

Re: Production Ready/Stable DataStax Java Driver

2016-05-08 Thread Alex Popescu
On Sun, May 8, 2016 at 10:00 AM, Anuj Wadehra 
wrote:

> As 3.x driver supports all 1.2+ Cassandra versions, I would also like to
> better understand the motivation of having 2.1 releases simultaneously with
> 3.x releases of Java driver.


Hi Anuj,

Both Apache Cassandra and the DataStax drivers are evolving fast, with
significant improvements across the board. While we support and provide the
latest and greatest, we also support the users that are already in production
and allow them enough time to upgrade. Major releases sometimes introduce
breaking changes. That's unfortunate, but sometimes it's the only way we can
push things forward.

I agree with your assessment 1000%: if you're starting now, the best version
to go with is the latest on the 3.0 branch.


-- 
Bests,

Alex Popescu | @al3xandru
Sen. Product Manager @ DataStax



» DataStax Enterprise - the database for cloud applications. «


Re: Warning message even for batches targeting single partition

2016-05-08 Thread Ben Slater
Hi Bhuvan,

You’re correct that a large unlogged batch on one partition isn’t an issue.
The logging behaviour has been/is being changed - see this jira for a
detailed discussion: https://issues.apache.org/jira/browse/CASSANDRA-10876

Cheers
Ben



-- 

Ben Slater
Chief Product Officer, Instaclustr
+61 437 929 798


RE: Cassandra 2.0.x OOM during startup - schema version inconsistency after reboot

2016-05-08 Thread Michael Fong
Hi, all,


We haven't heard any responses so far, and this issue has troubled us for quite 
some time. Here is another update:

We have noticed several times that the schema version may change after 
migration and reboot:

Here is the scenario:

1. Two-node cluster (node 1 & node 2).

2. There are some schema changes, e.g. creating a few new column families. The 
cluster waits until both nodes have their schema versions in sync (checked via 
describe cluster) before moving on.

3. Right before node 2 is rebooted, the schema versions are consistent; however, 
after node 2 reboots and starts servicing requests, the MigrationManager gossips 
a different schema version (a client-side check for this kind of mismatch is 
sketched below).

4. Afterwards, both nodes start exchanging schema messages indefinitely until 
one of the nodes dies.
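
A minimal client-side way to read the schema version that the local node and each peer
report, using the DataStax Java driver - the contact point is a placeholder, and the
original check above was simply done with describe cluster; this just reads the same
information from the system tables:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.Session;

public class SchemaVersionCheck
{
    public static void main(String[] args) throws Exception
    {
        // More than one distinct schema_version UUID across local + peers means the
        // nodes disagree, which is the mismatch described in step 3.
        try (Cluster cluster = Cluster.builder().addContactPoint("192.168.88.33").build();
             Session session = cluster.connect())
        {
            Row local = session.execute("SELECT schema_version FROM system.local").one();
            System.out.println("local: " + local.getUUID("schema_version"));
            for (Row peer : session.execute("SELECT peer, schema_version FROM system.peers"))
                System.out.println(peer.getInet("peer") + ": " + peer.getUUID("schema_version"));
        }
    }
}

Recent Java driver versions also expose the same check as cluster.getMetadata().checkSchemaAgreement().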

We currently suspect the schema change is due to replaying old entries in the 
commit log. We wish to continue digging further, but need expert help on this.

I don't know if anyone has seen this before, or if there is anything wrong with 
our migration flow.

Thanks in advance.

Best regards,


Michael Fong

From: Michael Fong [mailto:michael.f...@ruckuswireless.com]
Sent: Thursday, April 21, 2016 6:41 PM
To: user@cassandra.apache.org; d...@cassandra.apache.org
Subject: RE: Cassandra 2.0.x OOM during bootstrap

Hi, all,

Here is some more information on what happened before the OOM on the rebooted 
node in a 2-node test cluster:


1. It seems the schema version changed on the rebooted node after reboot, i.e.
Before reboot,
Node 1: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,326 MigrationManager.java 
(line 328) Gossiping my schema version 4cb463f8-5376-3baf-8e88-a5cc6a94f58f
Node 2: DEBUG [MigrationStage:1] 2016-04-19 11:09:42,122 MigrationManager.java 
(line 328) Gossiping my schema version 4cb463f8-5376-3baf-8e88-a5cc6a94f58f

After rebooting node 2,
Node 2: DEBUG [main] 2016-04-19 11:18:18,016 MigrationManager.java (line 328) 
Gossiping my schema version f5270873-ba1f-39c7-ab2e-a86db868b09b



2. After reboot, both nodes repeatedly send MigrationTask messages to each other - 
we suspect it is related to the schema version (digest) mismatch after node 2 
rebooted. Node 2 keeps submitting the migration task 100+ times to the other 
node:
INFO [GossipStage:1] 2016-04-19 11:18:18,261 Gossiper.java (line 1011) Node 
/192.168.88.33 has restarted, now UP
INFO [GossipStage:1] 2016-04-19 11:18:18,262 TokenMetadata.java (line 414) 
Updating topology for /192.168.88.33
INFO [GossipStage:1] 2016-04-19 11:18:18,263 StorageService.java (line 1544) 
Node /192.168.88.33 state jump to normal
INFO [GossipStage:1] 2016-04-19 11:18:18,264 TokenMetadata.java (line 414) 
Updating topology for /192.168.88.33
DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 102) 
Submitting migration task for /192.168.88.33
DEBUG [GossipStage:1] 2016-04-19 11:18:18,265 MigrationManager.java (line 102) 
Submitting migration task for /192.168.88.33
DEBUG [MigrationStage:1] 2016-04-19 11:18:18,268 MigrationTask.java (line 62) 
Can't send schema pull request: node /192.168.88.33 is down.
DEBUG [MigrationStage:1] 2016-04-19 11:18:18,268 MigrationTask.java (line 62) 
Can't send schema pull request: node /192.168.88.33 is down.
DEBUG [RequestResponseStage:1] 2016-04-19 11:18:18,353 Gossiper.java (line 977) 
removing expire time for endpoint : /192.168.88.33
INFO [RequestResponseStage:1] 2016-04-19 11:18:18,353 Gossiper.java (line 978) 
InetAddress /192.168.88.33 is now UP
DEBUG [RequestResponseStage:1] 2016-04-19 11:18:18,353 MigrationManager.java 
(line 102) Submitting migration task for /192.168.88.33
DEBUG [RequestResponseStage:1] 2016-04-19 11:18:18,355 Gossiper.java (line 977) 
removing expire time for endpoint : /192.168.88.33
INFO [RequestResponseStage:1] 2016-04-19 11:18:18,355 Gossiper.java (line 978) 
InetAddress /192.168.88.33 is now UP
DEBUG [RequestResponseStage:1] 2016-04-19 11:18:18,355 MigrationManager.java 
(line 102) Submitting migration task for /192.168.88.33
DEBUG [RequestResponseStage:2] 2016-04-19 11:18:18,355 Gossiper.java (line 977) 
removing expire time for endpoint : /192.168.88.33
INFO [RequestResponseStage:2] 2016-04-19 11:18:18,355 Gossiper.java (line 978) 
InetAddress /192.168.88.33 is now UP
DEBUG [RequestResponseStage:2] 2016-04-19 11:18:18,356 MigrationManager.java 
(line 102) Submitting migration task for /192.168.88.33
.


On the other hand, node 1 keeps updating its gossip information, followed by 
receiving and submitting MigrationTasks:
DEBUG [RequestResponseStage:3] 2016-04-19 11:18:18,332 Gossiper.java (line 977) 
removing expire time for endpoint : /192.168.88.34
INFO [RequestResponseStage:3] 2016-04-19 11:18:18,333 Gossiper.java (line 978) 
InetAddress /192.168.88.34 is now UP
DEBUG [RequestResponseStage:4] 2016-04-19 11:18:18,335 Gossiper.java (line 977) 
removing expire time for endpoint : /192.168.88.34
INFO [RequestResponseStage:4] 2016-04-19 11:18:18,335 Gossiper.java (line 978) 
InetAddress /192.168.88.

Nodetool Cleanup Problem

2016-05-08 Thread Jan Ali
Hi All, 

I use Cassandra 3.4. When running the 'nodetool cleanup' command, I see this error:
error: Expecting URI in variable: [cassandra.config]. Found[cassandra.yaml]. 
Please prefix the file with [file:///] for local files and [file://<server>/] 
for remote files. If you are executing this from an external tool, it needs to 
set Config.setClientMode(true) to avoid loading configuration.
-- StackTrace --
org.apache.cassandra.exceptions.ConfigurationException: Expecting URI in 
variable: [cassandra.config]. Found[cassandra.yaml]. Please prefix the file 
with [file:///] for local files and [file://<server>/] for remote files. If you 
are executing this from an external tool, it needs to set 
Config.setClientMode(true) to avoid loading configuration.
    at 
org.apache.cassandra.config.YamlConfigurationLoader.getStorageConfigURL(YamlConfigurationLoader.java:78)
    at 
org.apache.cassandra.config.YamlConfigurationLoader.(YamlConfigurationLoader.java:92)
    at 
org.apache.cassandra.config.DatabaseDescriptor.loadConfig(DatabaseDescriptor.java:134)
    at 
org.apache.cassandra.config.DatabaseDescriptor.(DatabaseDescriptor.java:121)
    at 
org.apache.cassandra.config.CFMetaData$Builder.(CFMetaData.java:1160)
    at 
org.apache.cassandra.config.CFMetaData$Builder.create(CFMetaData.java:1175)
    at 
org.apache.cassandra.config.CFMetaData$Builder.create(CFMetaData.java:1170)
    at 
org.apache.cassandra.cql3.statements.CreateTableStatement.metadataBuilder(CreateTableStatement.java:118)
    at org.apache.cassandra.config.CFMetaData.compile(CFMetaData.java:413)
    at 
org.apache.cassandra.schema.SchemaKeyspace.compile(SchemaKeyspace.java:238)
    at 
org.apache.cassandra.schema.SchemaKeyspace.(SchemaKeyspace.java:88)
    at org.apache.cassandra.config.Schema.(Schema.java:96)
    at org.apache.cassandra.config.Schema.(Schema.java:50)
    at org.apache.cassandra.tools.nodetool.Cleanup.execute(Cleanup.java:45)
    at org.apache.cassandra.tools.NodeTool$NodeToolCmd.run(NodeTool.java:248)
    at org.apache.cassandra.tools.NodeTool.main(NodeTool.java:162)
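
The exception boils down to YamlConfigurationLoader refusing to treat a bare file name as
a URI. A small, self-contained illustration of the difference the error message is asking
for - the path is only an example, and this merely mirrors the check rather than being the
loader's actual code:

import java.net.MalformedURLException;
import java.net.URL;

public class ConfigUriExample
{
    public static void main(String[] args) throws MalformedURLException
    {
        try
        {
            new URL("cassandra.yaml");                    // bare file name: no protocol
        }
        catch (MalformedURLException e)
        {
            System.out.println("rejected: " + e.getMessage());
        }
        // The form the error message asks for (example path only):
        System.out.println("accepted: " + new URL("file:///etc/cassandra/cassandra.yaml"));
    }
}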

Can anyone help me?
Best regards,
Jan Ali


RE: Effectiveness of Scrub Operation vs SSTable previously marked in blacklist

2016-05-08 Thread Michael Fong
Hi,

I have filed a jira ticket to keep track of this: 
https://issues.apache.org/jira/browse/CASSANDRA-11624

Thanks!

Sincerely,

Michael Fong

From: Marcus Eriksson [mailto:krum...@gmail.com]
Sent: Wednesday, March 23, 2016 10:47 PM
To: user@cassandra.apache.org
Subject: Re: Effectiveness of Scrub Operation vs SSTable previously marked in 
blacklist

yeah that is most likely a bug, could you file a ticket?

On Tue, Mar 22, 2016 at 4:36 AM, Michael Fong 
<michael.f...@ruckuswireless.com> wrote:
Hi, all,

We recently encountered a scenario in a Cassandra 2.0 deployment. Cassandra 
detected a corrupted sstable, and when we attempted to scrub the sstable (along 
with all the associated sstables), the corrupted sstable was not included in the 
sstable list. This continued until we restarted Cassandra and performed the 
scrub again.

After tracing the Cassandra source code, we are a bit confused about the 
effectiveness of scrubbing an sstable that has previously been marked in the 
blacklist in Cassandra 2.0+.

It seems that in the previous version (Cassandra 1.2), the scrub operation would 
operate on an sstable regardless of whether it had previously been marked. 
However, in Cassandra 2.0, the function flow seems to have changed.

Here is the function flow that we traced in the Cassandra 2.0 source code:

From org.apache.cassandra.db.compaction.CompactionManager:

…
public void performScrub(ColumnFamilyStore cfStore, final boolean skipCorrupted, final boolean checkData) throws InterruptedException, ExecutionException
{
    performAllSSTableOperation(cfStore, new AllSSTablesOperation()
    {
…
private void performAllSSTableOperation(final ColumnFamilyStore cfs, final AllSSTablesOperation operation) throws InterruptedException, ExecutionException
{
    final Iterable<SSTableReader> sstables = cfs.markAllCompacting();
…

From org.apache.cassandra.db.ColumnFamilyStore:

…
public Iterable<SSTableReader> markAllCompacting()
{
    Callable<Iterable<SSTableReader>> callable = new Callable<Iterable<SSTableReader>>()
    {
        public Iterable<SSTableReader> call() throws Exception
        {
            assert data.getCompacting().isEmpty() : data.getCompacting();
            Iterable<SSTableReader> sstables = Lists.newArrayList(AbstractCompactionStrategy.filterSuspectSSTables(getSSTables()));
            if (Iterables.isEmpty(sstables))
                return null;
…
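
For completeness, the filtering step the trace above ends in - roughly what
AbstractCompactionStrategy.filterSuspectSSTables() looks like in the 2.0 line (quoted from
memory, so treat it as a sketch rather than the exact source). It drops any sstable already
marked suspect, i.e. exactly the blacklisted, corrupted one:

public static Iterable<SSTableReader> filterSuspectSSTables(Iterable<SSTableReader> originalCandidates)
{
    return Iterables.filter(originalCandidates, new Predicate<SSTableReader>()
    {
        public boolean apply(SSTableReader sstable)
        {
            // sstables flagged as suspect (the "blacklist") never become scrub/compaction candidates
            return !sstable.isMarkedSuspect();
        }
    });
}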

If it is true that filterSuspectSSTables() excludes any sstable previously marked 
as suspect, would this flow - blacklisting the corrupted sstable - defeat the 
original purpose of the scrub operation? Thanks in advance!


Sincerely,

Michael Fong



Re: [C*3.0.3] lucene indexes not deleted and nodetool repair makes DC unavailable

2016-05-08 Thread Siddharth Verma
Hi Eduardo,
Thanks for your help on the Stratio index problem.

As per your questions:

1. We ran nodetool repair on one box (no range repair), but because of it, the
entire DC was non-responsive. It was up, but we were not able to connect.

2. RF is 3, and we have 2 DCs each with 3 nodes.

3. Consistency level for writes is LOCAL_QUORUM.

Thanks
Siddharth Verma