Hi Satoshi,

This question would be better suited for the DataStax Java Driver for
Apache Cassandra mailing list
<https://groups.google.com/a/lists.datastax.com/forum/#%21forum/java-driver-user>,
but I do have a few thoughts about what you are observing:

Between java-driver 2.1 and 3.0 the driver updated its Netty dependency
from 3.9.x to 4.0.x.  Cluster#close is likely taking two seconds longer
because the driver uses AbstractEventExecutor.shutdownGracefully()
<https://github.com/netty/netty/blob/netty-4.0.44.Final/common/src/main/java/io/netty/util/concurrent/AbstractEventExecutor.java#L50>
which by default waits for a quiet period of 2 seconds to allow any
in-flight requests to complete.  You can disable that by passing a custom
NettyOptions
<http://docs.datastax.com/en/drivers/java/3.1/com/datastax/driver/core/NettyOptions.html>
to a Cluster.Builder using withNettyOptions, i.e.:

    // Requires: import static java.util.concurrent.TimeUnit.SECONDS;
    // plus com.datastax.driver.core.NettyOptions and io.netty.channel.EventLoopGroup.

    /**
     * A custom {@link NettyOptions} that shuts down the {@link EventLoopGroup} after
     * no quiet time.  This is useful for tests that consistently close clusters, as
     * otherwise there is a 2 second delay (from JAVA-914
     * <https://datastax-oss.atlassian.net/browse/JAVA-914>).
     */
    public static NettyOptions nonQuietClusterCloseOptions = new NettyOptions() {
        @Override
        public void onClusterClose(EventLoopGroup eventLoopGroup) {
            // Quiet period of 0: don't wait for a lull in activity, but still
            // allow up to 15 seconds for already-submitted tasks to finish.
            eventLoopGroup.shutdownGracefully(0, 15, SECONDS).syncUninterruptibly();
        }
    };

However, I wouldn't recommend doing this unless you have a requirement for
Cluster.close to be as quick as possible.  Closing a Cluster is not
something you should expect to be doing often, as a Cluster and its
Session are meant to be reused over the lifetime of an application.
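
As a rough sketch of that reuse pattern (the class and field names here
are illustrative, not from the driver API; the contact point is a
placeholder):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    public final class CassandraClient {
        // One Cluster/Session pair shared for the whole application lifetime.
        private static final Cluster CLUSTER =
            Cluster.builder().addContactPoint("127.0.0.1").build();
        private static final Session SESSION = CLUSTER.connect();

        private CassandraClient() {}

        public static Session session() {
            return SESSION;  // reused by every request
        }

        // Call once on application shutdown.
        public static void shutdown() {
            SESSION.close();
            CLUSTER.close();
        }
    }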

With regards to Cluster.connect being slower, I'm not sure I have an
explanation for that, and it is not something I have noticed.  I would not
expect Cluster.connect to take even a second with a single-node cluster
(for example, I recorded some numbers
<https://datastax-oss.atlassian.net/browse/JAVA-692?focusedCommentId=21428&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-21428>
a while back, and the mean initialization time for a 40-node cluster with
auth was ~251ms).  Have you tried executing several trials of
Cluster.connect within a single JVM process?  Does the initialization time
improve with a subsequent Cluster.connect?  I'm wondering if there is some
additional first-time initialization in 3.x that was not required before.
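
A quick way to check, as a minimal sketch (the contact point is a
placeholder; imports of Cluster and Session are assumed):

    // Connect and close several times in one JVM, timing each connect.
    for (int i = 0; i < 5; i++) {
        Cluster cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")  // placeholder contact point
            .build();
        long start = System.nanoTime();
        Session session = cluster.connect();
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("trial " + i + ": connect took " + elapsedMs + " ms");
        session.close();
        cluster.close();
    }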

Thanks,
Andy

On Mon, Mar 6, 2017 at 6:01 AM, Matija Gobec <matija0...@gmail.com> wrote:

> Interesting question since I never measured connect and close times.
> Usually this is something you do once when the application starts, and
> that's it.
> Do you plan to misuse it and create a new cluster object and open a new
> connection for each request?
>
> On Mon, Mar 6, 2017 at 7:19 AM, Satoshi Hikida <sahik...@gmail.com> wrote:
>
>> Hi,
>>
>> I'm going to update the DataStax Java Driver from version 2.1.8 to
>> 3.1.3.
>> First I ran a test program and measured the times with both driver
>> versions, 2.1.8 and 3.1.3.
>>
>> The test program simply builds a Cluster, connects to it, executes a
>> simple select statement, and closes the Cluster.
>>
>> The read performance was almost the same for both versions (around
>> 20ms). However, the times for connecting to the cluster and closing the
>> cluster were significantly different.
>>
>> The test environment is as follows:
>> - EC2 instance: m4.large (2 vCPU, 8 GB memory), 1 node
>> - Java 1.8
>> - Cassandra v2.2.8
>>
>> Here is the result of the test. I ran the test program several times,
>> and the results were almost always the same as these.
>>
>> | Method          | Time in sec (v2.1.8/v3.1.3) |
>> |-----------------|-----------------------------|
>> | Cluster#connect | 1.178/2.468                 |
>> | Cluster#close   | 0.022/2.240                 |
>>
>> With the v3.1.3 driver, Cluster#connect() is about 2x slower and
>> Cluster#close() is about 100x slower. I want to know the cause of these
>> performance degradations. Could someone advise me?
>>
>>
>> The snippet of the test program is as follows:
>> ```
>> Cluster cluster = Cluster
>>     .builder()
>>     .addContactPoints(endpoints)
>>     .withCredentials(USER, PASS)
>>     .withClusterName(CLUSTER_NAME)
>>     .withRetryPolicy(DefaultRetryPolicy.INSTANCE)
>>     // for driver 2.1.8:
>>     // .withLoadBalancingPolicy(new TokenAwarePolicy(new DCAwareRoundRobinPolicy(DC_NAME)))
>>     // for driver 3.1.3:
>>     .withLoadBalancingPolicy(new TokenAwarePolicy(DCAwareRoundRobinPolicy.builder().build()))
>>     .build();
>>
>> Session session = cluster.connect();
>> ResultSet rs = session.execute("select * from system.local;");
>>
>> session.close();
>> cluster.close();
>> ```
>>
>> Regards,
>> Satoshi
>>
>>
>
