[
https://issues.apache.org/jira/browse/HADOOP-19120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17831745#comment-17831745
]
ASF GitHub Bot commented on HADOOP-19120:
-----------------------------------------
anmolanmol1234 commented on code in PR #6633:
URL: https://github.com/apache/hadoop/pull/6633#discussion_r1542783560
##########
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azurebfs/services/kac/TestApacheClientConnectionPool.java:
##########
@@ -0,0 +1,129 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements. See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership. The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License. You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.hadoop.fs.azurebfs.services.kac;
+
+import java.io.IOException;
+
+import org.junit.Assert;
+import org.junit.Test;
+import org.mockito.Mockito;
+
+import org.apache.hadoop.fs.azurebfs.AbstractAbfsTestWithTimeout;
+import org.apache.http.HttpClientConnection;
+import org.apache.http.HttpHost;
+import org.apache.http.conn.routing.HttpRoute;
+
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.DEFAULT_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.HTTP_MAX_CONN_SYS_PROP;
+import static org.apache.hadoop.fs.azurebfs.constants.AbfsHttpConstants.KAC_CONN_TTL;
+
+public class TestApacheClientConnectionPool extends
+ AbstractAbfsTestWithTimeout {
+
+ public TestApacheClientConnectionPool() throws Exception {
+ super();
+ }
+
+ @Test
+ public void testBasicPool() throws IOException {
+ System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+ validatePoolSize(DEFAULT_MAX_CONN_SYS_PROP);
+ }
+
+ @Test
+ public void testSysPropAppliedPool() throws IOException {
+ final String customPoolSize = "10";
+ System.setProperty(HTTP_MAX_CONN_SYS_PROP, customPoolSize);
+ validatePoolSize(Integer.parseInt(customPoolSize));
+ }
+
+ private void validatePoolSize(int size) throws IOException {
+ KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+ final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+    final HttpClientConnection[] connections = new HttpClientConnection[size * 2];
+
+ for (int i = 0; i < size * 2; i++) {
+ connections[i] = Mockito.mock(HttpClientConnection.class);
+ }
+
+ for (int i = 0; i < size * 2; i++) {
+ keepAliveCache.put(routes, connections[i]);
+ }
+
+ for (int i = size; i < size * 2; i++) {
+ Mockito.verify(connections[i], Mockito.times(1)).close();
+ }
+
+ for (int i = 0; i < size * 2; i++) {
+ if (i < size) {
+ Assert.assertNotNull(keepAliveCache.get(routes));
+ } else {
+ Assert.assertNull(keepAliveCache.get(routes));
+ }
+ }
+ System.clearProperty(HTTP_MAX_CONN_SYS_PROP);
+ keepAliveCache.close();
+ }
+
+ @Test
+ public void testKeepAliveCache() throws IOException {
+ KeepAliveCache keepAliveCache = KeepAliveCache.getInstance();
+ final HttpRoute routes = new HttpRoute(new HttpHost("localhost"));
+ HttpClientConnection connection = Mockito.mock(HttpClientConnection.class);
+
+ keepAliveCache.put(routes, connection);
+
+ Assert.assertNotNull(keepAliveCache.get(routes));
+ keepAliveCache.put(routes, connection);
+
+ final HttpRoute routes1 = new HttpRoute(new HttpHost("localhost1"));
Review Comment:
Didn't get it: where do we add this route to the cache?
> [ABFS]: ApacheHttpClient adaptation as network library
> ------------------------------------------------------
>
> Key: HADOOP-19120
> URL: https://issues.apache.org/jira/browse/HADOOP-19120
> Project: Hadoop Common
> Issue Type: Sub-task
> Components: fs/azure
> Affects Versions: 3.5.0
> Reporter: Pranav Saxena
> Assignee: Pranav Saxena
> Priority: Major
> Labels: pull-request-available
>
> Apache HttpClient is more feature-rich and flexible, and gives the
> application more granular control over networking parameters.
> ABFS currently relies on the JDK-net library. This library is managed by
> OpenJDK and has no performance problem. However, it limits the application's
> control over networking, and it exposes very few APIs and hooks that the
> application can use to collect metrics or to choose which connection should
> be reused and when. ApacheHttpClient provides the hooks needed to fetch such
> metrics and to control networking parameters.
> A custom connection-pool implementation is used, adapted from the JDK8
> connection pooling. Reasons for doing it:
> 1. PoolingHttpClientConnectionManager's heuristic caches all the reusable
> connections it has created. JDK's implementation only caches a limited number
> of connections; the limit is given by the JVM system property
> "http.maxConnections", and if the property is not set, it defaults to 5.
> Connection-establishment latency increased when all the connections were
> cached. Hence, the pooling heuristic of the JDK netlib is adapted.
> 2. PoolingHttpClientConnectionManager expects the application to provide
> `setMaxPerRoute` and `setMaxTotal`, which it uses as the total number of
> connections it can create. For applications using ABFS, it is not feasible to
> provide a value at initialisation of the connection manager. JDK's
> implementation has no cap on the number of connections it can have open at a
> given moment. Hence, the pooling heuristic of the JDK netlib is adapted.
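The JDK-style bounded keep-alive heuristic described above can be sketched roughly as follows. This is a hypothetical, simplified illustration only: the class name `BoundedKeepAliveCache` and its `put`/`get` methods mirror the `KeepAliveCache` exercised by the test, but this is not the actual PR implementation, and it omits the TTL-based eviction (`KAC_CONN_TTL`) entirely.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of a bounded per-route keep-alive cache in the style
// of the JDK netlib heuristic: at most `maxConn` idle connections are kept
// per route; connections over the cap are closed instead of cached.
class BoundedKeepAliveCache<R, C extends AutoCloseable> {

  private final int maxConn;
  private final Map<R, Deque<C>> cache = new HashMap<>();

  BoundedKeepAliveCache() {
    // JDK netlib reads the cap from "http.maxConnections", defaulting to 5.
    this(Integer.getInteger("http.maxConnections", 5));
  }

  BoundedKeepAliveCache(int cap) {
    this.maxConn = cap;
  }

  // Cache the connection for reuse; if the per-route cap is already
  // reached, close the connection instead of caching it (unlike
  // PoolingHttpClientConnectionManager, which caches all reusable ones).
  synchronized void put(R route, C conn) {
    Deque<C> q = cache.computeIfAbsent(route, r -> new ArrayDeque<>());
    if (q.size() >= maxConn) {
      try {
        conn.close(); // over the cap: evict immediately
      } catch (Exception ignored) {
        // best-effort close of a surplus connection
      }
    } else {
      q.addLast(conn);
    }
  }

  // Return a cached connection for the route, or null if none is idle.
  synchronized C get(R route) {
    Deque<C> q = cache.get(route);
    return (q == null || q.isEmpty()) ? null : q.removeFirst();
  }
}
```

Under this heuristic, the test's expectation holds: after `put`-ting `2 * size` connections for one route, only the first `size` are retrievable via `get`, and the rest have been closed.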
--
This message was sent by Atlassian Jira
(v8.20.10#820010)