Re: Something I still don't quite understand, Re: Let's Encrypt with Tomcat behind httpd

2020-11-05 Thread James H. H. Lampert

On 8/24/20 9:57 AM, Christopher Schultz wrote:


So your RewriteCond[ition] is expected to always be true? Okay. Maybe
remove it, then? BTW I think your rewrite will strip query strings and
stuff like that. Maybe you just want RedirectPermanent instead of
Rewrite(Cond|Rule)?


Ladies and Gentlemen:

This past Friday, the cached challenge result expired, and so this past 
Monday, I ran another certbot test.


With the rewrite in place for our "subdomain of interest," the cert 
covering everything else served by the httpd server renewed without 
incident, but the separate cert covering this subdomain failed completely.


I commented out the rewrite, and ran the test again, and both renewed 
without incident.


I posted a redacted version of the complete VirtualHost blocks back on 
August 24th. And after running my tests this week, I also posted it to 
ServerFault, at

https://serverfault.com/q/1041047/498231

I'm intrigued by Mr. Schultz's suggestion of


Maybe you just want RedirectPermanent instead of
Rewrite(Cond|Rule)?


Would that make a difference? Or is it just a matter of altering the 
RewriteCond clause to specifically ignore anything that looks like a 
Let's Encrypt challenge? Or is there something I can put on the default 
landing page for the subdomain, rather than in the VirtualHost, to cause 
the redirection?
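For reference, the pattern I keep seeing suggested (a sketch only; the hostname below is a placeholder, not our actual config) is a RewriteCond that exempts the ACME challenge path, so certbot's HTTP-01 requests are served locally while everything else still redirects:

```apache
# Inside the port-80 VirtualHost for the subdomain:
RewriteEngine On
# Let certbot's HTTP-01 challenge requests through untouched...
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
# ...and 301 everything else to the HTTPS site
# (RewriteRule carries the query string over by default).
RewriteRule ^/?(.*)$ https://subdomain.example.com/$1 [R=301,L]
```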


As I recall (unless there's a way to force-expire the cached challenge 
result on a certbot call), I have to wait until December to run another 
test.


--
JHHL

-
To unsubscribe, e-mail: users-unsubscr...@tomcat.apache.org
For additional commands, e-mail: users-h...@tomcat.apache.org



WriteListener.onWritePossible is never called back again if the origin cuts the socket

2020-11-05 Thread Joan ventusproxy
Hello,

Tomcat 8.5.55 (also tried with 8.5.37).
Similar to “Bug 62614 - Async servlet over HTTP/2 WriteListener does not work 
because onWritePossible is never called back again” but using NIO connector:


I’m unable to create an example that reproduces this issue, so I’ll explain 
what’s happening in the hope that someone can give me a clue about what’s going on.

The case is simple: we connect to our servlet using HttpComponents with a 
response timeout of 10 seconds. Since our servlet takes less than 5 seconds to 
get the response from the backend, it returns all the content (about 180 KB) 
to the client.
The problem appears when we set a response timeout of, for instance, 2 seconds. 
In that case the client closes the socket before the servlet can return the 
response; in fact, in most cases the socket is already closed by the time we 
set the WriteListener on the async response. In this situation, two different 
things happen at random:

1. The expected case: the WriteListener throws an IOException 
(org.apache.catalina.connector.ClientAbortException: java.io.IOException: 
Broken pipe) and the ‘onError’ method is called:

while (this.numIterations > 0 && this.os.isReady()) {
    this.os.write(this.response, this.startIdx, this.endIdx - this.startIdx);  // <-- the error happens here
    ( . . . )
}

2. The unexpected case: the ‘onWritePossible’ method is called just once. 
‘onWritePossible’ is invoked, this.os.isReady() is true and execution enters 
the loop, but the ‘this.os.write’ call above does not throw any exception; 
then ‘this.os.isReady()’ becomes false, so execution exits the loop and 
‘onWritePossible’ terminates. It is never called again (and neither is the 
‘onError’ method).
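For anyone trying to reproduce this in isolation: the write-loop contract can be sketched without the Servlet API at all. Below, FakeOutputStream is my own stand-in (not Tomcat's ServletOutputStream); it only illustrates the pattern my listener follows, i.e. write while isReady() is true, record the resume index, and rely on the container to call onWritePossible again:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;

// Stand-in for ServletOutputStream: reports "ready" a limited number of
// times, mimicking a write window that fills up mid-response.
class FakeOutputStream {
    private final ByteArrayOutputStream sink = new ByteArrayOutputStream();
    private int readyBudget;

    FakeOutputStream(int readyBudget) { this.readyBudget = readyBudget; }

    boolean isReady() { return readyBudget > 0; }

    void write(byte[] b, int off, int len) throws IOException {
        readyBudget--;
        sink.write(b, off, len);
    }
}

public class WriteLoopSketch {
    // The non-blocking pattern: write only while isReady() is true; when it
    // turns false, return and wait for the container's next onWritePossible.
    static int onWritePossible(FakeOutputStream os, byte[] data, int startIdx, int chunk)
            throws IOException {
        int idx = startIdx;
        while (idx < data.length && os.isReady()) {
            int len = Math.min(chunk, data.length - idx);
            os.write(data, idx, len);
            idx += len;
        }
        return idx; // resume point for the next callback
    }

    public static void main(String[] args) throws IOException {
        // 10 bytes, 4-byte chunks, stream ready for only 2 writes:
        int resumeAt = onWritePossible(new FakeOutputStream(2), new byte[10], 0, 4);
        System.out.println(resumeAt); // 8: two chunks written, then isReady() == false
    }
}
```

In the failing trace the container apparently never delivers that next onWritePossible after isReady() goes false, which is exactly the gap between case 2 above and this contract.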


Here is a link to the interesting part of the WriteListener code and the three 
traces: the good case returning the document (trace_OK.txt), the good case 
returning the broken-pipe error (trace_OK_with_broken_pipe.txt), and the bad 
case calling ‘onWritePossible’ just once (trace_KO.txt): 
https://github.com/joanbalaguero/Tomcat.git

I’ve searched for a solution without success, and a simple test case I 
developed could not reproduce the issue. I’m pretty sure this is a gap in my 
knowledge of how the listener behaves when the socket is already closed, but 
after reading tutorial after tutorial I’m still unable to find the answer.

So any help would be very much appreciated.

Thanks for your time.

Joan.






Re: Weirdest Tomcat Behavior Ever?

2020-11-05 Thread Stefan Mayr
Am 03.11.2020 um 16:05 schrieb Eric Robinson:
>> -Original Message-
>> From: Eric Robinson 
>> Sent: Tuesday, November 3, 2020 8:21 AM
>> To: Tomcat Users List 
>> Subject: RE: Weirdest Tomcat Behavior Ever?
>>
>>> From: Mark Thomas 
>>> Sent: Tuesday, November 3, 2020 2:06 AM
>>> To: Tomcat Users List 
>>> Subject: Re: Weirdest Tomcat Behavior Ever?
>>>
>>> On 02/11/2020 12:16, Eric Robinson wrote:
>>>
>>> 
>>>
 Gotcha, thanks for the clarification. Let's see what happens when
 the users
>>> start hitting it at 8:00 am Eastern.
>>>
>>> Progress. The first attempt to write to the socket triggers the
>>> following
>>> exception:
>>>
>>> 02-Nov-2020 14:33:54.083 FINE [http-bio-3016-exec-13]
>>> org.apache.tomcat.util.net.JIoEndpoint$DebugOutputStream.write
>>> [301361476]
>>>  java.net.SocketException: Bad file descriptor (Write failed)
>>> at java.net.SocketOutputStream.socketWrite0(Native Method)
>>> at
>>> java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:111)
>>> at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
>>> at
>>>
...

>>> Because this is an instance of an IOException, Tomcat assumes it has
>>> been caused by the client dropping the connection and silently
>>> swallows it. I'll be changing that later today so the exception is
>>> logged as DEBUG level for all new Tomcat versions.
>>>
>>> Possible causes of "java.net.SocketException: Bad file descriptor"
>>> I've been able to find are:
>>>
>>> 1. OS running out of file descriptors.
>>>
>>> 2. Trying to use a closed socket.
>>>
>>> I want to review the source code to see if there are any others.
>>>
>>> I don't think we are seeing 2 as there is no indication of the Socket,
>>> InputStream or OutputStream being closed in the logs.
>>>
>>> That leaves 1. Possible causes here are a file descriptor leak or
>>> normal operations occasionally needing more than the current limit. I
>>> don't think it is a leak as I'd expect to see many more errors of this
>>> type after the first and we aren't seeing that. That leaves the
>>> possibility of the current limit being a little too low.
>>>
>>> My recommendation at this point is to increase the limit for file 
>>> descriptors.
>>> Meanwhile, I'll look at the JRE source to see if there are any other
>>> possible triggers for this exception.
>>>
>>> Mark
>>>
>>>
>>
>> On the tomcat server, max open file descriptors is currently 2853957
>>
>> [root@001app01a ~]# sysctl fs.file-max
>> fs.file-max = 2853957
>>
>> Most of the time, the number of open files appears to run about 600,000.
>>
>>  What do you think of watching the open file count and seeing if the number
>> gets up around the ceiling when the socket write failure occurs? Something
>> like...
>>
>> [root@001app01a ~]#  while [ TRUE ];do FILES=$(lsof|wc -l);echo "$(date
>> +%H:%M:%S) $FILES";done
>> 09:11:15 591671
>> 09:11:35 627347
>> 09:11:54 626851
>> 09:12:11 626429
>> 09:12:26 545748
>> 09:12:42 548578
>> 09:12:58 551487
>> 09:13:14 516700
>> 09:13:30 513312
>> 09:13:45 512830
>> 09:14:02 58
>> 09:14:18 568233
>> 09:14:35 570158
>> 09:14:51 566269
>> 09:15:07 547389
>> 09:15:23 544203
>> 09:15:38 546395
>>
>> It's not ideal, as it seems to take 15-20 seconds to count them using lsof.
>>
>>
>>
> 
> Wait, never mind. I realized the per-process limits are what matters. I 
> checked, and nofile was set to 4096 for the relevant java process.
> 
> I did...
> 
> # prlimit --pid 8730 --nofile=16384:16384
> 
> That should give java some extra breathing room if the issue is max open 
> files, right?

From my experience you should see a different exception if you hit the
NOFILE limit: java.net.SocketException: Too many open files

But I've only seen that when opening or accepting a new connection, never
later when something is written to an already open socket.

To me a bad file descriptor sounds more like a closed socket. This
reminds me of database or HTTP-client connection pools that hand out
connections whose sockets are already closed. I think that is a plausible
suspect here because Mark wrote this happens on the first write to the socket.
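(Aside: if you do want to watch descriptor counts without a 15-20 second lsof pass, a per-process count via /proc is nearly instant. A Linux-only sketch; substitute the Tomcat JVM's pid for the default:)

```shell
pid=${1:-$$}   # defaults to this shell; pass the Tomcat JVM pid instead
# Open descriptors for just that process (far cheaper than system-wide lsof):
ls "/proc/$pid/fd" | wc -l
# The limit the process is actually subject to (this is what prlimit changes):
grep 'Max open files' "/proc/$pid/limits"
```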

- Stefan




Can Tomcat 9 be FIPS compliant without OpenSSL?

2020-11-05 Thread Avik Ray
Dear team,
Sending this query again after subscribing to the mailing list. Sent
it originally 3 days back, but just saw an error response in the spam
folder asking to subscribe first.

We are using Tomcat 9.0.37 x64 on Windows Server 2016 OS and the NIO
connector with JSSE, without an underlying OpenSSL.

As per Tomcat 9 docs, the only mention of FIPS compliant operation I
see is in the config of APR lifecycle listener, with the expectation
of an underlying OpenSSL implementation that can be set to FIPS
enabled mode. Ref:
https://tomcat.apache.org/tomcat-9.0-doc/config/listeners.html

Is it possible to be FIPS compliant with the usage of Tomcat, without
the above setting? We were thinking of using BouncyCastle FIPS as the
underlying Java crypto provider instead of OpenSSL for multiple
reasons.
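For background on what we would be changing: JSSE picks its algorithms from the JCA providers registered with java.security.Security, in order. The sketch below just lists the current chain; the commented BouncyCastleFipsProvider line is an assumption on my part and would need the bc-fips jar on the classpath:

```java
import java.security.Provider;
import java.security.Security;

public class ListJcaProviders {
    public static void main(String[] args) {
        // A FIPS setup would insert the approved provider first, e.g.:
        //   Security.insertProviderAt(
        //       new org.bouncycastle.jcajce.provider.BouncyCastleFipsProvider(), 1);
        // (assumes bc-fips on the classpath)
        for (Provider p : Security.getProviders()) {
            System.out.println(p.getName());
        }
    }
}
```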

Besides what a Java crypto provider like BC-FIPS supplies, does Tomcat
have any other dependencies on the underlying stack that have a bearing
on FIPS compliance?

Please advise, as this is urgent for a FIPS compliance decision.

Thanks,
Avik Ray




Re: Can Tomcat 9 be FIPS compliant without OpenSSL?

2020-11-05 Thread Martin Grigorov
Hi,

On Fri, Nov 6, 2020 at 8:57 AM Avik Ray  wrote:

> Dear team,
> Sending this query again after subscribing to the mailing list. Sent
> it originally 3 days back, but just saw an error response in the spam
> folder asking to subscribe first.
>
> We are using Tomcat 9.0.37 x64 on Windows Server 2016 OS and the NIO
> connector with JSSE, without an underlying OpenSSL.
>
> As per Tomcat 9 docs, the only mention of FIPS compliant operation I
> see is in the config of APR lifecycle listener, with the expectation
> of an underlying OpenSSL implementation that can be set to FIPS
> enabled mode. Ref:
> https://tomcat.apache.org/tomcat-9.0-doc/config/listeners.html
>
> Is it possible to be FIPS compliant with the usage of Tomcat, without
> the above setting? We were thinking of using BouncyCastle FIPS as the
> underlying Java crypto provider instead of OpenSSL for multiple
> reasons.
>
> Besides what a Java crypto provider like BC-FIPS supplies, does Tomcat
> have any other dependencies on the underlying stack that have a bearing
> on FIPS compliance?
>
> Please advise, as this is urgent for a FIPS compliance decision.
>

Please check the README of this project -
https://github.com/amitlpande/tomcat-9-fips
Amit Pande recently shared it here at users@.

Regards,
Martin


>
> Thanks,
> Avik Ray
>