Re: RFR: 8322420: [Linux] cgroup v2: Limits in parent nested control groups are not detected [v19]

2024-07-30 Thread Jan Kratochvil
> The testcase requires root permissions.
> 
> Fix by Severin Gehwolf.
> Testcase by Jan Kratochvil.

Jan Kratochvil has updated the pull request incrementally with two additional 
commits since the last revision:

 - Inline adjust_controller() twice
 - Revert "Unify 4 copies of adjust_controller()"
   
   This reverts commit 77a81d07d74c8ae9bf34bfd8df9bcaca451ede9a.

-

Changes:
  - all: https://git.openjdk.org/jdk/pull/17198/files
  - new: https://git.openjdk.org/jdk/pull/17198/files/7d954ea3..4aec7a6e

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=jdk&pr=17198&range=18
 - incr: https://webrevs.openjdk.org/?repo=jdk&pr=17198&range=17-18

  Stats: 202 lines in 6 files changed: 109 ins; 87 del; 6 mod
  Patch: https://git.openjdk.org/jdk/pull/17198.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/17198/head:pull/17198

PR: https://git.openjdk.org/jdk/pull/17198


Re: RFR: 8322420: [Linux] cgroup v2: Limits in parent nested control groups are not detected [v15]

2024-07-30 Thread Jan Kratochvil
On Mon, 29 Jul 2024 05:03:22 GMT, Jan Kratochvil  
wrote:

>> src/hotspot/os/linux/cgroupUtil_linux.cpp line 64:
>> 
>>> 62: return cpu->adjust_controller(cpu_total);
>>> 63:   }
>>> 64:   return cpu;
>> 
>> I guess an alternative - and maybe more readable solution - would be to 
>> inline `cpu->adjust_controller()` and `mem->adjust_controller()` code here. 
>> We have cg version agnostic api to query the limits. We'd just need 
>> accessors for `cgroup_path()` and a setter, `set_cgroup_path()` in 
>> `CgroupCpuController/CgroupMemoryController` impls.
>
> Currently (before my refactoring) the same code is duplicated 4 times. Do 
> I understand the plan correctly that it should still be duplicated, just 2 
> times? Without the duplication I do not see how to inline it.

I have committed something, not sure if it is what was intended.
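For readers following the thread, the adjustment under discussion can be sketched roughly as follows. This is a hypothetical Java sketch, not the actual HotSpot C++ code; `lowestLimit` and `limitAt` are made-up names:

```java
import java.util.OptionalLong;
import java.util.function.Function;

public class CgroupLimitWalk {
    // Walk from the process's cgroup path up to the root and keep the most
    // restrictive limit found, so that limits set in parent nested control
    // groups are not missed. limitAt returns the limit configured at one
    // level, or empty when that level sets none ("max" in cgroup v2).
    static long lowestLimit(String cgroupPath, Function<String, OptionalLong> limitAt) {
        long lowest = Long.MAX_VALUE;
        String path = cgroupPath;
        while (true) {
            OptionalLong limit = limitAt.apply(path);
            if (limit.isPresent()) lowest = Math.min(lowest, limit.getAsLong());
            int slash = path.lastIndexOf('/');
            if (slash <= 0) break; // reached the root
            path = path.substring(0, slash);
        }
        return lowest;
    }

    public static void main(String[] args) {
        // Pretend "/a" sets memory.max=100 and the leaf "/a/b/c" sets none:
        Function<String, OptionalLong> limits = p ->
                p.equals("/a") ? OptionalLong.of(100) : OptionalLong.empty();
        System.out.println(lowestLimit("/a/b/c", limits)); // 100
    }
}
```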

-

PR Review Comment: https://git.openjdk.org/jdk/pull/17198#discussion_r1696483665


Re: RFR: 8336032: Enforce immutability of Lists used by ClassFile API

2024-07-30 Thread Adam Sotona
On Mon, 29 Jul 2024 19:36:31 GMT, Chen Liang  wrote:

> In #20241, it's noted that `ClassFile.verify` can return a mix-and-match of 
> mutable and immutable lists.
> 
> Though the mutability of the list returned by `verify` has no particular 
> importance, as the mutable list is constructed on every call and does not 
> mutate the models, there are a few scenarios where it matters. Since the 
> ClassFile API is designed with immutability in mind, we should change all 
> lists in ClassFile API to be immutable.
> 
> Change summary:
> - Critical scenarios where ClassFile API stores mutable objects:
>   - `ClassSignature.typeParameters` - keeps user mutable list
>   - `CompoundElement.elementList` - buffered models expose the underlying 
> mutable list
>   - `RuntimeInvisibleParameterAnnotationsAttribute` (and Visible) - keeps 
> users' nested mutable lists in an immutable list
>   - `StackMapFrameInfo.locals/stack` - keeps user mutable lists
> - Optional scenarios to return immutable for good practice
>   - `ClassFile.verify` - mutable for full verified results
>   - `CompoundElement.elementList` - safe mutable for bound models
> - Fail fast on user null `Attribute`s in `BoundAttribute.readAttributes` to 
> prevent unmodifiable list containing null
> 
> I have also checked if we should stick with null-hostile `List.of` lists 
> instead of `Collections.unmodifiableList` lists; we are using 
> `unmodifiableList` (extensively in Signature parsing, for example) or other 
> ad-hoc immutable lists, so it is somewhat impractical and not meaningful to 
> enforce null-hostility. (See the list in JBS)
> 
> These use sites are too sporadic so I made no unit tests.
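As background for the `List.of` vs `Collections.unmodifiableList` trade-off weighed above, a small standalone illustration (not code from the PR; `ImmutableListDemo` is a made-up name):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ImmutableListDemo {
    public static void main(String[] args) {
        List<String> user = new ArrayList<>(List.of("a", "b"));

        // Collections.unmodifiableList is a live, read-only *view*: mutations
        // of the backing list still show through, so wrapping alone does not
        // make stored state immutable.
        List<String> view = Collections.unmodifiableList(user);
        // List.copyOf takes a defensive snapshot, detached from the original.
        List<String> copy = List.copyOf(user);

        user.add("c");
        System.out.println(view.size()); // 3 -- tracks the mutable list
        System.out.println(copy.size()); // 2 -- unaffected by the mutation

        // List.copyOf is also null-hostile, unlike unmodifiableList, which is
        // part of the trade-off discussed in the PR text.
        List<String> withNull = new ArrayList<>();
        withNull.add(null);
        System.out.println(Collections.unmodifiableList(withNull).size()); // 1
        try {
            List.copyOf(withNull);
        } catch (NullPointerException e) {
            System.out.println("List.copyOf rejects null elements");
        }
    }
}
```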

Nice cleanup, thank you.

-

Marked as reviewed by asotona (Reviewer).

PR Review: https://git.openjdk.org/jdk/pull/20380#pullrequestreview-2207025233


Re: RFR: 8337219: AccessFlags factories do not require necessary arguments [v2]

2024-07-30 Thread Adam Sotona
On Mon, 29 Jul 2024 23:30:48 GMT, Chen Liang  wrote:

>> Removes 6 `AccessFlags` factories that do not take class-file versions as 
>> its arguments.
>> 
>> `AccessFlags` is a wrapper around a bit mask to support modifier streaming 
>> in ClassFile API. It additionally supports advanced validation based on 
>> location.
>> 
>> However, as class file versions evolve, we may also need a class file 
>> version argument to ensure that the flags are correctly constructed. For 
>> example, a pre-valhalla class modifier without `ACC_SUPER` should not be 
>> interpreted as a value class. The current factories cannot find good default 
>> class file versions, and if they always assume latest, they will fail in 
>> this scenario.
>> 
>> As a result, we should remove these 6 factories; note that users can still 
>> set the flags via `XxxBuilder::withFlags` with either `int` or 
>> `AccessFlag...` flags. In contrast, these builder APIs can fetch the 
>> previously-passed class file versions, and correctly validate or interpret 
>> these flags. Same story goes for parsing, which can also construct the right 
>> flags with available information.
>> 
>> This enables us to add methods to interpret the logical flags with 
>> version-specific information. If there's need, we can always add a new 
>> `AccessFlags.of(int, AccessFlag.Location, ClassFileFormatVersion)` factory, 
>> given the flexibility from this removal.
>
> Chen Liang has updated the pull request with a new target base due to a merge 
> or a rebase. The pull request now contains three commits:
> 
>  - Reapply import changes after merge
>  - Merge branch 'master' of https://github.com/openjdk/jdk into 
> fix/accessflags-factory
>  - 8337219: AccessFlags factories do not require necessary arguments
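For context on why a raw flag mask cannot be interpreted without extra information, here is a small illustration using the existing `java.lang.reflect.AccessFlag` API (not the ClassFile API factories under review): the same bit pattern decodes differently depending on location, and the PR argues the class-file version matters in the same way.

```java
import java.lang.reflect.AccessFlag;
import java.util.Set;

public class FlagLocationDemo {
    public static void main(String[] args) {
        // Bit 0x0020 is ACC_SUPER on a class but ACC_SYNCHRONIZED on a
        // method, so decoding 0x0021 (0x0001 | 0x0020) needs the location.
        Set<AccessFlag> asClass =
                AccessFlag.maskToAccessFlags(0x0021, AccessFlag.Location.CLASS);
        Set<AccessFlag> asMethod =
                AccessFlag.maskToAccessFlags(0x0021, AccessFlag.Location.METHOD);
        System.out.println(asClass);  // contains PUBLIC and SUPER
        System.out.println(asMethod); // contains PUBLIC and SYNCHRONIZED
    }
}
```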

Marked as reviewed by asotona (Reviewer).

-

PR Review: https://git.openjdk.org/jdk/pull/20341#pullrequestreview-2207034646


RFR: 8336462: ConcurrentSkipListSet Javadoc incorrectly warns about size method complexity

2024-07-30 Thread Viktor Klang
Removes some of the old wording around the algorithmic complexity of 
ConcurrentSkipListSet::size() while still retaining the warning around the 
accuracy of the returned result.

-

Commit messages:
 - Improving the JavaDoc of ConcurrentSkipListSet::size() to reflect current 
reality

Changes: https://git.openjdk.org/jdk/pull/20388/files
  Webrev: https://webrevs.openjdk.org/?repo=jdk&pr=20388&range=00
  Issue: https://bugs.openjdk.org/browse/JDK-8336462
  Stats: 8 lines in 1 file changed: 0 ins; 5 del; 3 mod
  Patch: https://git.openjdk.org/jdk/pull/20388.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/20388/head:pull/20388

PR: https://git.openjdk.org/jdk/pull/20388


Re: RFR: 8336462: ConcurrentSkipListSet Javadoc incorrectly warns about size method complexity

2024-07-30 Thread Viktor Klang
On Tue, 30 Jul 2024 09:24:12 GMT, Viktor Klang  wrote:

> Removes some of the old wording around the algorithmic complexity of 
> ConcurrentSkipListSet::size() while still retaining the warning around the 
> accuracy of the returned result.

@DougLea Please let me know what you think about this change. 👍

-

PR Comment: https://git.openjdk.org/jdk/pull/20388#issuecomment-2257892314


Re: RFR: 8336462: ConcurrentSkipListSet Javadoc incorrectly warns about size method complexity

2024-07-30 Thread Doug Lea
On Tue, 30 Jul 2024 09:24:12 GMT, Viktor Klang  wrote:

> Removes some of the old wording around the algorithmic complexity of 
> ConcurrentSkipListSet::size() while still retaining the warning around the 
> accuracy of the returned result.

Yes, thanks for fixing wording that should have been updated a long time ago.

-

PR Comment: https://git.openjdk.org/jdk/pull/20388#issuecomment-2258162995


Integrated: 8336315: tools/jpackage/windows/WinChildProcessTest.java Failed: Check is calculator process is alive

2024-07-30 Thread Vanitha B P
On Wed, 24 Jul 2024 12:20:05 GMT, Vanitha B P  wrote:

> tools/jpackage/windows/WinChildProcessTest.java was failing intermittently, 
> fixed the issue and changes are tested.

This pull request has now been integrated.

Changeset: 2c9fd901
Author:    Vanitha B P
Committer: Alexey Semenyuk
URL:       https://git.openjdk.org/jdk/commit/2c9fd9016f4675448a62380ff2b86533020e690f
Stats: 20 lines in 2 files changed: 5 ins; 1 del; 14 mod

8336315: tools/jpackage/windows/WinChildProcessTest.java Failed: Check is 
calculator process is alive

Reviewed-by: asemenyuk, almatvee

-

PR: https://git.openjdk.org/jdk/pull/20312


Re: Stream Gatherers (JEP 473) feedback

2024-07-30 Thread Anthony Vanelverdinghe
July 29, 2024 at 8:08 PM, "Viktor Klang"  wrote:
> 
> Hi Anthony,

Hi Viktor

> Thank you for your patience, and for providing feedback, it is always much 
> appreciated.
> 
> >When writing factory methods for Gatherers, there's sometimes a
> degenerate case that requires returning a no-op Gatherer. So I'd like a
> way to mark a no-op Gatherer as such, allowing the Stream implementation
> to recognize and eliminate it from the pipeline. One idea is to add
> Gatherer.defaultIntegrator(), analogous to the other default… methods.
> Another is to add Gatherers.identity(), analogous to Function.identity().
> 
> I contemplated adding that but in the end I decided I didn't want to add it 
> for the sake of adding it,
> but rather adding it in case it was deemed necessary.
> 
> Do you have a concrete use-case (code) that you could share?

The use case is a time series, which has methods to return a Stream of data 
points, `record DataPoint(Instant, BigDecimal)`. In DataPoint, there are 
several Gatherer factory methods, one of which is 
`Gatherer<DataPoint, ?, DataPoint> withInterpolationAt(NavigableSet<Instant> instants)`. If 
`instants.isEmpty()`, it returns a no-op Gatherer. In general, I guess most 
factory methods with a collection parameter (and compatible type arguments for 
T and R) will have a degenerate case like this when the collection is empty. 
`<T> Gatherer<T, ?, T> append(T... elements)` would be another example.
`identity()` would also allow an optimized `andThen` implementation, simply 
returning its argument. And when uncomposed, the Stream library could eliminate 
the `gather` stage, allowing characteristics to propagate in this case. So 
`treeSet.stream().gather(identity()).sorted().distinct().toList()` could be 
optimized to `treeSet.stream().toList()`.
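For what it's worth, a no-op gatherer along these lines is already expressible with the public API. A sketch: `identity()` here is a made-up name (`Gatherers.identity()` does not exist in the JDK), and the Stream implementation would still need to learn to recognize it for the pipeline elimination described above:

```java
import java.util.List;
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class IdentityGatherer {
    // A no-op gatherer: pushes every element downstream unchanged.
    static <T> Gatherer<T, ?, T> identity() {
        return Gatherer.of(Gatherer.Integrator.<Void, T, T>ofGreedy(
                (state, element, downstream) -> downstream.push(element)));
    }

    public static void main(String[] args) {
        List<Integer> out = Stream.of(1, 2, 3).gather(identity()).toList();
        System.out.println(out); // [1, 2, 3]
    }
}
```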

> >Sometimes a factory method returns a Gatherer that only works correctly
> if the upstream has certain characteristics, for example
> Spliterator.SORTED or Spliterator.DISTINCT.
> 
> Do you have a concrete use-case (code) that you could share?

All the Streams that are returned from TimeSeries are well-formed: their data 
points are sorted and distinct. And the Gatherer factory methods in DataPoint 
assume that their upstreams have these characteristics. However, we can't 
prevent clients from constructing malformed Streams and invoking the Gatherers 
on them, which will give erroneous results. Now the Gatherer could keep track 
of the previous element and verify that the current element is greater than the 
previous. But the idea was to eliminate this bookkeeping for well-formed 
Streams, while still preventing erroneous results.
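The bookkeeping variant mentioned here (tracking the previous element and rejecting out-of-order input) looks roughly like this sketch; `verifySorted` is a made-up name, not anything from the JDK:

```java
import java.util.stream.Gatherer;
import java.util.stream.Stream;

public class SortedCheck {
    // Fail fast if the upstream is not strictly increasing (i.e. not
    // sorted-and-distinct); otherwise pass elements through unchanged.
    static <T extends Comparable<T>> Gatherer<T, ?, T> verifySorted() {
        return Gatherer.ofSequential(
                () -> new Object[1], // mutable holder for the previous element
                Gatherer.Integrator.<Object[], T, T>ofGreedy((state, element, downstream) -> {
                    @SuppressWarnings("unchecked")
                    T prev = (T) state[0];
                    if (prev != null && prev.compareTo(element) >= 0)
                        throw new IllegalStateException("upstream not sorted/distinct");
                    state[0] = element;
                    return downstream.push(element);
                }));
    }

    public static void main(String[] args) {
        System.out.println(Stream.of(1, 2, 3).gather(verifySorted()).toList()); // [1, 2, 3]
        try {
            Stream.of(2, 1).gather(verifySorted()).toList();
        } catch (IllegalStateException expected) {
            System.out.println("rejected out-of-order input");
        }
    }
}
```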

> >One idea is to add methods
> like Gatherers.sorted() and Gatherers.distinct(), where the Stream
> implementation would be able to recognize and eliminate these from the
> pipeline if the upstream already has these characteristics. That way
> we'd be able to write `return Gatherers.sorted().andThen(…);`. Another
> idea is to provide a Gatherer with a way to inspect the upstream
> characteristics. If the upstream is missing the required
> characteristic(s), it could then throw an IllegalStateException.

I figured the latter idea isn't useful: the upstream might be sorted, even 
though it doesn't have the sorted characteristic. So it would be harsh for the 
Gatherer to throw an exception in that case.

> For a rather long time Gatherer had characteristics, however,
> what I noticed is that given composition of Gatherers what ended up happening
> almost always was that the combination of characteristics added overhead and 
> devolved into the empty set
> real fast.
> 
> Also, when it comes to things like sorted() and distinct(), they (by 
> necessity) have to get processed in full
> before emitting anything downstream, which creates a lot of extra memory 
> allocation and doesn't lend themselves all that well to any depth-first 
> streaming.

In the given use case, well-formed Streams would already have the sorted and 
distinct characteristics. So the idea was that the sorted() and distinct() 
gatherers would be eliminated from the pipeline entirely in those cases. Only 
malformed Streams would have to pay the cost of sorted() and distinct(), but 
that'd be an acceptable price for them to pay.

That being said, I hadn't considered cases where an intermediate stage in the 
pipeline would not propagate the characteristics. And in cases where the 
intermediate stage doesn't affect the characteristics, it would actually be 
worse to use something like `Gatherers.sorted().andThen(…)`, instead of just 
keeping track of the previous element and throwing an IllegalStateException if 
necessary.

> >The returns clause of Gatherer.Integrator::integrate just states "true
> if subsequent integration is desired, false if not". In particular, it
> doesn't document the behavior I'm observing, that returning false also
> causes downstream to reject any further output elements.
> 
> Do you have a test case? (There was a bug fixed in this area after 22 was 
> released, so you ma

Re: RFR: 8322256: Define and document GZIPInputStream concatenated stream semantics [v8]

2024-07-30 Thread Lance Andersen
On Sat, 27 Jul 2024 15:00:51 GMT, Archie Cobbs  wrote:

>> `GZIPInputStream` supports reading data from multiple concatenated GZIP data 
>> streams since [JDK-4691425](https://bugs.openjdk.org/browse/JDK-4691425). In 
>> order to do this, after a GZIP trailer frame is read, it attempts to read a 
>> GZIP header frame and, if successful, proceeds onward to decompress the new 
>> stream. If the attempt to decode a GZIP header frame fails, or happens to 
>> trigger an `IOException`, it just ignores the trailing garbage and/or the 
>> `IOException` and returns EOF.
>> 
>> There are several issues with this:
>> 
>> 1. The behaviors of (a) supporting concatenated streams and (b) ignoring 
>> trailing garbage are not documented, much less precisely specified.
>> 2. Ignoring trailing garbage is dubious because it could easily hide errors 
>> or other data corruption that an application would rather be notified about. 
>> Moreover, the API claims that a `ZipException` will be thrown when corrupt 
>> data is read, but obviously that doesn't happen in the trailing garbage 
>> scenario (so N concatenated streams where the last one has a corrupted 
>> header frame is indistinguishable from N-1 valid streams).
>> 3. There's no way to create a `GZIPInputStream` that does _not_ support 
>> stream concatenation.
>> 
>> On the other hand, `GZIPInputStream` is an old class with lots of existing 
>> usage, so it's important to preserve the existing behavior, warts and all 
>> (note: my definition of "existing behavior" here includes the bug fix in 
>> [JDK-7036144](https://bugs.openjdk.org/browse/JDK-7036144)).
>> 
>> So this patch adds a new constructor that takes two new parameters 
>> `allowConcatenation` and `allowTrailingGarbage`. The legacy behavior is 
>> enabled by setting both to true; otherwise, they do what they sound like. In 
>> particular, when `allowTrailingGarbage` is false, then the underlying input 
>> must contain exactly one (if `allowConcatenation` is false) or exactly N (if 
>> `allowConcatenation` is true) concatenated GZIP data streams, otherwise an 
>> exception is guaranteed.
>
> Archie Cobbs has updated the pull request incrementally with two additional 
> commits since the last revision:
> 
>  - Refactor to eliminate "lenient" mode (still failing test: GZIPInZip.java).
>  - Add GZIP input test for various gzip(1) command outputs.

Consider the following simple gzip test

gz % cat batman.txt 
I am batman
gz % cat hello.txt 
hello
gz % cat robin.txt 
Robin Here
gz % cat Joker.txt  
I am the Joker

gz % gzip -c batman.txt hello.txt >> bats.gz
gz % gunzip -c bats 
I am batman
hello

gz % cp bats.gz joker.gz   

gz % cat Joker.txt >> joker.gz   
gz % gunzip -c joker 
I am batman
hello
gunzip: joker.gz: trailing garbage ignored

gz % cp joker.gz badRobin.gz 
gz % gzip -c robin.txt >> badRobin.gz 
gz % gunzip -c badRobin 
I am batman
hello
gunzip: badRobin.gz: trailing garbage ignored

Here is the hexdump of the badRobin.gz

gz % hexdump -C badRobin.gz 
0000  1f 8b 08 08 81 13 a9 66  00 03 62 61 74 6d 61 6e  |.......f..batman|
0010  2e 74 78 74 00 f3 54 48  cc 55 48 4a 2c c9 4d cc  |.txt..TH.UHJ,.M.|
0020  e3 02 00 f0 49 dd 72 0c  00 00 00 1f 8b 08 08 8f  |....I.r.........|
0030  13 a9 66 00 03 68 65 6c  6c 6f 2e 74 78 74 00 cb  |..f..hello.txt..|
0040  48 cd c9 c9 e7 02 00 20  30 3a 36 06 00 00 00 49  |H...... 0:6....I|
0050  20 61 6d 20 74 68 65 20  4a 6f 6b 65 72 0a 1f 8b  | am the Joker...|
0060  08 08 66 0f a9 66 00 03  72 6f 62 69 6e 2e 74 78  |..f..f..robin.tx|
0070  74 00 0b ca 4f ca cc 53  f0 48 2d 4a e5 02 00 b3  |t...O..S.H-J....|
0080  74 fc c1 0b 00 00 00                              |t......|
0087


The following program will use GZIPInputStream and GzipCompressorInputStream to 
access badRobin.gz

package gzip;

import org.apache.commons.compress.compressors.gzip.GzipCompressorInputStream;
import org.testng.annotations.Test;

import java.io.FileInputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;

public class GZipInputStreamTest {
    @Test
    public void openWithGZipInputStream() throws IOException {
        String f = "gz/badRobin.gz";
        try (var is = new FileInputStream(f); var gis = new GZIPInputStream(is)) {
            var bytes = gis.readAllBytes();
            System.out.printf("contents: %s%n", new String(bytes));
        }
    }

    @Test
    public void openWithGzipCompressorInputStream() throws IOException {
        String f = "gz/badRobin.gz";
        try (var is = new FileInputStream(f);
             var giz = new GzipCompressorInputStream(is, true)) {
            var bytes = giz.readAllBytes();
            System.out.printf("contents: %s%n", new String(bytes));
        }
    }
}

Integrated: 8336032: Enforce immutability of Lists used by ClassFile API

2024-07-30 Thread Chen Liang
On Mon, 29 Jul 2024 19:36:31 GMT, Chen Liang  wrote:

> In #20241, it's noted that `ClassFile.verify` can return a mix-and-match of 
> mutable and immutable lists.
> 
> Though the mutability of the list returned by `verify` has no particular 
> importance, as the mutable list is constructed on every call and does not 
> mutate the models, there are a few scenarios where it matters. Since the 
> ClassFile API is designed with immutability in mind, we should change all 
> lists in ClassFile API to be immutable.
> 
> Change summary:
> - Critical scenarios where ClassFile API stores mutable objects:
>   - `ClassSignature.typeParameters` - keeps user mutable list
>   - `CompoundElement.elementList` - buffered models expose the underlying 
> mutable list
>   - `RuntimeInvisibleParameterAnnotationsAttribute` (and Visible) - keeps 
> users' nested mutable lists in an immutable list
>   - `StackMapFrameInfo.locals/stack` - keeps user mutable lists
> - Optional scenarios to return immutable for good practice
>   - `ClassFile.verify` - mutable for full verified results
>   - `CompoundElement.elementList` - safe mutable for bound models
> - Fail fast on user null `Attribute`s in `BoundAttribute.readAttributes` to 
> prevent unmodifiable list containing null
> 
> I have also checked if we should stick with null-hostile `List.of` lists 
> instead of `Collections.unmodifiableList` lists; we are using 
> `unmodifiableList` (extensively in Signature parsing, for example) or other 
> ad-hoc immutable lists, so it is somewhat impractical and not meaningful to 
> enforce null-hostility. (See the list in JBS)
> 
> These use sites are too sporadic so I made no unit tests.

This pull request has now been integrated.

Changeset: 6154a212
Author: Chen Liang
URL:    https://git.openjdk.org/jdk/commit/6154a2129ba505b7163a1998792296827a056750
Stats: 30 lines in 11 files changed: 17 ins; 1 del; 12 mod

8336032: Enforce immutability of Lists used by ClassFile API

Reviewed-by: asotona

-

PR: https://git.openjdk.org/jdk/pull/20380


Integrated: 8337219: AccessFlags factories do not require necessary arguments

2024-07-30 Thread Chen Liang
On Thu, 25 Jul 2024 23:11:15 GMT, Chen Liang  wrote:

> Removes 6 `AccessFlags` factories that do not take class-file versions as its 
> arguments.
> 
> `AccessFlags` is a wrapper around a bit mask to support modifier streaming in 
> ClassFile API. It additionally supports advanced validation based on location.
> 
> However, as class file versions evolve, we may also need a class file version 
> argument to ensure that the flags are correctly constructed. For example, a 
> pre-valhalla class modifier without `ACC_SUPER` should not be interpreted as 
> a value class. The current factories cannot find good default class file 
> versions, and if they always assume latest, they will fail in this scenario.
> 
> As a result, we should remove these 6 factories; note that users can still 
> set the flags via `XxxBuilder::withFlags` with either `int` or 
> `AccessFlag...` flags. In contrast, these builder APIs can fetch the 
> previously-passed class file versions, and correctly validate or interpret 
> these flags. Same story goes for parsing, which can also construct the right 
> flags with available information.
> 
> This enables us to add methods to interpret the logical flags with 
> version-specific information. If there's need, we can always add a new 
> `AccessFlags.of(int, AccessFlag.Location, ClassFileFormatVersion)` factory, 
> given the flexibility from this removal.

This pull request has now been integrated.

Changeset: 93c19ac7
Author: Chen Liang
URL:    https://git.openjdk.org/jdk/commit/93c19ac73c2feb8d6191bc5da98b4a9c8e2b5590
Stats: 211 lines in 21 files changed: 71 ins; 80 del; 60 mod

8337219: AccessFlags factories do not require necessary arguments

Reviewed-by: asotona

-

PR: https://git.openjdk.org/jdk/pull/20341


Re: RFR: 8322256: Define and document GZIPInputStream concatenated stream semantics [v8]

2024-07-30 Thread Archie Cobbs
On Tue, 30 Jul 2024 17:35:33 GMT, Lance Andersen  wrote:

> Based on the above, I am reluctant to change the current behavior given it 
> appears to have been modeled after gzip/gunzip as well as WinZip.

That's a reasonable conclusion... and I have no problem with preserving the 
existing behavior if that's what we want. But in that case I would lobby that 
we should also provide some new way to configure a `GZIPInputStream` that 
guarantees reliable behavior.

The key question here is: "Exactly what current behavior of `new 
GZIPInputStream(in)` do we want to preserve?"

Let's start by assuming that we want your above test to pass. Putting that into 
words: "Given a single GZIP stream followed by trailing garbage, `new 
GZIPInputStream(in)` should successfully decode the GZIP stream and ignore the 
trailing garbage".

Note however that what `new GZIPInputStream(in)` currently provides is stronger 
than that:
1. Trailing garbage is ignored
2. Any `IOException` thrown while reading trailing garbage is ignored
3. Concatenated streams are automatically decoded

So we know we want to preserve 1. What about 2 and/or 3? Your thoughts?

My personal opinions:

* I think 2 is inherently bad and it should not be implemented in any variant
* I think 3 is not required _by default_, but one should be able to enable it 
somehow

If we were to accept those opinions (preserving only 1), then we would end up 
at the same place as `GzipCompressorInputStream`:
* Underlying `IOException`'s are never suppressed
* `new GZIPInputStream(in)` decodes only one GZIP stream and ignores any 
trailing garbage
* `new GZIPInputStream(in, true)` decodes concatenated streams; trailing 
garbage causes `IOException`
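The current default behavior (items 1 and 3 above) is easy to demonstrate with only the JDK and no files on disk. A self-contained sketch, not code from the PR; `ConcatGzipDemo` and its helpers are made-up names:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class ConcatGzipDemo {
    // Compress a string into a single GZIP member.
    static byte[] gzip(String s) throws IOException {
        var bout = new ByteArrayOutputStream();
        try (var gz = new GZIPOutputStream(bout)) {
            gz.write(s.getBytes());
        }
        return bout.toByteArray();
    }

    // Decode everything GZIPInputStream will give us.
    static String decode(byte[] data) throws IOException {
        try (var in = new GZIPInputStream(new ByteArrayInputStream(data))) {
            return new String(in.readAllBytes());
        }
    }

    public static void main(String[] args) throws IOException {
        var bout = new ByteArrayOutputStream();
        bout.write(gzip("I am batman\n"));          // member 1
        bout.write(gzip("hello\n"));                // member 2 (concatenation)
        bout.write("I am the Joker\n".getBytes());  // trailing garbage

        // Both members decode; the trailing garbage is silently ignored.
        System.out.print(decode(bout.toByteArray()));
    }
}
```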

-

PR Comment: https://git.openjdk.org/jdk/pull/18385#issuecomment-2258919532


Re: RFR: 8322256: Define and document GZIPInputStream concatenated stream semantics [v8]

2024-07-30 Thread Archie Cobbs
On Sat, 27 Jul 2024 15:00:51 GMT, Archie Cobbs  wrote:

>> `GZIPInputStream` supports reading data from multiple concatenated GZIP data 
>> streams since [JDK-4691425](https://bugs.openjdk.org/browse/JDK-4691425). In 
>> order to do this, after a GZIP trailer frame is read, it attempts to read a 
>> GZIP header frame and, if successful, proceeds onward to decompress the new 
>> stream. If the attempt to decode a GZIP header frame fails, or happens to 
>> trigger an `IOException`, it just ignores the trailing garbage and/or the 
>> `IOException` and returns EOF.
>> 
>> There are several issues with this:
>> 
>> 1. The behaviors of (a) supporting concatenated streams and (b) ignoring 
>> trailing garbage are not documented, much less precisely specified.
>> 2. Ignoring trailing garbage is dubious because it could easily hide errors 
>> or other data corruption that an application would rather be notified about. 
>> Moreover, the API claims that a `ZipException` will be thrown when corrupt 
>> data is read, but obviously that doesn't happen in the trailing garbage 
>> scenario (so N concatenated streams where the last one has a corrupted 
>> header frame is indistinguishable from N-1 valid streams).
>> 3. There's no way to create a `GZIPInputStream` that does _not_ support 
>> stream concatenation.
>> 
>> On the other hand, `GZIPInputStream` is an old class with lots of existing 
>> usage, so it's important to preserve the existing behavior, warts and all 
>> (note: my definition of "existing behavior" here includes the bug fix in 
>> [JDK-7036144](https://bugs.openjdk.org/browse/JDK-7036144)).
>> 
>> So this patch adds a new constructor that takes two new parameters 
>> `allowConcatenation` and `allowTrailingGarbage`. The legacy behavior is 
>> enabled by setting both to true; otherwise, they do what they sound like. In 
>> particular, when `allowTrailingGarbage` is false, then the underlying input 
>> must contain exactly one (if `allowConcatenation` is false) or exactly N (if 
>> `allowConcatenation` is true) concatenated GZIP data streams, otherwise an 
>> exception is guaranteed.
>
> Archie Cobbs has updated the pull request incrementally with two additional 
> commits since the last revision:
> 
>  - Refactor to eliminate "lenient" mode (still failing test: GZIPInZip.java).
>  - Add GZIP input test for various gzip(1) command outputs.

Another (more conservative) possibility is to preserve both 1 ("Trailing 
garbage is ignored") and 3 ("Concatenated streams are automatically decoded") 
in the default configuration.

Then basically all we would be changing is no longer suppressing 
`IOException`'s.

And then - as a separate remaining question - if we wanted to also provide more 
control there could be new constructor(s) to control concatenation and/or 
tolerance for trailing garbage.

(In all cases, I think using `mark()`/`reset()` (when available) to make 
trailing garbage detection precise is a good idea.)

-

PR Comment: https://git.openjdk.org/jdk/pull/18385#issuecomment-2259012402


Re: RFR: 8336043: Add quality of implementation discussion to Object.{equals, toString, hashCode} [v4]

2024-07-30 Thread Joe Darcy
> First pass at adding some quality of implementation discussions around the 
> overridable methods of Object.

Joe Darcy has updated the pull request with a new target base due to a merge or 
a rebase. The incremental webrev excludes the unrelated changes brought in by 
the merge/rebase. The pull request contains five additional commits since the 
last revision:

 - Merge branch 'master' into JDK-8336043
 - Respond to review feedback.
 - Merge branch 'master' into JDK-8336043
 - Appease jcheck.
 - JDK-8336043: Add quality of implementation discussion to Object.{equals, 
toString, hashCode}

-

Changes:
  - all: https://git.openjdk.org/jdk/pull/20128/files
  - new: https://git.openjdk.org/jdk/pull/20128/files/95f890bf..80cabee0

Webrevs:
 - full: https://webrevs.openjdk.org/?repo=jdk&pr=20128&range=03
 - incr: https://webrevs.openjdk.org/?repo=jdk&pr=20128&range=02-03

  Stats: 12515 lines in 364 files changed: 7143 ins; 4052 del; 1320 mod
  Patch: https://git.openjdk.org/jdk/pull/20128.diff
  Fetch: git fetch https://git.openjdk.org/jdk.git pull/20128/head:pull/20128

PR: https://git.openjdk.org/jdk/pull/20128