Hi Tony,
Thanks for pointing this out. Your observation is right.
We looked at this problem from a different angle when we defined the
internal consistency of a filter value as out of scope [1]. Our concern
was that
a filter value may render services unusable because of dependencies
(e.g. Signature.SHA1withECDSA depends on MessageDigest.SHA-1, and
filtering out the latter would render the former unusable). Your point
of view is also valid: there could be a difference in behavior when
comparing the effects of a filter value on services of the same type and
algorithm but implemented by different security providers.
Tracking dependencies between services or providers transparently
—without an explicit declaration or API— is a hard problem in our view.
Even without the Security Providers Filter, a security provider may
depend on others to work, and 3rd party providers are not necessarily an
exception. For example, SunEC depends on a provider implementing
MessageDigest.SHA-1, such as SUN, for SHA1withECDSA to work. These
dependencies are not always made explicit through JCA API calls: a
service may use other services by directly linking to their classes.
OpenJDK is agnostic when
it comes to configuration consistency and requires users to make an
informed decision when installing security providers.
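To illustrate the kind of implicit dependency we mean, here is a minimal,
purely hypothetical sketch (the class is ours and does not reflect SunEC's
actual code) of a signature implementation that quietly relies on a
MessageDigest lookup:

  import java.security.MessageDigest;
  import java.security.NoSuchAlgorithmException;

  // Hypothetical service implementation with an implicit dependency: if a
  // filter blocks MessageDigest.SHA-1, construction fails here and the
  // Signature service becomes unusable even when the Signature itself is
  // allowed by the filter.
  final class Sha1WithEcdsaSketch {
      private final MessageDigest digest;

      Sha1WithEcdsaSketch() throws NoSuchAlgorithmException {
          this.digest = MessageDigest.getInstance("SHA-1");
      }

      void update(byte[] data) {
          digest.update(data);
      }
  }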
The Security Providers Filter is not different in this regard. The
Signature.SHA1withECDSA service may use a MessageDigest.SHA-1
underneath, or Cipher.AES an AlgorithmParameters.AES one, but we don't
know upfront as it depends on each implementation. Other relationships
such as KeyStore.PKCS12 depending on Mac.PBE and Cipher.PBE services are
more complex and less obvious. Even beyond security providers, the
application itself may be unusable for a filter value. It's difficult to
reason or make assumptions about the interaction between services of
different types, or between services and the application itself, and
this is why we proposed to leave filter consistency on the user side.
Of the four allow/block combinations of Signature.SHA1withECDSA
and MessageDigest.SHA-1, we see three as clearly motivated: allow both,
block both, and allow SHA-1 for MessageDigest but not for Signature. The
combination of blocking SHA-1 for MessageDigest but allowing it for
Signature does not seem to reflect a real case. In general, our
impression is that combinations in which a building block is blocked but
a related higher-level service is allowed do not reflect real cases.
With that said, we also considered temporarily bypassing the filter
inside each service implementation. In this scenario, allowing
Signature.SHA1withECDSA in the filter would be enough for the service to
work, irrespective of MessageDigest.SHA-1 being blocked. We can think of
this as a specific instance of filtering by use or caller, as Sean
suggested some time ago [2]. From a technical point of view, this idea
might be feasible but we have more general concerns that are worth
discussing.
One relatively efficient way to implement this bypassing concept is to
have a thread-local value or a ThreadTracker instance that is set while
executing inside a service instance method or constructor. This value
would be checked at the time of getting a service and, if set, the
service would be allowed without any filter check. For example, a
Signature.getInstance() call would return a Signature instance such
that, for each of its instance methods and SPI constructor, the
thread-local value or the ThreadTracker is set and temporarily
guarantees access to all MessageDigest services. Thus, you have a
working Signature.SHA1withECDSA service even if MessageDigest.SHA-1 is
blocked. Other (more sophisticated) approaches such as performing stack
walks would imply a higher performance penalty.
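As a rough sketch of what we have in mind (FilterBypass and its methods are
hypothetical names of ours, not an existing JDK API), the thread-local idea
could look roughly like this:

  import java.util.concurrent.Callable;

  // Hypothetical sketch of the bypass concept. A real implementation would
  // live inside the JCA engine classes; this only shows the thread-local
  // mechanism, not where it would be wired in.
  final class FilterBypass {
      private static final ThreadLocal<Boolean> ACTIVE =
              ThreadLocal.withInitial(() -> Boolean.FALSE);

      // Checked at service lookup time: if set, the filter check is skipped.
      static boolean isActive() {
          return ACTIVE.get();
      }

      // Wraps an SPI constructor or instance method so that nested
      // getInstance() calls on this thread temporarily bypass the filter.
      static <T> T runBypassed(Callable<T> action) throws Exception {
          ACTIVE.set(Boolean.TRUE);
          try {
              return action.call();
          } finally {
              ACTIVE.set(Boolean.FALSE);
          }
      }
  }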
However, we are concerned about the consistency and security
implications of this concept:
1) What does it mean for a KeyStore or an SSLContext service to ignore
the filter internally? This could be a complete bypass of the filter in
practical terms. On the other hand, if we treat some service types
differently in terms of bypassing the filter, how confusing would it be
for the user? For example, Signature services would use the filter
bypass concept but KeyStore or 3rd party service types would not.
2) How do we enforce filters across all JCA APIs and what guarantees do
we have? There would be open doors in each security provider to use
algorithms, service types or even security providers that should not be
used. For example, in a FIPS scenario we prefer a failure rather than a
silent fall back to non-FIPS crypto. In a CRIU-snapshot scenario we
prefer a failure rather than the use of SecureRandom.
3) What run-time costs would we add? Not a Security Manager stack
walk, but still some overhead.
4) What would happen if a security provider —possibly from a 3rd party—
implements services that perform work in parallel, on other threads? The
thread-local mechanism might not be enough for these cases.
We are open to having this discussion but, based on the elements
previously described, we lean more towards keeping the filter plain and
simple. Adding a bypassing mechanism or checking internal consistency
only addresses a portion of the problem at the cost of increased
complexity.
Regards,
Martin.-
--
[1] -
https://mail.openjdk.org/pipermail/security-dev/2023-February/034554.html
[2] - https://mail.openjdk.org/pipermail/security-dev/2023-July/036540.html
On 10/26/23 01:33, Anthony Scarpino wrote:
Hi Martin,
Thanks for the additional motivation.
While thinking about this PR, I found that the filter can be
inconsistently applied between JDK and 3rd party providers. The java
providers use the JCA to build combined algorithms. For example, the
ECDSAwithSHA1 that comes from SunEC uses getInstance() for
MessageDigest.SHA-1. HmacSHA1 and PBEWithHmacSHA1AndAES_128 do the same.
If a filter only contains "*.MessageDigest.SHA-1", it will disable all
the combined algorithms for JDK providers that use SHA1. For 3rd party
providers, these algorithms would likely be enabled. A PKCS11 provider
that supports ECDSAwithSHA1 would use its internal SHA1, which the
filter would not affect.
I think the filters should be consistently applied to all providers. JDK
providers and any 3rd party providers that use this pluggable design
should not be more restricted. This inconsistency could lead to an
incorrect configuration. Have you noticed this in your testing? Can
you think of a way to avoid the inconsistency without changing all these
combined getInstance() calls?
thanks
Tony
On 10/6/23 9:55 AM, Martin Balao wrote:
Hi Tony,
Thanks for having a look at our proposal.
The main motivation for this enhancement is related to cryptographic
policy enforcement and, in particular, the following capabilities: 1)
enforcing that cryptographic services are provided by chosen security
providers only, and 2) allowing or disallowing selected algorithms or
service types across all Java Security APIs.
None of this is entirely new. With regard to capability #1, users can
already install or uninstall security providers, or rely on priorities
and algorithm shadowing. However, we deem this insufficient for the
purposes of policy enforcement, lacking in flexibility, and at risk of
introducing dependencies on implementation details. Some more details
are provided under the section "What is the current limitation?" of
the 8315487 ticket [1]. As for capability #2, there is partial support
currently: algorithms can be blocked for TLS or certificate path
validation uses but not across all JCA APIs. Thus, we share some of
the motivations that led to existing features but intend to have a
more powerful, comprehensive and flexible solution. As documented in
our proposal, both solutions were combined in a multi-layer model.
The FIPS case is interesting because it requires a combination of
capabilities #1 and #2. However, there are other cases that could
benefit from different policies. I have described some of them below,
providing a summary rationale for why a user might want to adopt the
given policy, a filter conforming to the proposal that would achieve
the desired outcome, and a comparison with how the same outcome might
be met (or potentially be hard/impossible to meet) with the status
quo. See Appendix #1.
As with most security properties, a specific configuration may render
an application completely or partially unusable, and require a
sysadmin/developer/security-expert to perform an assessment. This
effect may be a desired outcome and trigger a remediation action.
Other applications may react in a more resilient way and smoothly
adapt to the policy enforced: use cryptography from an allowed
security provider, skip the use of algorithms that are not allowed,
ask the user to take action, etc. Our concern is that the lack of
strong policy enforcement capabilities may lead to non-compliance
issues going unnoticed.
Existing security capabilities, such as installing or uninstalling
security providers, or even selecting a preferred provider per
algorithm, require knowing what these security providers implement and
what applications need to use. Our proposal allows better granularity
but is no different in terms of relying on public documentation or
sysadmin/developer/security-expert knowledge.
While we don't necessarily share the view of the syntax as hard to use
or error-prone, we concede that it leans more towards the expert UI
side of all security properties. We designed the syntax with the goals
of consistency, similarity to the serialization filter —to the extent
possible—, simplicity for trivial cases and power for complex ones. We
want to make sure that it's not only tailored to our needs today but
generic enough for other current or future uses. We tried to explain
the use cases and desirable properties underlying the proposed design,
but we would also like to know whether any aspect in particular
concerns you, and whether you have improvements to suggest that would
make it more accessible to less experienced users. We are open to
considering specification, implementation and/or documentation changes.
Thanks,
Martin.-
--
Appendix #1
1) A policy that only authorizes the storage of certificates and keys
in PKCS #11 devices, or in a specific instance managed by the
CentralKeysProvider security provider:
*.KeyStore.PKCS11; !*.KeyStore; *
or
CentralKeysProvider.KeyStore; !*.KeyStore; *
In this scenario, a system administrator is concerned about how
applications store sensitive cryptographic keys and intends to enforce
a centralized or more restricted management. This policy aims to
mitigate security risks and drawbacks associated with local file-based
key storage. In the event of a key update, if centralized management
is applied, applications have access to the latest key without any key
population hassle. While this policy imposes restrictions on key
storage, any security provider (including OpenJDK default ones) can
use these keys after retrieval. This latter observation is relevant
when, for example, PKCS #11 token devices with limited performance or
algorithm availability are used.
We deem this type of policy useful for scenarios where centralized key
management is feasible and desirable, or scenarios where keys are
stored in hardware devices.
Enforcing this policy without the Security Provider Filter would be
hard. While changing the default key store type by means of the
keystore.type security property is possible, that configuration does
not make other key store types unavailable. In addition, this security
property lets users choose a key store algorithm but not its provider.
Uninstalling security providers that offer unwanted KeyStore service
types is not always an option because other service types they offer
might be legitimately required. In other words, this option lacks
granularity. The only way to enforce a policy such as the one
described in this case is to audit the application and library
sources, configurations or logs and check how keys are managed. This
approach would require manual actions and rechecks after each
application or library change.
The Security Provider Filter makes the enforcement of this policy
easy, even under the circumstances of an application or library
update, or after the deployment of a new application. The policy can
also be updated to include other key store algorithms, security
providers or combinations of both.
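As a deployment sketch (assuming the jdk.security.providers.filter
property name from our proposal; the file name is illustrative), the
policy could be set through a properties file appended to java.security:

  # filter.properties (illustrative name), appended at launch with:
  #   java -Djava.security.properties=filter.properties MyApp
  # Only PKCS #11 key stores are exposed; other service types are unaffected.
  jdk.security.providers.filter=*.KeyStore.PKCS11; !*.KeyStore; *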
2) A policy that enforces the use of PKCS #12 key stores only:
*.KeyStore.PKCS12; !*.KeyStore; *
In this scenario, a system administrator is concerned about
applications using key stores with non-standard formats such as JKS,
JCEKS, BKS (Bouncy Castle) or BCFKS (Bouncy Castle) among others.
These key store algorithms may introduce interoperability issues and
require unwanted file conversions at some point. Thus, the system
administrator enforces a policy that only authorizes the PKCS #12
standard for key storage.
As in case #1, the security property for controlling the default key
store type is not enough to prevent applications from using other
formats; uninstalling security providers is not always an option; and
auditing application or library source code, configurations or logs
to check how key storage is done could be inconvenient or infeasible.
The Security Provider Filter provides flexibility to change the
approved key store type or authorize more than one. Third-party
security providers may refer to the PKCS #12 standard by different
algorithm names but that should not be a problem either. For example,
the filter may authorize algorithm name variations such as PKCS12,
BCPKCS12 (Bouncy Castle) and PKCS12-3DES-3DES (Bouncy Castle):
"*.KeyStore.PKCS12; *.KeyStore.BCPKCS12; *.KeyStore.PKCS12-3DES-3DES;
...", or more simply "*.KeyStore.*PKCS12*; ...".
3) A policy that does not allow algorithms considered insecure:
!*.*.MD5; !*.*.MD2; !*.*.SHA-1; *
Security concerns are the motivation behind this type of policy. A
system administrator may enforce it with a deny-list —as done in the
example— or even with a stricter allow-list. This type of policy can be
applied with algorithms considered secure today or algorithms that will
be required in the future. The latter serves the purpose of identifying
potential compatibility issues and giving applications advance notice
to adapt.
While the Security Provider Filter is platform-independent, Linux
crypto-policies is one of the motivations related to this case. Many
Linux distributions, such as RHEL [2], have system-wide
crypto-policies enabled by default. Different crypto-policies profiles
(LEGACY, DEFAULT, FIPS, FUTURE, etc.) define sets of algorithms
authorized for different software packages, including OpenJDK. Our
intention is that crypto-policies for OpenJDK define, according to
each profile, the set of algorithms allowed for all security APIs.
Without the Security Provider Filter, algorithms can be restricted for
some uses with a deny-list type of configuration. However, not all
uses are in scope and applications may use unauthorized algorithms
by calling, for example,
Signature.getInstance("<unauthorized-algorithm>") and using the
service directly. Other approaches such as auditing application and
libraries source code, configurations or logs to check which
algorithms are used may not be practical, as pointed out in case #1.
The Security Provider Filter allows a system administrator to keep
sets of authorized algorithms updated and apply its policy widely to
all JCA service types.
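To make the expected effect concrete, here is a sketch of what an
application would observe under such a filter (assuming, per our
proposal, that filtered services are simply not found by JCA lookups):

  import java.security.MessageDigest;
  import java.security.NoSuchAlgorithmException;

  public class Md5LookupSketch {
      public static void main(String[] args) {
          try {
              // Under a filter such as "!*.*.MD5; !*.*.MD2; !*.*.SHA-1; *",
              // no provider exposes an MD5 MessageDigest, so this fails.
              MessageDigest md = MessageDigest.getInstance("MD5");
              System.out.println("MD5 from " + md.getProvider().getName());
          } catch (NoSuchAlgorithmException e) {
              System.out.println("MD5 is blocked by the providers filter");
          }
      }
  }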
4) A policy in which some uses of MD5 are acceptable (e.g.
MessageDigest) but others are not:
!*.Signature.MD5*; !*.Mac.*MD5; !*.Cipher.*MD5*; *
or
*.MessageDigest.MD5; !*.*.*MD5*; *
Some algorithms may be secure for some uses but not for others. In
this case, a system administrator authorizes MD5 for UUIDs,
redundancy-check codes or other hashes, but prohibits its use for
signatures, message authentication, and for deriving encryption keys
(PBE).
This type of policy enforcement is possible because the Security
Provider Filter lets users specify the service type, in addition to
the algorithm. A system administrator can easily adjust the algorithms
and service types that are allowed or disallowed.
For the same reasons explained in case #3, implementing this policy
without the Security Provider Filter would not be possible or practical.
5) A policy in which only algorithms implemented by the FastProvider
security provider are authorized for encryption:
FastProvider.Cipher; !*.Cipher; *
In this case, a system administrator is concerned about performance
and wants applications to only do cipher operations in FastProvider.
While it is possible to insert FastProvider at the top of the security
providers list, or even use the preferred algorithms security property,
an application that uses an algorithm not available in FastProvider
will silently fall back to a slower implementation. As
described for other cases, removing slower security providers may not
be an option, and auditing applications or libraries source code,
configurations or logs may not be practical.
6) A policy that only allows a specific source of randomness,
irrespective of the algorithm:
SunPKCS11-HSM.SecureRandom; !*.SecureRandom; *
A system administrator has security concerns about sources of
randomness and decides to authorize only one of them, irrespective of
the algorithm. In this case, the prioritized list of security
providers is not enough to use SunPKCS11-HSM because applications may
try to use algorithms not implemented there and silently fall back to
other security providers.
Enforcing this type of policy without the Security Provider Filter may
require actions such as uninstalling security providers or auditing
source code, configurations or logs, which is not always possible or
practical.
7) In CRIU scenarios, it could be beneficial to enforce a policy that
does not allow the use of random values or key generation before a
snapshot is taken. A snapshot can be taken, for example, by running
the JDK with the following filter value:
!*.SecureRandom; !*.KeyPairGenerator; !*.KeyGenerator; *
In some cases, a system administrator might want to enforce an even
stricter policy using an allow-list approach:
*.MessageDigest.SHA-1; *.CertificateFactory; !*
When resuming a snapshot, no filter is set.
This example is based on a real case. To achieve the desired effect
without the Security Providers Filter, a system administrator has to
create a custom security provider that only implements authorized
service types and algorithms. This security provider is the only one
installed while taking the snapshot. When resuming snapshots, all
security providers are enabled. This solution is hard to implement and
not easily extensible to other service types and algorithms. With the
Security Providers Filter it is easy to decide what is available while
taking a snapshot, and what is available while resuming it.
This type of policy falls into the category of those that may benefit
the security of a deployment. The reuse of random seeds or keys in
different executions of the same snapshot may weaken or compromise the
security of a system.
8) A policy that allows the use of a 3rd party security provider for a
specific purpose but not for anything else:
3rdPartyProvider.AllowedService; !3rdPartyProvider; *
In this case, a system administrator has concerns about applications
depending on a specific security provider for more service types or
algorithms than what is authorized (AllowedService).
This type of policy is difficult to implement without the Security
Provider Filter because there is no granularity when installing a
security provider: it's an all-or-nothing decision. Thus, the only way
to enforce compliance is to check applications' or libraries' source
code, configurations or logs and understand what they depend on.
Examples Summary
Throughout the previous scenarios, we have discussed security,
interoperability and performance concerns that may be addressed by the
Security Providers Filter. What all these cases have in common is
policy enforcement at the provider, service type or algorithm level. We
think that the existing provider or algorithm preference configurations
lack the partial or total closure that the Filter offers. In addition,
the lack of granularity makes the installation of a security provider an
all-or-nothing decision. Thus, policy enforcement can only be applied by
auditing applications' or libraries' source code, configurations or
logs. This type of enforcement is not always possible or practical: a
new deployment or an update of an existing one requires a recheck. The
existing functionality to block the use of algorithms does not extend to
all security APIs and is thus not enough from a policy enforcement and
compliance perspective. While
we have showcased fabricated system administration scenarios in some
cases, others are of general interest, can be used more widely or
represent real cases. On a final note, we have intentionally left the
FIPS use-case out of this Appendix as it has been discussed in
previous comments.
--
[1] - https://bugs.openjdk.org/browse/JDK-8315487
[2] -
https://access.redhat.com/articles/3666211
On 9/19/23 16:42, Anthony Scarpino wrote:
Hi Martin,
Thanks for the proposal. Your documents mostly describe the solution.
Can you provide more of the motivations and use-cases for the change?
Do you see non-FIPS-140 applications using this feature?
The feature does provide a comprehensive filtering system for JCA.
The syntax, while powerful, seems like it would be somewhat
error-prone and hard to use. We are also concerned that using the
filter requires the sysadmin or developer to know the service and
algorithm details of every provider, and which are required and which
are not, all of which is not easily determined.
thanks
Tony