[ https://issues.apache.org/jira/browse/CXF-6837?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15241076#comment-15241076 ]

Sergey Beryozkin edited comment on CXF-6837 at 4/14/16 12:54 PM:
-----------------------------------------------------------------

Neal, I've applied your patch but with a number of modifications. Let me list 
them here:

- ProviderCache can be disabled via the Bus configuration property. This is 
important because the selection of MBR and MBW is a sensitive process; users 
should have a chance to disable the cache if it starts causing side-effects.
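
For illustration, a minimal sketch of flipping such a toggle programmatically. 
The property key below is only a placeholder for illustration; the committed 
code may use a different name:

{code:java}
import org.apache.cxf.Bus;
import org.apache.cxf.BusFactory;

public class DisableProviderCache {
    public static void main(String[] args) {
        Bus bus = BusFactory.getDefaultBus();
        // Placeholder key: check the committed code for the actual property name
        bus.setProperty("org.apache.cxf.jaxrs.provider.cache.enabled", "false");
    }
}
{code}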

- ProviderCache does not depend on SoftReference but on periodic clean-ups, 
for the reasons outlined earlier. It is true that for many simple applications 
it will stay small, but for highly dynamic applications with many media types 
(with subtype variations, for example xml+v1, etc) and dynamic classes we can 
have a lot of keys, and if that is the case then depending on GC (SoftRef) to 
prevent OOM will be suboptimal. Hence the approach Alessio applied to managing 
the MediaType cache is used here.
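
To make the clean-up point concrete, here is a minimal sketch of the general 
idea (not the committed implementation): a size-bounded map that is simply 
cleared once it grows past a hard limit, so memory stays capped regardless of 
GC behaviour:

{code:java}
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: cap memory with an explicit size check instead of SoftReference
public class BoundedCache<K, V> {
    private static final int MAX_SIZE = 2048; // illustrative limit

    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();

    public V get(K key) {
        return map.get(key);
    }

    public void put(K key, V value) {
        if (map.size() >= MAX_SIZE) {
            map.clear(); // periodic clean-up: drop all entries and start over
        }
        map.put(key, value);
    }
}
{code}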

- The most important: by default ProviderCache will keep only a single top 
candidate. All candidates, as per your original patch, can be kept if the 
configuration property is set. The reasoning behind it is that if we keep all 
the candidates, then for every new key combination all the default and 
registered providers will be checked, which is very sub-optimal, given that 
the only reason to do this is to support TCK-like edge cases where a top 
candidate, as in your example, dynamically rejects its own candidacy. 
It is also not obvious to me that checking all the candidates will not 
actually cause side-effects for highly dynamic applications. In most cases 
people just use Jackson or the default JAXBProvider, etc, which simply return 
'true' or do checks that are not context-dependent; that is the 80-90% case, 
i.e. the top candidate will always be selected, and the current solution is 
simply optimized for it.
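
To illustrate the single-candidate idea, here is a sketch of the concept (not 
the actual ProviderFactory code): cache one reader per key, re-check its 
isReadable on every hit, and fall back to the full scan only when it declines:

{code:java}
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.util.List;
import java.util.concurrent.ConcurrentHashMap;

import javax.ws.rs.core.MediaType;
import javax.ws.rs.ext.MessageBodyReader;

// Sketch only: one cached candidate per (class, media type) key
public class SingleCandidateCache {
    private final ConcurrentHashMap<String, MessageBodyReader<?>> cache =
        new ConcurrentHashMap<>();

    @SuppressWarnings("unchecked")
    public <T> MessageBodyReader<T> select(Class<T> type, Type genericType,
                                           Annotation[] anns, MediaType mt,
                                           List<MessageBodyReader<?>> providers) {
        String key = type.getName() + ':' + mt;
        MessageBodyReader<?> cached = cache.get(key);
        if (cached != null && cached.isReadable(type, genericType, anns, mt)) {
            return (MessageBodyReader<T>) cached; // the 80-90% case: cache hit
        }
        for (MessageBodyReader<?> reader : providers) {
            if (reader.isReadable(type, genericType, anns, mt)) {
                cache.put(key, reader); // remember only the top candidate
                return (MessageBodyReader<T>) reader;
            }
        }
        return null;
    }
}
{code}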

It works for your example too, except that it will not be optimized if there 
is a custom provider which is context-dependent in its isReadable/isWriteable. 
As I said, the configuration property can be set to get all the candidates 
into the cache, but that is definitely not a mainstream case and as such 
should not be done by default. Note that MBR and MBW are also used by Clients, 
and Clients will almost never have multiple candidates for a given key, maybe 
in 1% of cases.
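
For clarity, a hypothetical example of what "context-dependent" means here: a 
provider whose isReadable answer varies per request, so a cached top candidate 
may have to be re-checked and rejected (the "Legacy" annotation check below is 
purely illustrative):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.lang.annotation.Annotation;
import java.lang.reflect.Type;
import java.nio.charset.StandardCharsets;

import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.MultivaluedMap;
import javax.ws.rs.ext.MessageBodyReader;
import javax.ws.rs.ext.Provider;

// Hypothetical provider: accepts an entity only when a certain annotation
// is present, so the answer differs from request to request
@Provider
public class AnnotationSensitiveReader implements MessageBodyReader<String> {

    @Override
    public boolean isReadable(Class<?> type, Type genericType,
                              Annotation[] anns, MediaType mt) {
        for (Annotation a : anns) {
            if ("Legacy".equals(a.annotationType().getSimpleName())) {
                return true; // context-dependent: depends on the annotations
            }
        }
        return false;
    }

    @Override
    public String readFrom(Class<String> type, Type genericType,
                           Annotation[] anns, MediaType mt,
                           MultivaluedMap<String, String> headers,
                           InputStream is) throws IOException {
        return new String(is.readAllBytes(), StandardCharsets.UTF_8);
    }
}
{code}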

Please have a look at the code and let me know what you think.
Thanks for your effort so far.

> Add cache for MessageBodyReader/Writer
> --------------------------------------
>
>                 Key: CXF-6837
>                 URL: https://issues.apache.org/jira/browse/CXF-6837
>             Project: CXF
>          Issue Type: Improvement
>          Components: JAX-RS
>    Affects Versions: 3.1.5, 3.0.8
>         Environment: windows
>            Reporter: Neal Hu
>             Fix For: 3.2.0
>
>         Attachments: ListAProvider.java, ListBProvider.java, 
> ProviderCache.java, ProviderFactory.patch, Resource.java, beans.xml, web.xml
>
>
> CXF selects the msgBodyReader/writer from the reader/writer list for every 
> request, which has a big impact on performance. Jersey also has a cache in 
> org.glassfish.jersey.message.internal.MessageBodyFactory._getMessageBodyReader(...).
> I have tried adding the cache for CXF in ProviderFactory and verified that 
> it improves performance by 7-8% for json requests in JMeter. Please let me 
> know if you'd like me to add the enhancement for CXF. Thanks. 
> http://cxf.547215.n5.nabble.com/MessageBodyReader-Writer-cache-td5767091.html


