Hi,

I wonder whether we should also upgrade Ivy to use the latest HTTP client
library?

Regards,

Antoine

On Apr 7, 2015, at 12:46 PM, Loren Kratzke (JIRA) <j...@apache.org> wrote:

> 
>    [ 
> https://issues.apache.org/jira/browse/IVY-1197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14483468#comment-14483468
>  ] 
> 
> Loren Kratzke edited comment on IVY-1197 at 4/7/15 4:45 PM:
> ------------------------------------------------------------
> 
> I would be happy to provide you with a project that reproduces the issue.
> I can and will do that.
> 
> Generally speaking, at a high level, the utility classes call convenience
> methods and write to streams that ultimately buffer the data being written.
> There is buffering, then more buffering, and even more buffering, until you
> have multiple copies of the entire content of the stream stored in oversized
> buffers (because they double in size when they fill up). Oddly, the twist is
> that the JVM hits a limit no matter how much RAM you allocate. Once the
> buffers total more than about 1GB (which is what happens with a 100-200MB
> upload), the JVM refuses to allocate more buffer space (even if you jack up
> the RAM to 20GB, no cigar). Honestly, there is no benefit in buffering any
> of this data to begin with; it is just a side effect of using high-level
> copy methods. There is no memory ballooning at all when the content is
> written directly to the network.
> 
> I will provide a test project and note the breakpoints where you can debug
> and watch the process walk all the way down the aisle to an OOME. I will
> have this for you ASAP.
> 

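For illustration, here is a minimal sketch of the two copy patterns Loren describes. This is not Ivy's actual code; the class and method names are hypothetical. It contrasts an in-memory buffered copy (where ByteArrayOutputStream roughly doubles its backing array as it grows, so peak memory can be a multiple of the payload size) with a streaming copy that uses only a fixed chunk:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class StreamCopyDemo {

    // Buffered pattern: the whole payload is accumulated in a growable
    // in-memory buffer before anything is written out. The backing array
    // roughly doubles each time it fills, and during each growth the old
    // and new arrays coexist, so peak heap use can be several times the
    // payload size. This is the kind of ballooning described above.
    static void copyBuffered(InputStream in, OutputStream out) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buffer.write(chunk, 0, n);
        }
        buffer.writeTo(out); // the data only leaves the buffer here
    }

    // Streaming pattern: each chunk is written straight to the output,
    // so the fixed 8 KB chunk is the only extra memory used, regardless
    // of payload size.
    static void copyStreaming(InputStream in, OutputStream out) throws IOException {
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            out.write(chunk, 0, n);
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = new byte[1 << 20]; // 1 MB sample payload
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        copyStreaming(new ByteArrayInputStream(payload), sink);
        System.out.println("copied " + sink.size() + " bytes");
    }
}
```

Both methods produce the same bytes on the wire; only the peak memory differs, which is why writing directly to the network socket avoids the OOME.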

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@ant.apache.org
For additional commands, e-mail: dev-h...@ant.apache.org
