[ https://issues.apache.org/jira/browse/KAFKA-10380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17181366#comment-17181366 ]

Ewen Cheslack-Postava commented on KAFKA-10380:
-----------------------------------------------

Probably the best solution would be to reorganize the binary dist to have 
subdirectories for clients, core, streams, connect, etc., but that gets tricky 
because then the bin scripts need to deal with more classpath handling or jars 
get duplicated, we'd need to figure out how config files get organized, etc. 
But that would substantially reduce the footprint for certain use cases, e.g. 
connect also pulls in jetty/jersey since it has a REST API, both of which are 
pretty large with their transitive dependencies.
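
As a rough sketch of the extra classpath handling such a split would push into 
the bin scripts, assuming a hypothetical libs/<component> layout (the directory 
names and the Connect invocation below are illustrative only, not an actual 
proposal):

  # Hypothetical layout: libs/<component>/*.jar instead of one flat libs/ dir.
  base_dir=$(dirname "$0")/..
  CLASSPATH=""
  # Connect would need its own jars plus the client jars it depends on.
  for component in clients connect; do
    for jar in "$base_dir"/libs/"$component"/*.jar; do
      CLASSPATH="${CLASSPATH:+$CLASSPATH:}$jar"
    done
  done
  exec java -cp "$CLASSPATH" org.apache.kafka.connect.cli.ConnectDistributed "$@"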

> Make dist flatten rocksdbjni
> ----------------------------
>
>                 Key: KAFKA-10380
>                 URL: https://issues.apache.org/jira/browse/KAFKA-10380
>             Project: Kafka
>          Issue Type: Task
>          Components: build
>    Affects Versions: 2.6.0
>            Reporter: Adrian Cole
>            Priority: Major
>
> I was looking for ways to reduce the size of our Kafka image, and the most 
> notable opportunity is handling rocksdbjni differently. It is currently a 
> 15MB jar.
> As mentioned in its description, rocksdbjni includes binaries for a lot of OS 
> choices:
> du -k librocksdbjni-*
> 7220  librocksdbjni-linux-aarch64.so
> 8756  librocksdbjni-linux-ppc64le.so
> 7220  librocksdbjni-linux32.so
> 7932  librocksdbjni-linux64.so
> 5440  librocksdbjni-osx.jnilib
> 4616  librocksdbjni-win64.dll
> It may not be obvious what the problem is here for normal dists, which aim to 
> work on many operating systems. But when creating docker images, we currently 
> would need to repackage this to scrub out the irrelevant OS binaries or accept 
> files larger than alpine itself.
> While this might be something to kick back to rocksdb, having some options 
> here would be great.
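
For illustration, one workaround at image build time is to delete the unused 
native libraries from the rocksdbjni jar with zip -d; a sketch only, assuming a 
linux64 target (the jar filename and the entry names, taken from the listing 
above, may need adjusting):

  # Keep only the linux64 native library inside the rocksdbjni jar,
  # removing the binaries for the other platforms in place.
  cd libs
  zip -d rocksdbjni-*.jar \
    librocksdbjni-linux-aarch64.so \
    librocksdbjni-linux-ppc64le.so \
    librocksdbjni-linux32.so \
    librocksdbjni-osx.jnilib \
    librocksdbjni-win64.dll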



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
