Hi,

I would like to help with optimizing Spark's memory usage. I have some experience
with off-heap and managed memory. For example, I modified Hazelcast to run with
'-Xmx128M' [1], and XAP from GigaSpaces uses my memory store.
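
In case it is useful to see the mechanism: the point of an off-heap store is
that the data lives outside the JVM heap, so it is not counted against '-Xmx'.
A minimal sketch using plain java.nio direct buffers (the class name and sizes
here are only illustrative, not MapDB's actual allocator):

    import java.nio.ByteBuffer;

    public class OffHeapSketch {
        public static void main(String[] args) {
            // 64 MB allocated outside the Java heap; bounded by
            // -XX:MaxDirectMemorySize, not by -Xmx.
            ByteBuffer buf = ByteBuffer.allocateDirect(64 * 1024 * 1024);

            // Store a long at a fixed offset and read it back.
            buf.putLong(0, 42L);
            System.out.println("off-heap value: " + buf.getLong(0));

            // Only the small ByteBuffer wrapper object lives on the heap,
            // so the process can run with a tiny -Xmx.
            long used = Runtime.getRuntime().totalMemory()
                      - Runtime.getRuntime().freeMemory();
            System.out.println("heap used: " + used / (1024 * 1024) + " MB");
        }
    }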

I have already studied the Spark code, read blogs, watched videos, etc., but I
still have questions about the current situation, the future direction, and the
best way to contribute. Perhaps a developer could spare 30 minutes for a chat
and assign me some issues to start with.


Thanks,
Jan Kotek
MapDB author

--------
[1] https://github.com/jankotek/mapdb-hz-offheap
