Because this was a maintenance release, we should not have introduced any
binary backward or forward incompatibilities. Therefore, applications
written and compiled against 1.1.0 should still work against a 1.1.1
cluster, and vice versa.

On Wed, Dec 3, 2014 at 1:30 PM, Andrew Or <and...@databricks.com> wrote:

> By the Spark server do you mean the standalone Master? It is best if they
> are upgraded together because there have been changes to the Master in
> 1.1.1. Although it might "just work", it's highly recommended to restart
> your cluster manager too.
>
> 2014-12-03 13:19 GMT-08:00 Romi Kuntsman <r...@totango.com>:
>
>> About version compatibility and the upgrade path: can the Java application
>> dependencies and the Spark server be upgraded separately (i.e. will the 1.1.0
>> library work with a 1.1.1 server, and vice versa), or do they need to be
>> upgraded together?
>>
>> Thanks!
>>
>> *Romi Kuntsman*, *Big Data Engineer*
>>  http://www.totango.com
>>
>> On Tue, Dec 2, 2014 at 11:36 PM, Andrew Or <and...@databricks.com> wrote:
>>
>>> I am happy to announce the availability of Spark 1.1.1! This is a
>>> maintenance release with many bug fixes, most of which are concentrated in
>>> the core. These include various fixes for sort-based shuffle, memory
>>> leaks, and spilling issues. Contributions to this release came from 55
>>> developers.
>>>
>>> Visit the release notes [1] to read about the new features, or
>>> download [2] the release today.
>>>
>>> [1] http://spark.apache.org/releases/spark-release-1-1-1.html
>>> [2] http://spark.apache.org/downloads.html
>>>
>>> Please e-mail me directly about any typos in the release notes or name
>>> listing.
>>>
>>> Thanks to everyone who contributed, and congratulations!
>>> -Andrew
>>>
>>
>>
>
