OK, I have created this issue: https://issues.apache.org/jira/browse/SPARK-15487
Please comment on it, and let me know if anyone wants to collaborate on
implementing it. It's my first contribution to Spark, so it will be
exciting.

- Gurvinder
On 05/23/2016 07:55 PM, Gurvinder Singh wrote:
> On 05/23/2016 07:18 PM, Radoslaw Gruchalski wrote:
>> Sounds surprisingly close to this:
>> https://github.com/apache/spark/pull/9608
>>
> I might have overlooked something, but the bridge mode work appears to
> be about making Spark work with Docker containers and communicate with
> them when running on more than one machine.
> 
> Here I am trying to make the information in the Spark UI accessible
> regardless of whether Spark is running in containers or not. The Spark
> UI's links to the workers and application drivers point to an
> internal/protected network, so to reach this information from their own
> machine a user has to connect to a VPN. The proposal is therefore to
> have the Spark master UI reverse proxy this information back to the
> user. That way only the Spark master UI needs to be exposed to the
> internet, and nothing else has to change in how Spark runs internally,
> whether in standalone mode, on Mesos, or in containers on Kubernetes.
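>
> To make this concrete, the user-facing side could be as small as a
> couple of configuration properties. This is only a sketch in Scala; the
> property names and the public URL below are illustrative assumptions on
> my part, not a decided API:
>
>   import org.apache.spark.SparkConf
>
>   // Hypothetical settings for the proposed reverse proxy; the final
>   // configuration keys would be decided on the JIRA/PR.
>   val conf = new SparkConf()
>     .setAppName("reverse-proxy-demo")
>     // ask the master web UI to proxy the worker and application UIs
>     .set("spark.ui.reverseProxy", "true")
>     // public URL under which the master UI is exposed to the internet
>     .set("spark.ui.reverseProxyUrl", "https://spark.example.com")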
> 
> - Gurvinder
>> I can resurrect the work on the bridge mode for Spark 2. The work on
>> the old one was suspended because Spark was going through so many
>> changes at the time that a lot of the work done was wiped out by the
>> changes towards 2.0.
>>
>> I know that Lightbend was also interested in having bridge mode.
>>
>> –
>> Best regards,
>> Radek Gruchalski
>> ra...@gruchalski.com
>> de.linkedin.com/in/radgruchalski
>>
>>
>>
>> On May 23, 2016 at 7:14:51 PM, Timothy Chen (tnac...@gmail.com) wrote:
>>
>>> This will simplify things for Mesos users as well; DCOS has to work
>>> around this with our own proxying.
>>>
>>> Tim
>>>
>>> On Sun, May 22, 2016 at 11:53 PM, Gurvinder Singh
>>> <gurvinder.si...@uninett.no> wrote:
>>>> Hi Reynold,
>>>>
>>>> So if that's OK with you, can I go ahead and create a JIRA for this?
>>>> It seems this feature is currently missing and can benefit not just
>>>> Kubernetes users but Spark standalone mode users in general too.
>>>>
>>>> - Gurvinder
>>>> On 05/22/2016 12:49 PM, Gurvinder Singh wrote:
>>>>> On 05/22/2016 10:23 AM, Sun Rui wrote:
>>>>>> If it is possible to rewrite URLs in outbound responses in Knox or
>>>>>> another reverse proxy, would that solve your issue?
>>>>> Any process which can keep track of the workers' and application
>>>>> drivers' IP addresses and route traffic to them will work. Since the
>>>>> Spark master does exactly this, because all workers and applications
>>>>> have to register with it, I propose the master as the place to add
>>>>> such functionality.
>>>>>
>>>>> I am not familiar with Knox's capabilities, but Nginx or any other
>>>>> ordinary reverse proxy will not be able to do this on its own, due to
>>>>> the dynamic nature of application drivers and, to some extent, the
>>>>> workers too.
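>>>>>
>>>>> To make the dynamic part concrete, here is a minimal sketch in Scala
>>>>> of the kind of routing table the master could maintain. The names are
>>>>> made up for illustration; the real hook points would be the master's
>>>>> existing registration and removal handling.
>>>>>
>>>>>   import scala.collection.concurrent.TrieMap
>>>>>
>>>>>   // Hypothetical registry of proxy routes, updated whenever a worker
>>>>>   // or application driver registers with or is removed from the
>>>>>   // master. An ordinary reverse proxy cannot build this table on its
>>>>>   // own, because drivers (and workers) come and go dynamically.
>>>>>   object ProxyRoutes {
>>>>>     private val routes = TrieMap.empty[String, String]
>>>>>
>>>>>     def register(id: String, webUiAddress: String): Unit =
>>>>>       routes.put(id, webUiAddress)
>>>>>
>>>>>     def remove(id: String): Unit = routes.remove(id)
>>>>>
>>>>>     def lookup(id: String): Option[String] = routes.get(id)
>>>>>   }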
>>>>>
>>>>> - Gurvinder
>>>>>>> On May 22, 2016, at 14:55, Gurvinder Singh <gurvinder.si...@uninett.no> 
>>>>>>> wrote:
>>>>>>>
>>>>>>> On 05/22/2016 08:32 AM, Reynold Xin wrote:
>>>>>>>> Kubernetes itself already has facilities for HTTP proxying, doesn't it?
>>>>>>>>
>>>>>>> Yeah, Kubernetes has an ingress controller which can act as the L7
>>>>>>> load balancer and route traffic to the Spark UI in this case. But I
>>>>>>> am referring to the links in the UI that point to the worker and
>>>>>>> application UIs. I replied in detail to Sun Rui's mail, where I gave
>>>>>>> an example of a possible scenario.
>>>>>>>
>>>>>>> - Gurvinder
>>>>>>>>
>>>>>>>> On Sat, May 21, 2016 at 9:30 AM, Gurvinder Singh
>>>>>>>> <gurvinder.si...@uninett.no> wrote:
>>>>>>>>
>>>>>>>>    Hi,
>>>>>>>>
>>>>>>>>    I am currently working on deploying Spark on Kubernetes (K8s),
>>>>>>>>    and it is working fine. I am running Spark in standalone mode
>>>>>>>>    and checkpointing the state to a shared system, so if the master
>>>>>>>>    fails, K8s restarts it, it recovers the earlier state from the
>>>>>>>>    checkpoint, and things just work. The issue I have is with the
>>>>>>>>    Spark master web UI's links to the worker and application UIs.
>>>>>>>>    In brief, the Kubernetes service model allows me to expose the
>>>>>>>>    master service to the internet, but accessing the
>>>>>>>>    application/worker UIs is not possible, because then I would
>>>>>>>>    have to expose each of them individually as well, and given that
>>>>>>>>    I can have multiple applications, that becomes hard to manage.
>>>>>>>>
>>>>>>>>    One solution could be for the master to act as a reverse proxy
>>>>>>>>    for accessing information/state/logs from the
>>>>>>>>    applications/workers. Since it learns their endpoints when the
>>>>>>>>    applications/workers register with it, the master can proxy a
>>>>>>>>    user's request for this information to the corresponding
>>>>>>>>    endpoint.
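>>>>>>>>
>>>>>>>>    As a rough illustration, here is a sketch in Scala of how that
>>>>>>>>    proxying could be wired up with Jetty's ProxyServlet (the master
>>>>>>>>    UI already runs on Jetty; the jetty-proxy module would be an
>>>>>>>>    added dependency, and the /proxy/<id>/ path layout plus the
>>>>>>>>    lookup function are my assumptions, not an agreed design):
>>>>>>>>
>>>>>>>>      import javax.servlet.http.HttpServletRequest
>>>>>>>>      import org.eclipse.jetty.proxy.ProxyServlet
>>>>>>>>
>>>>>>>>      // Sketch only (Jetty 9.3+ API): `lookupUiAddress` stands for
>>>>>>>>      // whatever registry the master keeps of worker/driver web UI
>>>>>>>>      // addresses. Assumes the servlet is mapped under /proxy/* in
>>>>>>>>      // the master web UI.
>>>>>>>>      class MasterProxyServlet(lookupUiAddress: String => Option[String])
>>>>>>>>        extends ProxyServlet {
>>>>>>>>
>>>>>>>>        // Requests arrive as /proxy/<workerOrAppId>/<rest>; rewrite
>>>>>>>>        // them to the internal UI address learned at registration.
>>>>>>>>        override def rewriteTarget(request: HttpServletRequest): String = {
>>>>>>>>          val path = Option(request.getPathInfo).getOrElse("").stripPrefix("/")
>>>>>>>>          val (id, rest) = path.span(_ != '/')   // ("<id>", "/<rest>")
>>>>>>>>          lookupUiAddress(id)
>>>>>>>>            .map(base => base + rest)
>>>>>>>>            .orNull   // null => ProxyServlet replies 403 Forbidden
>>>>>>>>        }
>>>>>>>>      }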
>>>>>>>>
>>>>>>>>    So I am wondering whether someone has already done work in this
>>>>>>>>    direction; if so, it would be great to know about it. If not,
>>>>>>>>    would the community be interested in such a feature? If yes,
>>>>>>>>    how and where should I get started? Some guidance would be
>>>>>>>>    helpful for me to begin working on this.
>>>>>>>>
>>>>>>>>    Kind Regards,
>>>>>>>>    Gurvinder
>>>>>>>>


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@spark.apache.org
For additional commands, e-mail: dev-h...@spark.apache.org
