If you're using the SwarmSpawner, you can use 
SwarmSpawner.extra_task_spec - this is basically your last chance 
to override the arguments that get passed to docker-py's TaskTemplate.

Something like:

c.SwarmSpawner.extra_task_spec = {'networks': ['backend', 'frontend']}  # UNTESTED!

DockerSpawner, however, would need some fiddling with extra_create_kwargs and 
docker-py's create_networking_config - which would probably need to be done 
in the pre_spawn_hook.
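
To illustrate, here is an (equally UNTESTED) sketch of that DockerSpawner route. The network names come from the question below; the socket path and the use of a separate low-level docker-py APIClient are my assumptions about your setup:

```python
# UNTESTED sketch: build a NetworkingConfig with docker-py's low-level
# APIClient and hand it to DockerSpawner via extra_create_kwargs.
import docker

def pre_spawn_hook(spawner):
    # Assumes the default local Docker socket; adjust for your host.
    client = docker.APIClient(base_url='unix://var/run/docker.sock')
    spawner.extra_create_kwargs = {
        'networking_config': client.create_networking_config({
            'frontend': client.create_endpoint_config(),
            'backend': client.create_endpoint_config(),
        }),
    }

c.DockerSpawner.pre_spawn_hook = pre_spawn_hook
```

Be aware that older Docker Engine versions only honour a single network at container-creation time, so an explicit attach after creation may still be needed.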

Hope this gives you some ideas. If you're using an IDE like PyCharm, you 
can easily navigate to the source code of DockerSpawner/SwarmSpawner and see 
what's going on, and then look up in docker-py's documentation what the 
arguments to those methods are.

On Thursday, 8 August 2019 20:53:38 UTC+1, Jason Anderson wrote:
>
> Hi Mariusz,
>
> I don't think the DockerSpawner supports creating a container and 
> attaching it to more than one network. This is supported in Docker, but 
> there is no code in the spawner to perform the additional "attach". What 
> you would need is an additional "attach_container" call after the 
> create_container that attaches the spawned container to the [backend] 
> network.
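>
> For reference, the manual equivalent of that extra "attach" via docker-py's 
> high-level API would look something like this - UNTESTED, and the container 
> name is a placeholder (DockerSpawner names containers jupyter-<username> by 
> default):
>
> ```python
> # UNTESTED sketch: attach an already-created spawned container to the
> # [backend] network after the fact. 'jupyter-mariusz' is illustrative.
> import docker
>
> client = docker.from_env()
> backend = client.networks.get('backend')
> backend.connect('jupyter-mariusz')
> ```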
>
> Absent that, you have some other, weirder, options. One option is to have 
> a very trim NAT container, which is itself attached to both the [frontend] 
> and [backend] network--let's call it frontend-db. This container can just 
> have some iptables rules that listen on the frontend interface and forward 
> all traffic to the 'database' host on the backend interface. Your 
> JupyterLab containers could then hit the database over the frontend-db host 
> (rather than the 'database' host.)
>
> Another option is performing the network attach by extending the 
> DockerSpawner class. You could define your own spawner class that extends 
> DockerSpawner, and overrides the create_object method. It could simply 
> call the parent implementation and then perform the attachment of the 
> additional network. You can define the class inline in the 
> jupyterhub_config.
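>
> An UNTESTED sketch of what that subclass could look like - the self.docker 
> helper, the container_name attribute, and the create_object return value may 
> differ between dockerspawner versions, so check the source:
>
> ```python
> # UNTESTED sketch: let the parent create the container on [frontend],
> # then attach it to [backend] before returning.
> from dockerspawner import DockerSpawner
>
> class MultiNetworkSpawner(DockerSpawner):
>     async def create_object(self):
>         obj = await super().create_object()
>         # self.docker() runs a docker-py APIClient method off the event
>         # loop; 'backend' must be an existing network.
>         await self.docker(
>             'connect_container_to_network', self.container_name, 'backend'
>         )
>         return obj
>
> c.JupyterHub.spawner_class = MultiNetworkSpawner
> ```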
>
> This question made me realize that all the spawned containers are indeed 
> on the same network, and can communicate with each other. I don't think 
> there is an option to isolate them further, but that could be an 
> interesting improvement to security.
>
> Hope that gives you some ideas,
> /Jason
>
> On 8/7/19 3:56 PM, Mariusz Wasik wrote:
>
>
> Hi,
>
> I have a challenge :)
>
> 1. What I have got:
> a) docker
> b) two networks: [frontend] and [backend]
> c) Jupyterhub working in [frontend] network
> d) Jupyterlab spawned (by DockerSpawner) into [frontend] network
> Info: The above configuration works
>
> 2. Database runs in [backend] network.
>
> Problem:
> The spawned JupyterLab, and a notebook program created inside it, doesn't 
> see the database (because JupyterLab runs in a different network).
>
> *** The challenge (problem) ***
>
> Is it possible for the spawned JupyterLab to be connected to both the 
> [frontend] and [backend] networks?
> Or is there another way for JupyterLab to see a database placed in 
> another docker network?
>
>
> Best regards
>
>
> -- 
> You received this message because you are subscribed to the Google Groups 
> "Project Jupyter" group.
> To unsubscribe from this group and stop receiving emails from it, send an 
> email to [email protected] <javascript:>.
> To view this discussion on the web visit 
> https://groups.google.com/d/msgid/jupyter/36dfbb56-a6a0-45e1-bb02-de203b4e00c3%40googlegroups.com.
>
>
>
