Thanks, it works.
Do I have to worry about s.close()?

On Wednesday, February 6, 2013 6:12:06 PM UTC-8, Massimo Di Pierro wrote:
>
> yes:
>
> import socket
>
> def connect(address):
>     s = socket.socket()
>     s.settimeout(10)   # set the timeout on the socket instance, not the module
>     s.connect(address)
>     return s           # return the socket itself; connect() returns None
>
> mysocket = cache.ram('socket', lambda address=(ip, port): connect(address), 3600)
> mysocket.send('hello world')
>
> But mind that s.connect may block.
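>
> For example, here is a minimal variation of the connect() helper above that
> bounds the blocking connect with a timeout and closes the socket on failure
> (just a sketch; adjust the timeout and error handling to your case):
>
> import socket
>
> def connect(address, timeout=10):
>     s = socket.socket()
>     s.settimeout(timeout)   # bounds connect() as well as later send()/recv()
>     try:
>         s.connect(address)
>     except (socket.timeout, socket.error):
>         s.close()           # don't leave a half-open socket behind
>         raise               # let the caller decide how to recover
>     return s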
>
> On Wednesday, 6 February 2013 19:32:49 UTC-6, Bernard wrote:
>>
>> Is it possible to use cache.ram for a TCP socket?
>> In my setup, establishing a TCP connection to a remote machine is
>> time-consuming, and I need a workaround to get a snappier response in the
>> web UI.
>>
>> Any help appreciated.
>>
>> Thanks,
>> Bernard
>>
>> On Monday, February 4, 2013 11:46:22 AM UTC-8, Bernard wrote:
>>>
>>> Hi web2py users,
>>>    I've been using web2py for a few months now; thank you to the
>>> developers for the great work.
>>>
>>>    I'm working on an interactive web-based monitoring and control
>>> application that communicates with ~30 mobile field units at a time to get
>>> periodic 'semi-realtime' status reports (2-5 second poll period) and allows
>>> the user to change settings of the field units on demand.  The
>>> communications channel uses TCP sockets: the web2py workstation end is the
>>> TCP client and each field unit runs a TCP server on low-performance
>>> embedded hardware.  The front end currently does periodic Ajax polling
>>> every 2 seconds and updates the GUI.  I would also like to support
>>> multiple web users connected to the application on the front end.
>>>
>>>    I've searched the mailing lists of web2py and other frameworks but
>>> could not find a use case similar to mine.  There are many ways of
>>> implementing this, and it's not easy to figure out which is best and what
>>> pitfalls may lie ahead.
>>> Here are some of the approaches that I have considered:
>>> 1- Use a background asynchronous "Data Acquisition" task that is always
>>> running and fills a "RealTime" table in the database (by polling all field
>>> units every 2 seconds). For each web request, the controller would then
>>> pick up the latest values from the database and serve them up to web
>>> clients without having to worry about pulling the data. The background
>>> task keeps the sockets open to improve performance.
>>> 2- The controller communicates with the ~30 field units directly,
>>> bypassing any database overhead. The controller needs a persistent
>>> reference to the 30 TCP sockets to make the comms faster. Is there a way
>>> to parallelize the TCP request/response in the request thread to
>>> communicate with ~30 units quickly (see the rough thread-pool sketch after
>>> this list)? To handle multiple web users, I can cache the controller
>>> function so that it doesn't run on every web client request.
>>> 3- Have the web2py controller communicate with a separate data acquisition
>>> process via message queues. The web2py parts would never deal with the
>>> low-level comms, and the external data acquisition component would abstract
>>> all that away. However, this comes at the expense of having to create an
>>> external component, define an interface to it, and add a messaging
>>> framework between web2py and the data acquisition process.
>>> 4- The controller kicks off a worker thread that collects the field unit
>>> status. The controller function is cached to avoid spawning a worker
>>> thread for every web request.
>>> 5- Other ideas that might be better suited to this application?
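>>>
>>> (Regarding the parallel polling question in option 2, here is a rough,
>>> untested sketch of what I have in mind; poll_all(), poll_unit() and the
>>> 'STATUS?' message are placeholders for my own code and protocol. It uses a
>>> thread pool so the ~30 request/response exchanges overlap instead of
>>> running one after another over the already-connected sockets.)
>>>
>>> import socket
>>> from multiprocessing.dummy import Pool   # thread pool, not processes
>>>
>>> def poll_unit(sock):
>>>     # one request/response exchange on an already-connected socket;
>>>     # b'STATUS?' stands in for the real protocol
>>>     try:
>>>         sock.sendall(b'STATUS?')
>>>         return sock.recv(1024)
>>>     except (socket.timeout, socket.error):
>>>         return None
>>>
>>> def poll_all(sockets):
>>>     # sockets: the ~30 persistent, connected TCP sockets
>>>     pool = Pool(len(sockets))             # one thread per unit
>>>     try:
>>>         return pool.map(poll_unit, sockets)   # exchanges overlap in time
>>>     finally:
>>>         pool.close()
>>>         pool.join()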
>>>
>>> If anybody has gone through something similar, could you please share
>>> your experience?
>>> If you see any issues or potential weaknesses in any of these 
>>> approaches, your feedback would be greatly appreciated.
>>>
>>> Regards,
>>> Bernard
>>>
>>>
