I'm a firm believer in devising an upgradable proof-of-concept system. 
What drew us to this approach is that we could prove it out relatively 
quickly, and then scale it out as needed in a reasonably performant way 
-- making tradeoffs among dev time, dev-ops time, and performance.  

We haven't yet hit a concurrency where the routes need to be split out -- 
and when we do, we'll be able to build and deploy a dedicated service in a 
matter of minutes.  I would love to be in the situation where a dedicated 
service in a more performant technology is needed -- but this looks 
scalable to a decent level.  I also forgot to mention: if you want to shave 
a bit of time off and stay in Python, you could also look at Falcon for 
something like this -- it handles high concurrencies of tiny API responses 
really well.

I looked into Erlang a while back -- a close friend is a major 
contributor to both the Erlang and Python core libraries and is on both 
their conference circuits, so he's been a sounding board and evangelist 
for me.  If you were doing chat, Erlang would be perfect, but for what 
you're talking about, it sounds like overkill.  Our web spiders, a lot of 
internal services, and some of our SOA components would be best off in 
Erlang -- but all of those concerns involve higher-concurrency operations 
with lots of blocking events, and that blocking is our bottleneck.  You 
basically just have a very simple read/reply need against a single source 
-- one that is going to be serving mostly from an in-memory database.  

My 2¢ is this: when it comes to the different polling options, it's 
entirely a UX issue.
Short polling is a pretty crappy and un-savvy solution from a tech 
perspective, but it's usually "good enough" and has more pros than cons in 
many situations.  When you're dealing with a file upload, the user hits 
"upload" and is accustomed to waiting a few seconds.  If you toss a 
little animated gif in there, you're good for 5-10 seconds before they 
worry.  If you put in a percentage counter with real feedback, you can 
extend that window even further.  With that kind of experience, you're in 
a good spot to use short-polling.  You don't *need* to give a faster 
response; it's just nicer if you do.  So you can use this really clunky 
short-polling technique that is easy to implement, and no one really 
notices or cares.  
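To make the short-polling shape concrete, here's a rough sketch of both halves in plain Python -- the job store, endpoint function, and field names are all made up for illustration; in a browser the client loop would be a setTimeout'd XHR rather than a sleep:

```python
import time

# Hypothetical in-memory job store: the upload handler bumps "progress"
# as chunks land, and the poll endpoint just reads the value back.
JOBS = {"upload-1": {"progress": 0, "done": False}}


def poll_status(job_id):
    """Server side: what a /status endpoint would return for one poll."""
    job = JOBS[job_id]
    return {"progress": job["progress"], "done": job["done"]}


def short_poll(job_id, interval=1.0, max_polls=30):
    """Client side: re-request status every `interval` seconds until done."""
    for _ in range(max_polls):
        status = poll_status(job_id)
        if status["done"]:
            return status
        # Each iteration is a fresh, cheap request -- that's the whole trick.
        time.sleep(interval)
    raise TimeoutError("gave up after %d polls" % max_polls)
```

The server stays completely stateless between polls, which is exactly why this is so easy to bolt onto an existing app compared to long-polling or websockets.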



On Tuesday, September 29, 2015 at 6:19:02 PM UTC-4, Iain Duncan wrote:
>
> Thanks Jonathan, your solution is what I was leaning to for 
> proof-of-concept, good to know it's somewhat performant too. I don't know 
> enough about fast response time situations to really know the pros and cons 
> of short-polling, long-polling, and websockets. 
>
> Did you also look into Erlang at all? I think all our complex domain logic 
> will stay on Python, with some apps/services done in pyramid and some in 
> Django, but we are looking into adding more messaging to it, and I wonder 
> if for sending stuff back we might want to look at Erlang + Redis or Erlang 
> + RabbitMQ. 
>
>

 

-- 
You received this message because you are subscribed to the Google Groups 
"pylons-discuss" group.
