How are you actually sending messages to the SMSC?
If you are directly connected, i.e. using SMPP or UCP, then I would
imagine that there is a bottleneck at the SMSC. Large SMSC systems
in the US typically deliver up to 1000 sm/s, with larger systems
delivering 2000+ sm/s. From the throughput you re
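A rough back-of-envelope on those figures, assuming the million-subscriber base mentioned later in the thread (illustrative only):

    # illustrative only: 1,000,000 is the subscriber count quoted elsewhere in the thread
    subscribers = 1000000
    for rate in (1000, 2000):                  # sm/s figures quoted above
        minutes = subscribers / float(rate) / 60.0
        print "%d sm/s -> about %.1f minutes per full broadcast" % (rate, minutes)

So even a well-provisioned direct SMSC connection needs on the order of 8-17 minutes to reach every subscriber once.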
On Wed, 28 Sep 2005 21:58:15 -0400, rumours say that Jeff Schwab
<[EMAIL PROTECTED]> might have written:
>For many (most?) applications in need of
>serious scalability, multi-processor servers are preferable. IBM has
>eServers available with up to 64 processors each, and Sun sells E25Ks
>with
> Quite true and this lack of clarity was a mistake on my part. Requests
> from users do not really become a significant part of this equation
> because, as described above, once a user subscribes, the onus is upon us
> to generate messages throughout a given period determined by the number
> of u
Aahz wrote:
> In article <[EMAIL PROTECTED]>,
> Jeff Schwab <[EMAIL PROTECTED]> wrote:
>
>>Sure, multiple machines are probably the right approach for the OP; I
>>didn't mean to disagree with that. I just don't think they are "the
>>only practical way for a multi-process application to scale b
In article <[EMAIL PROTECTED]>,
Jeff Schwab <[EMAIL PROTECTED]> wrote:
>
>Sure, multiple machines are probably the right approach for the OP; I
>didn't mean to disagree with that. I just don't think they are "the
>only practical way for a multi-process application to scale beyond a few
>proces
Thanks for the whitepapers and incredibly useful advice. I'm beginning
to get a picture of what I should be thinking about and implementing to
achieve this kind of scalability. Before I go down any particular
route, here's a synopsis of the application.
1)User requests are received only during su
"yoda" <[EMAIL PROTECTED]> writes:
> Currently, the content is generated and a number of SMS per user are
> generated. I'll have to measure this more accurately but a cursory
> glance indicated that we're generating approximately 1000 sms per
> second. (I'm sure this can't be right.. the parser\gene
>1. How are you transmitting your SMSs?
Currently, a number of different gateways are being used: two provide a
SOAP web service interface, and one provides a REST-based web service.
A transaction using the SOAP web services takes 3-5 seconds to complete
(from the point of calling the method to rece
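One way to hide that 3-5 second per-call latency is a pool of worker threads pulling jobs from a Queue; a minimal sketch, where send_via_gateway is a hypothetical stand-in for the real SOAP/REST call:

    import threading
    import Queue

    jobs = Queue.Queue()

    def send_via_gateway(user, text):
        # hypothetical stand-in for the real SOAP/REST call (takes 3-5 s)
        pass

    def worker():
        while True:
            user, text = jobs.get()        # blocks until a message is queued
            send_via_gateway(user, text)

    # 50 concurrent senders; each spends most of its time waiting on the
    # gateway, so the GIL is not the limiting factor here
    for i in range(50):
        t = threading.Thread(target=worker)
        t.setDaemon(True)
        t.start()

With 50 workers and a 4-second round trip, the upper bound is still only about 12-13 messages per second per gateway, which is why the number of concurrent connections matters so much.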
yoda wrote:
> I'm considering moving to stackless python so that I can make use of
> continuations so that I can open a massive number of connections to the
> gateway and pump the messages out to each user simultaneously. (I'm
> thinking of 1 connection per user).
This won't help if your gateway wo
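For reference, the tasklet-per-user idea described above would look roughly like this in Stackless (a sketch only; send_one_message is a hypothetical placeholder, and as noted it does nothing about the gateway bottleneck):

    import stackless

    def send_one_message(user):
        # hypothetical placeholder for the real per-user gateway call
        pass

    def pump(user):
        send_one_message(user)
        stackless.schedule()           # cooperatively yield to other tasklets

    for user in range(100000):         # tasklets are cheap enough to create per user
        stackless.tasklet(pump)(user)
    stackless.run()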
[EMAIL PROTECTED] wrote:
> Jeff> How many are more than "a few?"
>
> I don't know. What can you do today in commercial stuff, 16 processors?
> How many cores per die, two? Four? We're still talking < 100 processors
> with access to the same chunk of memory. For the OP's problem that's still
I would need to get a better picture of your app.
I use a package called Twisted to handle large-scale computing
on multi-core and multi-computer problems:
http://twistedmatrix.com/
Hope this is useful,
Mike
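Not Mike's actual code, just a minimal sketch of the asynchronous style Twisted encourages; send_sms is a hypothetical placeholder for a real non-blocking gateway call that would return a Deferred:

    from twisted.internet import reactor, defer

    def send_sms(user, text):
        # hypothetical placeholder: a real version would fire the Deferred
        # when the gateway acknowledges delivery
        d = defer.Deferred()
        reactor.callLater(0, d.callback, user)
        return d

    def all_done(results):
        print "queued", len(results), "messages"
        reactor.stop()

    deferreds = [send_sms(u, "hello") for u in range(1000)]
    defer.gatherResults(deferreds).addCallback(all_done)
    reactor.run()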
yoda wrote:
> Hi guys,
> My situation is as follows:
>
> 1)I've developed a service th
On Wed, 28 Sep 2005 09:36:54 -0700, ncf wrote:
> If you have that many users, I don't know if Python really is suited
> well for such a large scale application. Perhaps it'd be better suited
> to do CPU-intensive tasks in a compiled language so you can max out
> performance and then possibly us
[EMAIL PROTECTED] wrote:
>Damjan> Is there some python module that provides a multi process Queue?
>
>Skip> Not as cleanly encapsulated as Queue, but writing a class that
>Skip> does that shouldn't be all that difficult using a socket and the
>Skip> pickle module.
>
>Jeremy> Wh
Jeff> How many are more than "a few?"
I don't know. What can you do today in commercial stuff, 16 processors?
How many cores per die, two? Four? We're still talking < 100 processors
with access to the same chunk of memory. For the OP's problem that's still
10,000 users per processor. Mayb
Damjan> Is there some python module that provides a multi process Queue?
Skip> Not as cleanly encapsulated as Queue, but writing a class that
Skip> does that shouldn't be all that difficult using a socket and the
Skip> pickle module.
Here's a trivial implementation of a pair of b
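Skip's code is cut off above; a minimal sketch of the idea (items pickled onto a TCP socket, with a length prefix so messages don't run together) might look like:

    import pickle
    import socket
    import struct

    class QueueWriter:
        # producer end: pickles items and sends them over TCP
        def __init__(self, host, port):
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            self.sock.connect((host, port))

        def put(self, item):
            data = pickle.dumps(item)
            # length-prefix each pickle so the reader knows where it ends
            self.sock.sendall(struct.pack("!I", len(data)) + data)

    class QueueReader:
        # consumer end: accepts one writer and unpickles items as they arrive
        def __init__(self, port):
            srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            srv.bind(("", port))
            srv.listen(1)
            self.conn, _ = srv.accept()

        def get(self):
            (length,) = struct.unpack("!I", self._recv(4))
            return pickle.loads(self._recv(length))

        def _recv(self, n):
            buf = ""
            while len(buf) < n:
                chunk = self.conn.recv(n - len(buf))
                if not chunk:
                    raise EOFError("writer closed the connection")
                buf += chunk
            return buf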
[EMAIL PROTECTED] wrote:
> Damjan> Is there some python module that provides a multi process Queue?
>
> Skip> Not as cleanly encapsulated as Queue, but writing a class that
> Skip> does that shouldn't be all that difficult using a socket and the
> Skip> pickle module.
>
> Jere
Damjan> Is there some python module that provides a multi process Queue?
Skip> Not as cleanly encapsulated as Queue, but writing a class that
Skip> does that shouldn't be all that difficult using a socket and the
Skip> pickle module.
Jeremy> What about bsddb? The example cod
[EMAIL PROTECTED] wrote:
>Damjan> Is there some python module that provides a multi process Queue?
>
>Not as cleanly encapsulated as Queue, but writing a class that does that
>shouldn't be all that difficult using a socket and the pickle module.
>
>Skip
What about bsddb? The example
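Jeremy's message is truncated above; one way a bsddb-backed queue could be sketched (a persistent FIFO built on a btree with zero-padded counter keys; this is not the example code he was referring to):

    import bsddb

    class DiskQueue:
        # FIFO queue persisted in a Berkeley DB btree; items must be strings
        # (e.g. pickled messages)
        def __init__(self, path):
            self.db = bsddb.btopen(path, "c")
            if len(self.db):
                last_key, _ = self.db.last()
                self.counter = int(last_key)
            else:
                self.counter = 0

        def put(self, item):
            self.counter += 1
            self.db["%020d" % self.counter] = item   # keys sort in insertion order
            self.db.sync()                           # survive a crash/restart

        def get(self):
            key, value = self.db.first()             # smallest key = oldest item
            del self.db[key]
            return value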
yoda wrote:
> Hi guys,
> My situation is as follows:
>
> 1)I've developed a service that generates content for a mobile service.
> 2)The content is sent through an SMS gateway (currently we only send
> text messages).
> 3)I've got a million users (and climbing).
> 4)The users need to get the data
Damjan> Is there some python module that provides a multi process Queue?
Not as cleanly encapsulated as Queue, but writing a class that does that
shouldn't be all that difficult using a socket and the pickle module.
Skip
> If you want to use a multithreaded design, then simply use a python
> Queue.Queue for each delivery channel. If you want to use a
> multi-process design, devise a simple protocol for communicating those
> messages from your generating database/process to your delivery channel
> over TCP sockets.
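A minimal sketch of the threaded variant, with one Queue.Queue and one delivery thread per channel (the channel names and the transport call are assumptions for illustration):

    import threading
    import Queue

    channels = {}                      # delivery channel name -> its own Queue

    def deliver(name, queue):
        while True:
            user, text = queue.get()   # blocks until the generator queues a message
            # hypothetical: call whatever transport this channel uses
            print "[%s] sending to %s: %s" % (name, user, text)

    for name in ("soap-gw-1", "soap-gw-2", "rest-gw"):   # assumed channel names
        q = Queue.Queue()
        channels[name] = q
        t = threading.Thread(target=deliver, args=(name, q))
        t.setDaemon(True)
        t.start()

    # the generating process routes each message to one channel:
    channels["rest-gw"].put(("+15550001111", "your content here"))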
[yoda]
> I really need help because my application currently can't scale. Some
> users end up getting their data 30 seconds after generation (best case)
> and up to 5 minutes after content generation. This is simply
> unacceptable. The subscribers deserve much better service if my
> startup is to
yoda wrote:
> 2)The content is sent through an SMS gateway (currently we only send
> text messages).
[...]
> 4)The users need to get the data a minimum of 5 seconds after it's
> generated. (not considering any bottlenecks external to my code).
You surely mean a "maximum of 5 seconds"! Unfortunat
Chris Curvey wrote:
> Multi-threading may help if your python program is spending all its
> time waiting for the network (quite possible). If you're CPU-bound and
> not waiting on network, then multi-threading probably isn't the answer.
Unless you are on a multi-CPU / multi-core machine.
(but mi
I guess I'd look at each part of the system independently to be sure
I'm finding the real bottleneck. (It may be Python, it may not).
Under your current system, is your python program still trying to send
messages after 5 seconds? 30 seconds, 300 seconds? (Or have the
messages been delivered to
If you have that many users, I don't know if Python really is suited
well for such a large scale application. Perhaps it'd be better suited
to do CPU-intensive tasks in a compiled language so you can max out
performance and then possibly use a UNIX-style socket to send/execute
instructions to th
Hi guys,
My situation is as follows:
1)I've developed a service that generates content for a mobile service.
2)The content is sent through an SMS gateway (currently we only send
text messages).
3)I've got a million users (and climbing).
4)The users need to get the data a minimum of 5 seconds after