Dennis Lee Bieber wrote:
> If a process is known to be CPU bound, I think it is typical
> practice to "nice" the process... Lowering its priority by direct
> action.
Yes, but one usually only bothers with this for long-running
tasks. It's a nicety, not an absolute requirement.
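For a long-running worker, the renice can even come from inside the
process itself. A minimal sketch, assuming a Unix system (the increment
of 10 is arbitrary):

    import os

    # os.nice(increment) adds the increment to the current nice value
    # and returns the new value; a positive increment lowers priority.
    # Unix only; lowering priority requires no special privileges.
    new_level = os.nice(10)
    print("now running at nice level %d" % new_level)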
It seems lik
Dennis Lee Bieber <[EMAIL PROTECTED]> wrote:
...
> Think VMS was the most applicable for that behavior... Haven't seen
> any dynamic priorities on the UNIX/Linux/Solaris systems I've
> encountered...
Dynamic priority scheduling is extremely common in Unixen today (and has
been for many ye
John Nagle wrote:
> C gets to
> run briefly, drains out the pipe, and blocks. P gets to run,
> fills the pipe, and blocks. The compute-bound thread gets to run,
> runs for a full time quantum, and loses the CPU to C. Wash,
> rinse, repeat.
I thought that unix schedulers were usually a bit more
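A toy reproduction of that ping-pong, assuming Unix fork() and an
ordinary pipe (chunk and transfer sizes are arbitrary): each side
blocks the moment the pipe is full or empty, so the kernel bounces
between the two processes for the whole transfer.

    import os

    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                      # child: the consumer C
        os.close(w)
        while os.read(r, 4096):       # blocks whenever the pipe is empty
            pass
        os._exit(0)
    else:                             # parent: the producer P
        os.close(r)
        chunk = b"x" * 4096
        for _ in range(10000):        # ~40 MB total
            os.write(w, chunk)        # blocks whenever the pipe is full
        os.close(w)
        os.waitpid(pid, 0)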
Karthik Gurusamy wrote:
> On Jul 2, 10:57 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
>
> I have found the stop-and-go between two processes on the same machine
> leads to very poor throughput. By stop-and-go, I mean the producer and
> consumer are constantly getting on and off of th
"Karthik Gurusamy" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
|If all you had is just two processes, P and C, and the amount of data
|flowing is small (say on the order of tens of buffer-sizes, e.g. 20
|times 4k), *a lot* may not be the right quantifier.
Have pipe buffer sizes real
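On Linux 2.6.35 and later, a pipe's capacity can actually be queried
(and resized) through fcntl; the constant is written out below since
the fcntl module has not always exposed it. Pipes were one page (4 KiB)
on old kernels and default to 64 KiB on modern Linux.

    import fcntl, os

    F_GETPIPE_SZ = 1032               # Linux-specific fcntl command

    r, w = os.pipe()
    print("pipe capacity: %d bytes" % fcntl.fcntl(r, F_GETPIPE_SZ))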
On Jul 3, 2:33 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> > If the problem does not require two-way communication, which is
> > typical of a producer-consumer, it is a lot faster to allow P to fully
> > run before C is started.
>
> Why do you say it's *a lot* faster? I find that it is a lit
> If the problem does not require two-way communication, which is
> typical of a producer-consumer, it is a lot faster to allow P to fully
> run before C is started.
Why do you say it's *a lot* faster? I find that it is only a little faster.
The only additional overhead from switching back and forth be
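Concretely, the "P fully runs before C" variant just means staging the
data somewhere with no size limit, typically a file. A sketch under that
assumption; "producer" and "consumer" below are placeholder command
names, not real programs:

    import os, subprocess, tempfile

    # Stage the producer's entire output in a temp file, then hand it
    # to the consumer, so neither process ever blocks on the other.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        subprocess.check_call(["producer"], stdout=tmp)
    with open(tmp.name, "rb") as staged:
        subprocess.check_call(["consumer"], stdin=staged)
    os.unlink(tmp.name)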
On Jul 2, 10:57 pm, "Martin v. Löwis" <[EMAIL PROTECTED]> wrote:
> >>> I have found the stop-and-go between two processes on the same machine
> >>> leads to very poor throughput. By stop-and-go, I mean the producer and
> >>> consumer are constantly getting on and off of the CPU since the pipe
> >>>
Steve Holden wrote:
> Karthik Gurusamy wrote:
>
>> On Jul 1, 12:38 pm, dlomsak <[EMAIL PROTECTED]> wrote:
>
> [...]
>
>>
>> I have found the stop-and-go between two processes on the same machine
>> leads to very poor throughput. By stop-and-go, I mean the producer and
>> consumer are constantly
dlomsak wrote:
> Paul Rubin wrote:
>
>>dlomsak <[EMAIL PROTECTED]> writes:
>>
>>>knowledge of the topic to help. If the above are not possible but you
>>>have a really good idea for zipping large amounts of data from one
>>>program to another, I'd like to hear it.
> Well, I was using the regular
>>> I have found the stop-and-go between two processes on the same machine
>>> leads to very poor throughput. By stop-and-go, I mean the producer and
>>> consumer are constantly getting on and off of the CPU since the pipe
>>> gets full (or empty for consumer). Note that a producer can't run at
>>>
If both the search server and the web server/script are on the same
computer, you could use POSH (http://poshmodule.sourceforge.net/) for
memory sharing, or on Unix you can use mmap.
This is way faster than using sockets and doesn't require the
serialization/deserialization step.
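A minimal sketch of the mmap route, assuming a Unix fork() so the
anonymous mapping is inherited; the consumer reads the bytes in place
with no pickling at either end:

    import mmap, os

    SIZE = 8 * 1024 * 1024
    buf = mmap.mmap(-1, SIZE)         # anonymous, shared across fork

    pid = os.fork()
    if pid == 0:                      # child: producer writes in place
        buf[:5] = b"hello"
        os._exit(0)
    else:
        os.waitpid(pid, 0)            # then read the shared bytes directly
        print(buf[:5])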
On Jul 2, 6:32 pm, Steve Holden <[EMAIL PROTECTED]> wrote:
> Karthik Gurusamy wrote:
> > On Jul 2, 3:01 pm, Steve Holden <[EMAIL PROTECTED]> wrote:
> >> Karthik Gurusamy wrote:
> >>> On Jul 1, 12:38 pm, dlomsak <[EMAIL PROTECTED]> wrote:
> >> [...]
>
> >>> I have found the stop-and-go between two p
Karthik Gurusamy wrote:
> On Jul 2, 3:01 pm, Steve Holden <[EMAIL PROTECTED]> wrote:
>> Karthik Gurusamy wrote:
>>> On Jul 1, 12:38 pm, dlomsak <[EMAIL PROTECTED]> wrote:
>> [...]
>>
>>> I have found the stop-and-go between two processes on the same machine
>>> leads to very poor throughput. By sto
On Jul 2, 3:01 pm, Steve Holden <[EMAIL PROTECTED]> wrote:
> Karthik Gurusamy wrote:
> > On Jul 1, 12:38 pm, dlomsak <[EMAIL PROTECTED]> wrote:
> [...]
>
> > I have found the stop-and-go between two processes on the same machine
> > leads to very poor throughput. By stop-and-go, I mean the producer
Karthik Gurusamy wrote:
> On Jul 1, 12:38 pm, dlomsak <[EMAIL PROTECTED]> wrote:
[...]
>
> I have found the stop-and-go between two processes on the same machine
> leads to very poor throughput. By stop-and-go, I mean the producer and
> consumer are constantly getting on and off of the CPU since t
On Jul 1, 12:38 pm, dlomsak <[EMAIL PROTECTED]> wrote:
> Thanks for the responses folks. I'm starting to think that there is
> merely an inefficiency in how I'm using the sockets. The expensive
> part of the program is definitely the socket transfer because I timed
> each part of the routine indivi
Okay, I'm back at work and got to put some of these suggestions to use.
cPickle is doing a great job of hiking up the serialization rate, and
cutting out the "+= data" concatenation helped a lot too. The entire
search process for this same data set is now down to about 4-5 seconds
from pressing 'search' to having the
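For anyone finding this later, a sketch of those two changes, assuming
conn is the connected socket:

    try:
        import cPickle as pickle      # C implementation (Python 2)
    except ImportError:
        import pickle

    def recv_all(conn):
        # Accumulate chunks in a list and join once at the end;
        # repeated "data += chunk" recopies everything received so
        # far on each iteration, quadratic in the total size.
        chunks = []
        while True:
            chunk = conn.recv(65536)
            if not chunk:             # peer closed the connection
                break
            chunks.append(chunk)
        return b"".join(chunks)

On the sending side, pickle.dumps(results, 2) selects a binary pickle
protocol, which is both smaller and faster than the default protocol 0.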
dlomsak <[EMAIL PROTECTED]> wrote:
...
> search and return takes a fraction of a second. A large return (in
> this case 21,000 records, 8.3 MB) is taking 18 seconds. 15 of those
> seconds are spent sending the serialized results from the server to
> the client. I did a little bit of a blind
Martin v. Löwis wrote:
> > I guess now I'd like to know what are good practices in general to get
> > better results with sockets on the same local machine. I'm only
> > instantiating two sockets total right now - one client and one server,
> > and the transfer is taking 15 seconds for only 8.3MB.
> I guess now I'd like to know what are good practices in general to get
> better results with sockets on the same local machine. I'm only
> instantiating two sockets total right now - one client and one server,
> and the transfer is taking 15 seconds for only 8.3MB.
It would be good if you had sh
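For scale: 8.3 MB in 15 seconds is roughly 0.55 MB/s, orders of
magnitude below what loopback TCP can sustain, which points at per-call
overhead rather than the copying itself. A sketch of the usual knobs
(the function name is illustrative):

    import socket

    def send_payload(sock, payload):
        # Disable Nagle's algorithm so small trailing segments are not
        # held back waiting for an ACK.
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        # sendall() loops until every byte is written; a bare send()
        # may write only part of the buffer and return.
        sock.sendall(payload)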
Thanks for the responses folks. I'm starting to think that there is
merely an inefficiency in how I'm using the sockets. The expensive
part of the program is definitely the socket transfer because I timed
each part of the routine individually. For a small return, the whole
search and return takes a
"Martin v. Löwis" <[EMAIL PROTECTED]> writes:
> > If this is a Linux server, it might be possible to use the SCM_RIGHTS
> > message to pass the socket between processes.
>
> I very much doubt that the OP's problem is what he thinks it is,
> i.e. that copying over a local TCP connection is what mak
> If this is a Linux server, it might be possible to use the SCM_RIGHTS
> message to pass the socket between processes.
I very much doubt that the OP's problem is what he thinks it is,
i.e. that copying over a local TCP connection is what makes his
application slow.
> That would require a
> patch
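For later readers: since Python 3.3 the socket module exposes
sendmsg/recvmsg, so SCM_RIGHTS no longer needs a patch. A minimal
sketch over a connected Unix-domain socket:

    import array, socket

    def send_fd(sock, fd):
        # One byte of ordinary data must accompany the ancillary
        # SCM_RIGHTS message carrying the descriptor.
        fds = array.array("i", [fd])
        sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fds)])

    def recv_fd(sock):
        fds = array.array("i")
        msg, ancdata, flags, addr = sock.recvmsg(
            1, socket.CMSG_LEN(fds.itemsize))
        for level, ctype, data in ancdata:
            if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
                fds.frombytes(data[:fds.itemsize])
                return fds[0]
        raise RuntimeError("no descriptor received")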
"Martin v. Löwis" <[EMAIL PROTECTED]> writes:
> No. The CGI script has a file handle, and it is not possible to pass
> a file handle to a different process.
>
> > If there is not a good Pythonic way to do the above, I am open to
> > mixing in some C to do the job if that is what it takes.
>
> No,
> b) use a single Python server (possibly shared with the database
>    process), and connect this to Apache through the
>    reverse proxy protocol.
Following up to myself: Instead of using a reverse proxy, you can
also implement the FastCGI protocol in the server.
Regards,
Martin
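A sketch of the FastCGI route, assuming the third-party flup package is
installed (the application below is a stub and the port is arbitrary):

    from flup.server.fcgi import WSGIServer

    def app(environ, start_response):
        # A real app would query the long-lived in-process index here.
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"results would come from the resident search server\n"]

    WSGIServer(app, bindAddress=("127.0.0.1", 8888)).run()

Apache then speaks FastCGI to 127.0.0.1:8888 (e.g. via mod_fastcgi),
and the search data stays loaded in the one persistent process.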
> I have searched a good deal about this topic and have not found
> any good information yet. It seems that the people asking all want
> something a bit different than what I want and also don't divulge much
> about their intentions. I wish to improve the rate of data transfer
> between two pyt
Paul Rubin wrote:
> dlomsak <[EMAIL PROTECTED]> writes:
> > knowledge of the topic to help. If the above are not possible but you
> > have a really good idea for zipping large amounts of data from one
> > program to another, I'd like to hear it.
>
> One cheesy thing you might try is serializing wi
dlomsak <[EMAIL PROTECTED]> writes:
> knowledge of the topic to help. If the above are not possible but you
> have a really good idea for zipping large amounts of data from one
> program to another, I'd like to hear it.
One cheesy thing you might try is serializing with marshal rather than
pickle.
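A sketch of the marshal variant; the trade-off is that marshal only
handles core built-in types and its format is not guaranteed stable
across Python versions, which is fine when both ends run the same
interpreter on the same machine:

    import marshal

    # marshal is the serializer used for .pyc files: fast, but limited
    # to None/bools/numbers/strings and containers of these.
    payload = marshal.dumps({"id": 42, "title": "a record"})
    record = marshal.loads(payload)
    assert record["id"] == 42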
On 6/30/07, dlomsak <[EMAIL PROTECTED]> wrote:
> If there is not a good Pythonic way to do the above, I am open to
> mixing in some C to do the job if that is what it takes. I apologize
> if this topic has been brought up many times before but hopefully I
> have stated my intentions clearly enough
Hello,
I have searched a good deal about this topic and have not found
any good information yet. It seems that the people asking all want
something a bit different than what I want and also don't divulge much
about their intentions. I wish to improve the rate of data transfer
between two python