Abhishek Pratap wrote:
>
> My application is not I/O bound, as far as I can understand it. Each
> line is read and then processed independently of the others. Maybe
> this sounds I/O intensive, since N files will be read, but I think if
> I have 10 processes running under a parent then it might not be I/O
> bound.
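
That parent-with-10-workers layout maps directly onto the stdlib
multiprocessing module. A minimal sketch, with the per-chunk work left
as a placeholder:

from multiprocessing import Process

def work(chunk_id):
    # Placeholder: each child would read and process its own
    # slice of the input file here.
    print("processing chunk", chunk_id)

if __name__ == "__main__":
    # The parent process spawns 10 children and waits for them all.
    children = [Process(target=work, args=(i,)) for i in range(10)]
    for p in children:
        p.start()
    for p in children:
        p.join()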

Hi All

@Roy: unix split sounds good, but will it be as efficient as opening
10 different file handles on the same file? I haven't tried it, so I'm
just wondering if you have any experience with it.

Thanks for your input. Also, I was not aware of Python's GIL
limitation.

My application is not I/O bound, as far as I can understand it. Each
line is read and then processed independently of the others. Maybe
this sounds I/O intensive, since N files will be read, but I think if
I have 10 processes running under a parent then it might not be I/O
bound.
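
On the split-versus-handles question: split(1) writes ten new files,
i.e. it copies the whole 10 GB before any processing starts, whereas
seeking within the original file copies nothing. Here is a minimal
sketch of the seek approach; the helper names, the chunk count, and
the line-alignment logic are illustrative, not something from the
thread:

import os

def chunk_offsets(path, n):
    # Split [0, filesize) into n byte ranges whose boundaries fall on
    # line breaks, so every line belongs to exactly one chunk.
    size = os.path.getsize(path)
    offsets = []
    start = 0
    with open(path, "rb") as f:
        for i in range(1, n + 1):
            if i == n:
                end = size
            else:
                f.seek(max(size * i // n, start))
                f.readline()          # advance to the end of the current line
                end = f.tell()
            offsets.append((start, end))
            start = end
    return offsets

def process_chunk(path, start, end):
    # Each worker opens its *own* handle, so there is no shared
    # file position to fight over.
    with open(path, "rb") as f:
        f.seek(start)
        while f.tell() < end:
            line = f.readline()
            # ... process one line independently ...

Whether this beats split depends on the disk: on a single spindle,
ten readers can seek-thrash and end up slower than one sequential
pass.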

aspineux wrote:
> On Sep 9, 12:49 am, Abhishek Pratap wrote:
> > 1. My input file is 10 GB.
> > 2. I want to open 10 file handles each handling 1 GB of the file.
> > 3. Each file handle is processed by an individual thread using the
> > same function (so 10 cores in total are assumed to be available on
> > the machine).

On Sep 9, 12:49 am, Abhishek Pratap wrote:
> Hi Guys
>
> My experience with Python is 2 days and I am looking for a slick way
> to use multi-threading to process a file. Here is what I would like
> to do, which is somewhat similar to MapReduce in concept.
>
> # test case
>
> 1. My input file is 10 GB.

Abhishek Pratap wrote:
> Hi Guys
>
> My experience with Python is 2 days and I am looking for a slick way
> to use multi-threading to process a file. Here is what I would like
> to do, which is somewhat similar to MapReduce in concept.
>
> # test case
>
> 1. My input file is 10 GB.
> 2. I want to open 10 file handles each handling 1 GB of the file.

Abhishek Pratap wrote:
> 3. Each file handle is processed by an individual thread using the
> same function (so 10 cores in total are assumed to be available on
> the machine).

Are you expecting the processing to be CPU bound or I/O bound?

If it's I/O bound, multiple cores won't help you, and neither will
multiple threads.
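
And if it turns out to be CPU bound, CPython's GIL means 10 threads
still execute bytecode one at a time, so the usual fix is processes
instead. A small sketch of the contrast, with a stand-in crunch()
function in place of the real per-line work (multiprocessing.dummy
provides a thread-backed Pool with the same API):

from multiprocessing import Pool                      # worker processes
from multiprocessing.dummy import Pool as ThreadPool  # same API, threads

def crunch(n):
    # Stand-in for CPU-bound work: pure Python arithmetic.
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    jobs = [10 ** 6] * 10

    pool = Pool(processes=10)        # 10 processes: can use 10 cores
    fast = pool.map(crunch, jobs)
    pool.close()
    pool.join()

    tpool = ThreadPool(10)           # 10 threads: GIL-serialized, so
    slow = tpool.map(crunch, jobs)   # roughly one core's throughput
    tpool.close()
    tpool.join()

    assert fast == slow              # same answers, different speed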

Hi Guys

My experience with Python is 2 days and I am looking for a slick way
to use multi-threading to process a file. Here is what I would like
to do, which is somewhat similar to MapReduce in concept.

# test case

1. My input file is 10 GB.
2. I want to open 10 file handles each handling 1 GB of the file.
3. Each file handle is processed by an individual thread using the
same function (so 10 cores in total are assumed to be available on
the machine).
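
Since the stated model is MapReduce-like, the piece this plan still
needs is the reduce step: each worker returns a partial result for its
chunk and the parent merges them. A sketch under the same assumptions
as above; word counting stands in for the real per-line work, and
chunk_offsets is the hypothetical offset helper from the earlier
sketch, assumed saved in a module of the same name:

from collections import Counter
from multiprocessing import Pool

from chunk_offsets import chunk_offsets  # hypothetical helper module

def map_chunk(args):
    # Map step: one process counts words in its own slice of the file.
    path, start, end = args
    counts = Counter()
    with open(path, "rb") as f:
        f.seek(start)
        while f.tell() < end:
            for word in f.readline().split():
                counts[word] += 1
    return counts

if __name__ == "__main__":
    path = "big_input.txt"                      # placeholder filename
    tasks = [(path, s, e) for s, e in chunk_offsets(path, 10)]
    pool = Pool(processes=10)
    total = Counter()
    for partial in pool.map(map_chunk, tasks):  # reduce step: merge
        total.update(partial)                   # partial counts
    pool.close()
    pool.join()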