Hey Guys
I am wondering if I can execute/run Python functions through the
multiprocessing package on a grid/cluster rather than on the same local
machine. It would help me create hundreds of jobs that all apply the same
function and farm them out to our local cluster through DRMAA. I am not
sure if t[...] (from IBM). I realize this may be better suited to HDFS,
but I wanted to know if people have implemented something similar on a
normal Linux-based NFS setup.
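Something like the sketch below is what I am picturing (purely
illustrative, assuming the drmaa Python bindings are available on the
submit host; run_chunk.py is a made-up stand-in for the script that wraps
my function):

import drmaa

def submit_array_job(n_tasks):
    # Submit n_tasks copies of the same worker script as one DRMAA bulk (array) job.
    with drmaa.Session() as s:
        jt = s.createJobTemplate()
        jt.remoteCommand = 'run_chunk.py'               # placeholder worker script
        jt.args = [drmaa.JobTemplate.PARAMETRIC_INDEX]  # task index arrives as argv[1]
        jt.joinFiles = True                             # merge stdout/stderr per task
        job_ids = s.runBulkJobs(jt, 1, n_tasks, 1)      # indices 1..n_tasks, step 1
        s.deleteJobTemplate(jt)
    return job_ids

if __name__ == '__main__':
    print(submit_array_job(100))

Each task would look at its index to decide which slice of the input to
work on, so the same function runs hundreds of times without
multiprocessing ever leaving one box.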
-Abhi
On Mon, Mar 26, 2012 at 6:44 PM, Steve Howell wrote:
> On Mar 26, 3:56 pm, Abhishek Pratap wrote:
>> Hi Guys
>>
>> I am fwd[...]

[...]ading using python
To: tu...@python.org
Abhishek Pratap wrote:
>
> Hi Guys
>
>
> I want to utilize the power of the cores on my server and read big files
> (>50 GB) simultaneously by seeking to N locations.
Yes, you have many cores on the server. But how many hard drives is each
file on? If it all sits on a single disk, N processes seeking to N
different locations will mostly just make the drive thrash, and the extra
cores will not buy you much.
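For what it is worth, if the storage really can keep up, the fiddly part
is usually choosing the N seek positions so you do not cut a line in half.
A rough, untested sketch of just that piece (byte offsets only;
chunk_offsets is a made-up helper, not from any library):

import os

def chunk_offsets(path, n_chunks):
    # Split a file into n_chunks (start, end) byte ranges aligned to line boundaries.
    total = os.path.getsize(path)
    step = total // n_chunks
    offsets = [0]
    with open(path, 'rb') as f:
        for i in range(1, n_chunks):
            f.seek(i * step)
            f.readline()              # skip ahead to the next newline boundary
            offsets.append(f.tell())
    offsets.append(total)
    # pair consecutive offsets into (start, end) ranges for the workers
    return list(zip(offsets, offsets[1:]))

Each worker process can then open the file itself, seek to its start
offset, and stop at its end offset. Whether that beats one sequential read
depends entirely on how many spindles the data is spread across.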
Hey Guys
Pushing this one again just in case it was missed last night.
Best,
-Abhi
On Mon, Oct 31, 2011 at 10:31 PM, Abhishek Pratap wrote:
> Hey Guys
>
> I should mention I am relatively new to the language. Could you please let me
> know, based on your experience, which module could [...]
Hey Guys
I should mention I am relatively new to the language. Could you please let
me know, based on your experience, which module could help me farm out
jobs to our existing clusters (we use SGE here) using Python?
Ideally I would like to do the following (rough sketch of the idea below):
1. Submit N jobs to the cluster
2. Monitor them until they all finish
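Roughly the shape I have in mind, assuming the drmaa bindings can talk to
our SGE install (the script name, queue, and polling interval are just
placeholders):

import time
import drmaa

def submit_and_monitor(n_jobs, poll_seconds=30):
    # Submit n_jobs independent jobs, then poll until every one is done or failed.
    finished_states = (drmaa.JobState.DONE, drmaa.JobState.FAILED)
    with drmaa.Session() as s:
        jt = s.createJobTemplate()
        jt.remoteCommand = 'my_task.sh'      # placeholder job script
        jt.nativeSpecification = '-q all.q'  # placeholder SGE queue
        job_ids = [s.runJob(jt) for _ in range(n_jobs)]
        s.deleteJobTemplate(jt)
        pending = set(job_ids)
        while pending:
            for jid in list(pending):
                if s.jobStatus(jid) in finished_states:
                    pending.discard(jid)
            if pending:
                time.sleep(poll_seconds)
    return job_ids

If simply blocking until everything finishes is enough, Session.synchronize()
could replace the polling loop.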
On [...] 2011 at 6:19 AM, Roy Smith wrote:
> In article [...], aspineux wrote:
>
>> On Sep 9, 12:49 am, Abhishek Pratap wrote:
>> > 1. My input file is 10 GB.
>> > 2. I want to open 10 file handles each handling 1 GB of the file
>> > 3. Each file handle is p[...]
Hi Guys
My experience with Python is two days, and I am looking for a slick way
to use multi-threading to process a file. Here is what I would like to
do, which is somewhat similar to MapReduce in concept.
# test case
1. My input file is 10 GB.
2. I want to open 10 file handles each handling 1 GB of the file
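Something like the following is the shape I am imagining. It is purely a
sketch: it uses multiprocessing processes rather than threads (my
understanding is that threads will not help much for CPU-bound work
because of the GIL), and count_newlines is just a stand-in for whatever
the real per-chunk function ends up being:

import os
from multiprocessing import Pool

PATH = 'input_10gb.txt'   # placeholder file name
N_CHUNKS = 10

def count_newlines(bounds):
    # Map step: seek to this chunk's start and count newlines in the next size bytes.
    start, size = bounds
    count = 0
    with open(PATH, 'rb') as f:
        f.seek(start)
        remaining = size
        while remaining > 0:
            block = f.read(min(1 << 20, remaining))   # read in 1 MB blocks
            if not block:
                break
            count += block.count(b'\n')
            remaining -= len(block)
    return count

if __name__ == '__main__':
    total = os.path.getsize(PATH)
    step = total // N_CHUNKS
    bounds = [(i * step, step if i < N_CHUNKS - 1 else total - i * step)
              for i in range(N_CHUNKS)]
    pool = Pool(N_CHUNKS)
    counts = pool.map(count_newlines, bounds)   # map: one chunk per process
    pool.close()
    pool.join()
    print(sum(counts))                          # reduce: combine per-chunk results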