Thanks for your prompt reply. Why would I have to use atexit?
According to the documentation, curses.wrapper should handle what
cleanup() should be doing.
Nevertheless, good to know it exists :p
On Sat, Dec 31, 2011 at 2:34 PM, Alexander Kapps wrote:
> On 31.12.2011 20:24, Mag Gam wrote:
Hello,
I have been struggling to reset the terminal when a
KeyboardInterrupt exception occurs, so I read the documentation for
curses.wrapper and it seems to take care of it for me:
http://docs.python.org/library/curses.html#curses.wrapper.
Can someone please provide a Hello World example?
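Something like this minimal sketch is what I have in mind; wrapper()
is documented to restore the terminal even when the callable raises
(KeyboardInterrupt included), then re-raise the exception:

import curses

def main(stdscr):
    # wrapper() has already called initscr(), cbreak() and noecho();
    # it undoes all of that on exit, even if this function raises.
    stdscr.addstr(0, 0, "Hello World -- press any key (Ctrl-C also exits cleanly)")
    stdscr.refresh()
    stdscr.getch()

curses.wrapper(main)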
Then I would like to bucket the findings by day (date).
Overall, I would like to build a log file analyzer.
On Sat, Apr 2, 2011 at 10:59 PM, Dan Stromberg wrote:
>
> On Sat, Apr 2, 2011 at 5:24 PM, Chris Angelico wrote:
>>
>> On Sun, Apr 3, 2011 at 9:58 AM, Mag Gam wrote:
I have a file like this,
cat file
aaa
bbb
aaa
aaa
aaa
awk '{x[$1]++} END { for (i in x) {print i, x[i]} }' file
bbb 1
aaa 4
I suppose I can do something like this.
(pseudocode)
d = {}
try:
    d[key] += 1
except KeyError:
    d[key] = 1
I was wondering if there is a pythonic way of doing this? I plan on
building a log file analyzer with it.
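One pythonic way is collections.Counter (Python 2.7+), which does the
try/except dance internally; a sketch using the 'file' name from the
example above:

from collections import Counter

# Count how many times each line appears, like the awk one-liner.
with open('file') as f:
    counts = Counter(line.strip() for line in f)

for key, n in counts.items():
    print('%s %d' % (key, n))

For the date bucketing, the same idea extends to a Counter keyed on
(key, date) tuples, or a defaultdict(Counter) keyed by date.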
Hello,
When measuring round trip time for the UDP echo client/server, the C
version is much faster. I was wondering if there is anything I can do
to speed it up.
My current code for the client looks like this:
sock = socket(AF_INET, SOCK_DGRAM)
for x in range(1000):
    sock.sendto("foo", (server, port))
I am having some trouble understanding how padding/windowing works for
Python curses. According to this example on
http://docs.python.org/howto/curses.html I see:
pad = curses.newpad(100, 100)
# These loops fill the pad with letters; this is
# explained in the next section
for y in range(0, 100):
    for x in range(0, 100):
        try: pad.addch(y, x, ord('a') + (x*x + y*y) % 26)
        except curses.error: pass
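The windowing part is the pad.refresh() call that the same howto goes
on to show: unlike a window's refresh(), it takes six coordinates,
picking which rectangle of the big pad lands where on the real screen:

# Display a section of the pad in the middle of the screen.
# The first two coordinates are the upper-left corner of the pad
# region to show; the last four are the top-left and bottom-right
# of the screen area it gets painted onto.
pad.refresh(0, 0, 5, 5, 20, 75)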
Thanks for your response.
I was going by this thread,
http://mail.python.org/pipermail/tutor/2009-January/066101.html which
makes you wonder if it's even possible.
I will try your first solution by doing mkfifo on the files.
On Thu, Sep 9, 2010 at 9:19 PM, Alain Ketterlin wrote:
> Mag Gam writes:
I have 3 files which are constantly being updated therefore I use tail
-f /var/log/file1, tail -f /var/log/file2, and tail -f /var/log/file3
For 1 file I am able to manage by
tail -f /var/log/file1 | python prog.py
prog.py looks like this:
import sys

f = sys.stdin
for line in f:
    print line
But how can I get all 3 files at once?
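One way (a sketch, not the only one): spawn the three tail -f
processes from Python and multiplex their pipes with select(), so a
single prog.py sees all three streams:

import os
import select
import subprocess
import sys

files = ['/var/log/file1', '/var/log/file2', '/var/log/file3']
procs = [subprocess.Popen(['tail', '-f', path], stdout=subprocess.PIPE)
         for path in files]
fds = [p.stdout.fileno() for p in procs]

while True:
    # Block until at least one tail has produced output.
    ready, _, _ = select.select(fds, [], [])
    for fd in ready:
        chunk = os.read(fd, 4096)          # raw read avoids pipe buffering
        if chunk:
            os.write(sys.stdout.fileno(), chunk)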
Just curious if anyone has had the chance to build pypy in a 64-bit
environment and to see if it really makes a huge difference in
performance. I would like to hear some thoughts (or alternatives).
Currently, I have a bash shell script which does timing for me. For
example, if I have a Unix command I typically run time against it
10 times and then get an average. It works fine, but I have to write a
post-processing script to extract the times and then graph them with
matplotlib.
I was wondering if there is a cleaner way to do all of this in Python.
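A sketch of the idea (the command and run count are placeholders):
time the command from Python directly, so the samples are already in a
list matplotlib can plot, with no post-processing step:

import subprocess
import time

def time_command(cmd, runs=10):
    # Run the command `runs` times and return the wall-clock samples.
    samples = []
    for _ in range(runs):
        start = time.time()
        subprocess.call(cmd, shell=True)
        samples.append(time.time() - start)
    return samples

samples = time_command('sleep 0.1')          # placeholder command
print('average: %.4f s' % (sum(samples) / len(samples)))
# `samples` can go straight to matplotlib, e.g. plt.plot(samples)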
On Sat, Jun 26, 2010 at 2:39 AM, Dennis Lee Bieber wrote:
> On Fri, 25 Jun 2010 19:41:43 -0400, Mag Gam
> declaimed the following in gmane.comp.python.general:
>
>> Thanks everyone for your responses. They were very useful and I am
>> glad I asked the question.
>>
>>
def Left(self):
    return 'Left at: '
Does that look right? Let's say I want to figure out how long each
train waited on the platform.
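For the wait-time question, one sketch (the class and attribute names
are guesses at the design, not the poster's actual code) is to have
the methods record timestamps and subtract them:

import datetime

class Train(object):
    def Arrived(self):
        self.arrived_at = datetime.datetime.now()
        return 'Arrived at: %s' % self.arrived_at

    def Left(self):
        self.left_at = datetime.datetime.now()
        return 'Left at: %s' % self.left_at

    def WaitTime(self):
        # How long the train sat on the platform, as a timedelta.
        return self.left_at - self.arrived_at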
On Fri, Jun 25, 2010 at 3:00 AM, geremy condra wrote:
> On Thu, Jun 24, 2010 at 9:04 AM, Alf P. Steinbach /Usenet wrote:
>> * Mag Gam, on
I have been using python for about 1 year now and I really like the
language. Obviously there was a learning curve but I have a programming
background, which made it an easy transition. I picked up some good
habits such as automatic code indenting :-), and making my programs
more modular by having functions.
I am looking for a simple multi-threaded example.
Let's say I have to ssh to 20 servers and I would like to do that in
parallel. Can someone please provide an example for that?
thanks
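A minimal sketch (hostnames and the remote command are placeholders;
assumes key-based ssh auth is already set up): one thread per server,
each shelling out to ssh:

import subprocess
import threading

def check(host):
    # Run a command on the remote host over ssh.
    rc = subprocess.call(['ssh', host, 'uptime'])
    print('%s exited with %d' % (host, rc))

hosts = ['server%02d' % i for i in range(1, 21)]   # placeholder names
threads = [threading.Thread(target=check, args=(h,)) for h in hosts]
for t in threads:
    t.start()
for t in threads:
    t.join()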
I have a file with a bunch of nfsstat -c outputs (from AIX) which has
all the hostnames, for example:
r1svr==
Client rpc:
Connection oriented
calls badcalls badxids timeouts newcreds badverfs timers
0 0 0 0 0 0 0
nomem cantconn i
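The question is cut off above, but for parsing output shaped like that
sample, a heavily guessed-at sketch (the file name and the exact
nfsstat layout are assumptions): split on the "hostname==" markers and
zip each header row with the numeric row under it:

stats = {}
host = None
header = None
for line in open('nfsstat.out'):
    line = line.strip()
    if line.endswith('=='):
        host = line[:-2]                     # "r1svr==" -> "r1svr"
        stats[host] = {}
        header = None
    elif host and header and line[:1].isdigit():
        stats[host].update(zip(header, line.split()))   # values row
        header = None
    elif host and line and not line.endswith(':') and line != 'Connection oriented':
        header = line.split()                # e.g. "calls badcalls ..."

print(stats.get('r1svr'))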
Oh, that's nice to know!
But I use the csv module with the gzip module. Is it still possible to
do it with subprocess?
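It should be, since csv.reader just wants an iterable of lines; a
Python 2 era sketch (the file name is a placeholder) piping zcat into
the csv module:

import csv
import subprocess

# Read the compressed file through zcat instead of the gzip module;
# the pipe's stdout is a file object csv.reader can iterate.
p = subprocess.Popen(['zcat', 'data.csv.gz'], stdout=subprocess.PIPE)
for row in csv.reader(p.stdout):
    pass          # process each row here
p.wait()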
On Thu, Apr 8, 2010 at 7:31 AM, Stefan Behnel wrote:
> Mag Gam, 08.04.2010 13:21:
>>
>> I am in the process of reading a zipped file which is about 6gb.
I am in the process of reading a zipped file which is about 6gb.
I would like to know if there is a command similar to grep in python,
because I would like to emulate the -A and -B options of GNU grep.
Let's say I have this:
083828.441,AA
093828.441,AA
094028.441,AA
094058.441,CC
094828.441,AA
103828.441,
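A sketch of -B/-A emulation (the pattern, context sizes, and file name
are placeholders): keep a small ring buffer of preceding lines plus a
countdown of lines still owed after a match:

import sys
from collections import deque

BEFORE, AFTER = 2, 2                 # like grep -B2 -A2
pattern = 'CC'                       # placeholder pattern
prev = deque(maxlen=BEFORE)          # ring buffer of preceding lines
after = 0                            # lines still to print post-match

for line in open('data.txt'):
    if pattern in line:
        for p in prev:
            sys.stdout.write(p)
        sys.stdout.write(line)
        prev.clear()
        after = AFTER
    elif after:
        sys.stdout.write(line)
        after -= 1
    else:
        prev.append(line)

For the gzipped file itself, gzip.open() drops in for open().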
Sorry for the vague answer.
It's Linux.
The configure/build doesn't actually report anything. This is for SAGE.
I managed to have it pick Tk up by compiling/installing tcl and tk and
then recompiling python.
On Wed, Feb 24, 2010 at 4:50 PM, Diez B. Roggisch wrote:
> On 24.02.10 03:00, Mag Gam wrote:
I am trying to compile python with Tk bindings. Do I need to do
anything special for python to detect and enable Tk?
This is mainly for matplotlib's TkAgg backend. It seems my compiled
version of python isn't finding the module _tkagg.
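A quick way to check whether a freshly built interpreter actually
picked up Tk (module names as in Python 2; it is 'tkinter' on
Python 3):

# Both imports raise ImportError if configure did not find tcl/tk.
import _tkinter
import Tkinter
print(_tkinter.TK_VERSION)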
Hello All,
I used tcpdump to capture data on my network. I would like to analyze
the data using python -- currently I am using ethereal and wireshark.
I would like to extract certain types of packets (I can get the hex
code for them); what is the best way to do this? Let's say I want to
capture all events of that type.
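One dependency-free sketch (the file name, offsets, and signature
bytes are placeholders; it also assumes a little-endian pcap file):
walk the raw pcap records with struct and keep packets whose bytes
match the hex code:

import binascii
import struct

SIGNATURE = b'\x08\x00'       # placeholder: e.g. EtherType IPv4 at offset 12

f = open('capture.pcap', 'rb')
f.read(24)                    # skip the pcap global header
while True:
    rec = f.read(16)          # per-packet record header
    if len(rec) < 16:
        break
    ts_sec, ts_usec, incl_len, orig_len = struct.unpack('<IIII', rec)
    pkt = f.read(incl_len)
    if pkt[12:14] == SIGNATURE:           # match on the captured hex code
        print('%d %s' % (ts_sec, binascii.hexlify(pkt[:32])))
f.close()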
It's not a google/bing issue. It's about getting opinions from the
community and hearing their experiences. I wanted to hear opinions and
war stories from people who have tried using ssh with python, that's all.
On Fri, Sep 4, 2009 at 8:43 AM, Diez B. Roggisch wrote:
> Mag Gam wrote:
>
Hello,
Currently, I am using a bash script to ssh into 400 servers and get an
output, i.e. check if a file exists on a local filesystem. I am doing
this linearly, however I am interested in doing this with threads and,
more importantly, using the Python standard threading library.
My pseudo code would be something like this:
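(The original pseudocode is cut off; this sketch stands in for it.
Hostnames, worker count, and the remote path are placeholders.) A
fixed pool of worker threads pulling hosts off a Queue keeps the
concurrency bounded for 400 servers:

import Queue            # 'queue' on Python 3
import subprocess
import threading

NUM_WORKERS = 20
q = Queue.Queue()

def worker():
    while True:
        host = q.get()
        if host is None:        # poison pill: time to exit
            break
        rc = subprocess.call(['ssh', host, 'test', '-e', '/etc/motd'])
        print('%s: %s' % (host, 'exists' if rc == 0 else 'missing'))
        q.task_done()

threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
for t in threads:
    t.start()
for host in ['server%03d' % i for i in range(400)]:
    q.put(host)
q.join()                         # wait until every host is processed
for _ in threads:
    q.put(None)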
Is there something similar to NetSSH
(http://search.cpan.org/dist/Net-SSH-Perl/) for python?
XML is a structured file. I never knew you could read it incrementally
and process it as you go: iterparse().
More info on iterparse():
http://effbot.org/zone/element-iterparse.htm
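The pattern from that page, roughly (the file name and tag names are
placeholders): handle each element as its end tag arrives, then clear
it so memory stays flat:

import xml.etree.ElementTree as ET

for event, elem in ET.iterparse('data.xml', events=('end',)):
    if elem.tag == 'record':             # placeholder element name
        print(elem.findtext('name'))     # placeholder child element
        elem.clear()                     # free the element once handled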
On Thu, Aug 27, 2009 at 10:39 AM, Stefan Behnel wrote:
> loial wrote:
>> Is there a quick way to retrieve data from an xml file in python?
All, thank you very much!
Now my question is, how do I simulate argv? My program takes an
argument; running it as "foo.py File" is necessary. How and where do I
put that in my test? I suppose in setUp(), but I am not sure how.
Any thoughts or ideas?
TIA
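A sketch of the setUp() idea (assumes foo.py exposes a main() that
reads sys.argv; the module and function names are hypothetical): save
the real argv, swap in a fake one, and restore it afterwards:

import sys
import unittest

import foo                               # hypothetical module under test

class TestFoo(unittest.TestCase):
    def setUp(self):
        self._argv = sys.argv            # remember the real argv
        sys.argv = ['foo.py', 'File']    # simulate: foo.py File

    def tearDown(self):
        sys.argv = self._argv            # restore after each test

    def test_main_runs(self):
        foo.main()                       # code under test sees fake argv

if __name__ == '__main__':
    unittest.main()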
On Sun, Aug 16, 2009 at 9:25 AM
ran the program against the test data.
>
> I just get slightly confused about when "test suites" start to apply.
>
> On Fri, Aug 14, 2009 at 9:28 PM, Mag Gam wrote:
>>
>> I am writing an application which has many command line arguments.
>> For example:
So, in this example:
"import random"
In my case I would do "import foo"? Is there anything I need to do for that?
On Sat, Aug 15, 2009 at 2:24 AM, Richard Thomas wrote:
> On Aug 15, 4:28 am, Mag Gam wrote:
>> I am writing an application which has many command line arguments.
I am writing an application which has many command line arguments.
For example: foo.py -args "bar bee"
I would like to create a test suite using unittest so that when I add
features to "foo.py" I don't break other things. I just heard about
unittest and would love to use it for this type of thing.
At my university we are trying to compile python with --enable-shared,
however when I run make, many things fail. Is it a good idea to
compile python with shared libraries? It seems mod_python needs it
this way.
TIA
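For what it's worth, a common gotcha with shared builds is the freshly
built libpython not being found at link/run time; a sketch of the
usual workaround (the install prefix is a placeholder):

./configure --enable-shared --prefix=/usr/local \
    LDFLAGS="-Wl,-rpath,/usr/local/lib"
make && make install

The rpath bakes the library directory into the binary so the new
libpython is found without fiddling with LD_LIBRARY_PATH.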
On ... AM, Lawrence D'Oliveiro wrote:
> In message ..., Mag Gam wrote:
>
>> I am very new to python and I am in the process of loading a very
>> large compressed csv file into another format. I was wondering if I
>> can do this in a multi thread approach.
>
> Why b
Thanks for the response Gabriel.
On Wed, Jul 1, 2009 at 12:54 AM, Gabriel Genellina wrote:
> On Tue, 30 Jun 2009 22:52:18 -0300, Mag Gam wrote:
>
>> I am very new to python and I am in the process of loading a very
>> large compressed csv file into another format. I
Hello All,
I am very new to python and I am in the process of loading a very
large compressed csv file into another format. I was wondering if I
can do this in a multi-threaded approach.
Here is the pseudo code I was thinking about:
Let T = Total number of lines in a file, Example 100 (1 mil
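(The pseudocode is cut off above.) A sketch of the chunked idea with
processes instead of threads, since CPython threads won't parallelize
CPU-bound parsing because of the GIL (the file name, chunk size, and
convert() work are placeholders):

import csv
import multiprocessing

def convert(rows):
    # Placeholder for the real per-chunk conversion work.
    return [','.join(reversed(row)) for row in rows]

def chunks(reader, size=10000):
    # Yield the CSV in blocks of `size` rows.
    block = []
    for row in reader:
        block.append(row)
        if len(block) == size:
            yield block
            block = []
    if block:
        yield block

if __name__ == '__main__':
    pool = multiprocessing.Pool()
    f = open('big.csv')
    for result in pool.imap(convert, chunks(csv.reader(f))):
        pass                    # write each converted chunk out here
    pool.close()
    pool.join()
    f.close()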
Thank you!
On Tue, Jun 30, 2009 at 12:17 AM, Chris Rebert wrote:
> On Mon, Jun 29, 2009 at 8:47 PM, Lawrence D'Oliveiro wrote:
>> In message ..., MRAB wrote:
>>
>>> row = [r or "NULL" for r in row]
>>
>> I know that, in this particular case, all the elements of row are strings,
>> and the
On ... 2009 at 10:03 AM, Peter Otten <__pete...@web.de> wrote:
> Mag Gam wrote:
>
>> well, I am actually loading the row into a fixed width array
>>
>> reader=csv.reader(fs)
>> for s,row in enumerate(reader):
>>
> t=np.array([(row[0],row[1],row[2],row[3],row
On Sat, Jun 27, 2009 at 9:04 AM, MRAB wrote:
> Mag Gam wrote:
>>
>> I am using the csv package to parse a compressed .csv.gz file. So far
>> it's working perfectly fine, but it fails when I have a missing value
>> in one of the fields.
>>
>> For example,
I am using the csv package to parse a compressed .csv.gz file. So far
it's working perfectly fine, but it fails when I have a missing value
in one of the fields.
For example, I have this:
Abc,def,,jkl
Is it possible to fill the missing column with a null?
I want:
Abc,def,NULL,jkl
TIA
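A sketch pulling in MRAB's fix quoted earlier in this digest
(row = [r or "NULL" for r in row]); the file name is a placeholder:

import csv
import gzip

f = gzip.open('data.csv.gz')          # 'rt' mode on Python 3
for row in csv.reader(f):
    row = [r or "NULL" for r in row]  # empty field -> "NULL"
    print(','.join(row))
f.close()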
Thank you everyone for the responses! I took some of your suggestions
and my loading sped up by 25%.
On Wed, Jun 24, 2009 at 3:57 PM, Lie Ryan wrote:
> Mag Gam wrote:
>> Sorry for the delayed response. I was trying to figure this problem
>> out. The OS is Linux, BTW
I have a compressed, gzipped CSV file. I was wondering if it is
possible to seek through the file.
For example:
I want to load the first 100 lines into an array and process the data.
Then seek from line 101 to line 200 and process that (dropping lines
0-100 from memory).
Then seek from line 201 to line 300 and process that, and so on.
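gzip streams can't really random-seek, but reading sequentially in
fixed blocks gets the same effect with flat memory use; a sketch (the
file name, block size, and process() are placeholders):

import gzip
from itertools import islice

def process(block):
    pass                               # placeholder for the real work

f = gzip.open('data.csv.gz')
while True:
    block = list(islice(f, 100))       # next 100 lines, at most
    if not block:
        break
    process(block)                     # previous block is dropped here
f.close()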
dset = f.create_dataset('...', data=arr, compression='gzip')
s = 0
# Takes the longest here
for y in fs:
    a = y.split(',')
    s = s + 1
    dset.resize(s, axis=0)
fs.close()
f.close()
This works but just takes a VERY long time.
Any way to optimize this?
TIA
On Wed, Jun 24, 2009 at 12:13 AM, Chris Wi
Yes, the system has 64 GB of physical memory.
What I meant was: is it possible to load into the hdf5 data format
(basically a NumPy array) without reading the entire file first? I
would like to write to disk as I go so it would be a bit faster,
instead of having 2 copies in memory.
On Tue, Jun
Hello All,
I have a very large csv file (14 GB) and I am planning to move all of
my data to hdf5. I am using h5py to load the data. The biggest problem
I am having is that I am putting the entire file into memory and then
creating a dataset from it. This is very inefficient and it takes over
4 hours to create.
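A sketch of the streaming alternative (file names, column count,
dtype, and chunk size are all placeholders): create an extensible
dataset with maxshape=(None, ...) and append the CSV in fixed-size
blocks, so the 14 GB file never sits in memory:

import csv
import gzip

import h5py
import numpy as np

CHUNK = 100000
out = h5py.File('data.h5', 'w')
dset = out.create_dataset('data', shape=(0, 4), maxshape=(None, 4),
                          dtype='f8', compression='gzip')

fs = gzip.open('data.csv.gz')          # 'rt' mode on Python 3
block = []
for row in csv.reader(fs):
    block.append([float(v) for v in row])
    if len(block) == CHUNK:
        dset.resize(dset.shape[0] + len(block), axis=0)
        dset[-len(block):] = np.array(block)
        block = []
if block:                               # flush the final partial block
    dset.resize(dset.shape[0] + len(block), axis=0)
    dset[-len(block):] = np.array(block)
fs.close()
out.close()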