Skip Montanaro writes:
> How about just replacing *\(([^)]*)\)* with *"\1"* in a wrapper class's
> line reading method? (I think I have the re syntax approximately right.)
> The csv reader will "just work". Again, nesting parens not allowed.
>
> Skip
here is some working code:
def PReader(csvfi
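The truncated PReader above presumably applies Skip's substitution before handing lines to the csv reader. A minimal sketch of that idea (the name quote_parens and the io.StringIO sample are mine, not from the thread; nested parens are still not handled):

```python
import csv
import io
import re

def quote_parens(lines):
    """Yield lines with (A,B) groups rewritten as "A,B" so the csv
    reader sees a single quoted field. Nested parens not supported."""
    pat = re.compile(r'\(([^)]*)\)')
    for line in lines:
        yield pat.sub(r'"\1"', line)

sample = io.StringIO('ABC,PQR,(TEST1,TEST2)\nFQW,RTE,MDE\n')
rows = list(csv.reader(quote_parens(sample)))
```

The csv reader then "just works": the third column of the first row comes back as the single value TEST1,TEST2.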
> Besides, the point isn't the shortest code but to illustrate the idea
> of handling special syntax.
In my defense, I was typing on my phone while watching a show on
Netflix. I was hardly in a position to test any code. :-)
As you indicated though, the problem is under-specified (nesting?,
pres
On 24Sep2019 19:02, Skip Montanaro wrote:
How about just replacing *\(([^)]*)\)* with *"\1"* in a wrapper class's
line reading method?
Will that work if the OP's (TEST1,TEST2) term itself contains quotes?
Not that his example data did, but example data are usually incomplete
:-)
Also, tha
How about just replacing *\(([^)]*)\)* with *"\1"* in a wrapper class's
line reading method? (I think I have the re syntax approximately right.)
The csv reader will "just work". Again, nesting parens not allowed.
Skip
--
https://mail.python.org/mailman/listinfo/python-list
On 2019-09-25 00:09, Cameron Simpson wrote:
On 24Sep2019 15:55, Mihir Kothari wrote:
I am using python 3.4. I have a CSV file as below:
ABC,PQR,(TEST1,TEST2)
FQW,RTE,MDE
Really? No quotes around the (TEST1,TEST2) column value? I would have
said this is invalid data, but that does not help yo
On 24Sep2019 15:55, Mihir Kothari wrote:
I am using python 3.4. I have a CSV file as below:
ABC,PQR,(TEST1,TEST2)
FQW,RTE,MDE
Really? No quotes around the (TEST1,TEST2) column value? I would have
said this is invalid data, but that does not help you.
Basically comma-separated rows, where
>Any idea why it might throw an exception on encountering a NULL in the
>input stream? It accepts all other 255 byte values. Was this behaviour
>intended? Perhaps a comment should be added to the docs.
>Thanks for your work on the module anyway.
The original module was like this - it comes about
On 07/03/2018 07:59, Andrew McNamara wrote:
Last time I read the documentation, it was recommended that
the file be opened in BINARY mode ("rb").
It recommends binary mode, but seems to largely work fine with
text/ascii mode or even arbitrary iterables. I've not seen the
rationale behin
>> Last time I read the documentation, it was recommended that
>> the file be opened in BINARY mode ("rb").
>
>It recommends binary mode, but seems to largely work fine with
>text/ascii mode or even arbitrary iterables. I've not seen the
>rationale behind the binary recommendation, but in 10+
On 2018-03-01 23:57, John Pote wrote:
> On 01/03/2018 01:35, Tim Chase wrote:
> > While inelegant, I've "solved" this with a wrapper/generator
> >
> >f = file(fname, …)
> >g = (line.replace('\0', '') for line in f)
> I wondered about something like this but thought if there's a way
> of a
On 01/03/2018 01:35, Tim Chase wrote:
While inelegant, I've "solved" this with a wrapper/generator
f = file(fname, …)
g = (line.replace('\0', '') for line in f)
I wondered about something like this but thought if there's a way of
avoiding the extra step it would keep the execution speed u
On 01/03/2018 02:38, Dennis Lee Bieber wrote:
On Wed, 28 Feb 2018 23:40:41 +, John Pote
declaimed the following:
with open( fname, 'rt', encoding='iso-8859-1' ) as csvfile:
Pardon? Has the CSV module changed in the last year or so?
Python 3.6 docs say csv reader has to be gi
On 2018-03-01, Tim Chase wrote:
> On 2018-02-28 21:38, Dennis Lee Bieber wrote:
>> > with open( fname, 'rt', encoding='iso-8859-1' ) as csvfile:
>>
>> Pardon? Has the CSV module changed in the last year or so?
>>
>> Last time I read the documentation, it was recommended that
>> t
On 2/28/2018 8:35 PM, Tim Chase wrote:
While inelegant, I've "solved" this with a wrapper/generator
f = file(fname, …)
g = (line.replace('\0', '') for line in f)
reader = csv.reader(g, …)
for row in reader:
process(row)
I think this is elegant in that is cleans the input strea
On 2018-02-28 21:38, Dennis Lee Bieber wrote:
> > with open( fname, 'rt', encoding='iso-8859-1' ) as csvfile:
>
> Pardon? Has the CSV module changed in the last year or so?
>
> Last time I read the documentation, it was recommended that
> the file be opened in BINARY mode ("rb")
While inelegant, I've "solved" this with a wrapper/generator
f = file(fname, …)
g = (line.replace('\0', '') for line in f)
reader = csv.reader(g, …)
for row in reader:
process(row)
My actual use at $DAYJOB cleans out a few other things
too, particularly non-breaking spaces coming from
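Tim's recipe uses the Python 2 file() builtin; a Python 3 rendition of the same wrapper, with invented sample data and the non-breaking-space cleanup folded in as the $DAYJOB variant suggests:

```python
import csv
import io

# Invented stand-in for a file containing stray NUL bytes.
raw = io.StringIO('a,b\x00,c\nd,e\xa0,f\x00\n')

# Clean each line before the csv reader ever sees it: drop NULs and
# turn non-breaking spaces into ordinary spaces.
cleaned = (line.replace('\0', '').replace('\xa0', ' ') for line in raw)
rows = list(csv.reader(cleaned))
```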
Manuel Rincon writes:
[...]
> Type=0 MarketTime=11:18:26.549 Price=112.8300
> Type=0 MarketTime=11:18:28.792 Price=112.8300
[...]
>
> I would need to filter only the numeric part of all the columns.
I assume that by "numeric" you mean the value after Price=
line.split()[2].split('=')[1]
lin
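Since every token on those lines has the Key=value shape, a small dict parse is a bit sturdier than fixed indexing (the helper name parse_fields is mine):

```python
def parse_fields(line):
    """Split whitespace-separated 'Key=value' tokens into a dict."""
    return dict(part.split('=', 1) for part in line.split())

# Sample line from the post:
line = 'Type=0 MarketTime=11:18:26.549 Price=112.8300'
price = float(parse_fields(line)['Price'])
```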
One other thing besides the issues noted with filename - newline is set to
a space. It should be set to an empty string.
See: https://docs.python.org/3/library/csv.html#id3
Regards,
Nate
On Wed, Feb 22, 2017 at 3:52 PM, wrote:
> On Wednesday, February 22, 2017 at 5:55:47 PM UTC, Braxton Alfred
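Why newline='' matters: the csv module supplies its own \r\n line endings, and any extra newline translation by the text layer can corrupt quoted fields that contain newlines. A sketch using io.StringIO in place of open(filename, 'w', newline=''):

```python
import csv
import io

# io.StringIO(newline='') behaves like open(path, 'w', newline=''):
# no newline translation, so csv controls the line endings itself.
buf = io.StringIO(newline='')
writer = csv.writer(buf)
writer.writerow(['name', 'note'])
writer.writerow(['alice', 'line1\nline2'])
output = buf.getvalue()
```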
On Wednesday, February 22, 2017 at 5:55:47 PM UTC, Braxton Alfred wrote:
> Why does this not run? It is right out of the CSV file in the Standard Lib.
>
> Python ver 3.4.4, 64 bit.
>
> import csv
> """ READ EXCEL FILE """
> filename = 'c:\users\user\my documents\Braxton\Excel\personal\bp.csv'
>
Braxton Alfred writes:
> Why does this not run? It is right out of the CSV file in the Standard Lib.
>
>
>
>
> Python ver 3.4.4, 64 bit.
>
>
>
>
>
>
>
> import csv
> """ READ EXCEL FILE """
> filename = 'c:\users\user\my documents\Braxton\Excel\personal\bp.csv'
'\b' is backspace. A coupl
On 22-2-2017 18:26, Braxton Alfred wrote:
> Why does this not run? It is right out of the CSV file in the Standard Lib.
What does "not run" mean?
We can't help if you are not telling us the exact error message you're getting
(if any)
> filename = 'c:\users\user\my documents\Braxton\Excel\perso
On Thu, Feb 23, 2017 at 4:26 AM, Braxton Alfred wrote:
> filename = 'c:\users\user\my documents\Braxton\Excel\personal\bp.csv'
Use forward slashes instead.
Also, if something isn't working, post the actual output - probably an
exception. Don't assume that we can read your mind, even when we can.
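The escapes are the whole problem: in a plain string literal '\b' is a backspace and '\u' must start a Unicode escape, which is why Python 3 rejects the path outright. Either spelling below avoids it (the path is just the OP's, reused for illustration):

```python
# Raw string: backslashes are kept literally.
filename_raw = r'c:\users\user\my documents\Braxton\Excel\personal\bp.csv'

# Forward slashes: Windows APIs accept these too.
filename_fwd = 'c:/users/user/my documents/Braxton/Excel/personal/bp.csv'
```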
handa...@gmail.com wrote:
> I am trying to split a specific column of csv into multiple column and
> then appending the split values at the end of each row.
>
> `enter code here`
>
> import csv
> fOpen1=open('Meta_D1.txt')
>
> reader=csv.reader(fO
Add some print statements to see what is happening, especially after the for
elem in mylist1: statement
On 1/31/2015 10:45 PM, Frank Millman wrote:
If the opening balance is positive, it appears as '+0021.45'
If it is negative, it appears as '+0-21.45'
My advice is to get cash in payment.
:)
Emile
On Sun, Feb 1, 2015 at 12:45 AM, Frank Millman wrote:
> Is this a recognised format, and is there a standard way of parsing it? If
> not, I will have to special-case it, but I would prefer to avoid that if
> possible.
Doesn't look "standard" to me in any fashion. You shouldn't need to
special cas
On 01/31/2015 11:23 PM, Mark Lawrence wrote:
> On 01/02/2015 06:45, Frank Millman wrote:
>>
>>
>> Most transaction amounts are in the format '-0031.23' or '+0024.58'
>>
>> This can easily be parsed using decimal.Decimal().
>>
>> If the opening balance is positive, it appears as '+0021.4
On 01/02/2015 06:45, Frank Millman wrote:
Hi all
I downloaded some bank statements in CSV format with a view to providing an
automated bank reconciliation feature for my accounting software.
One of them shows the opening balance in an unusual format.
Most transaction amounts are in the format
On 2014-12-29 16:37, JC wrote:
> On Mon, 29 Dec 2014 10:32:03 -0600, Skip Montanaro wrote:
>
> > On Mon, Dec 29, 2014 at 10:11 AM, JC
> > wrote:
> >> Do I have to open the file again to get 'rdr' work again?
> >
> > Yes, but if you want the number of records, just operate on the
> > rows list, e
On 2014-12-29 16:11, JC wrote:
> On Mon, 29 Dec 2014 09:47:23 -0600, Skip Montanaro wrote:
>
> > On Mon, Dec 29, 2014 at 9:35 AM, JC wrote:
> >> How could I get the all the records?
> >
> > This should work:
> >
> > with open('x.csv','rb') as f:
> > rdr = csv.DictReader(f,delimiter=',')
> >
On Mon, 29 Dec 2014 10:32:03 -0600, Skip Montanaro wrote:
> On Mon, Dec 29, 2014 at 10:11 AM, JC wrote:
>> Do I have to open the file again to get 'rdr' work again?
>
> Yes, but if you want the number of records, just operate on the rows
> list, e.g. len(rows).
>
> Skip
Yes, I did that. But if
On Mon, Dec 29, 2014 at 10:11 AM, JC wrote:
> Do I have to open the file again to get 'rdr' work again?
Yes, but if you want the number of records, just operate on the rows
list, e.g. len(rows).
Skip
On Mon, 29 Dec 2014 09:47:23 -0600, Skip Montanaro wrote:
> On Mon, Dec 29, 2014 at 9:35 AM, JC wrote:
>> How could I get the all the records?
>
> This should work:
>
> with open('x.csv','rb') as f:
> rdr = csv.DictReader(f,delimiter=',')
> rows = list(rdr)
>
> You will be left with a
On Mon, Dec 29, 2014 at 9:35 AM, JC wrote:
> How could I get the all the records?
This should work:
with open('x.csv','rb') as f:
rdr = csv.DictReader(f,delimiter=',')
rows = list(rdr)
You will be left with a list of dictionaries, one dict per row, keyed
by the values in the first row:
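Materializing the reader is the key move: the reader is a one-shot iterator over the open file, but the resulting list can be re-read, counted, and indexed freely. A self-contained sketch (the sample data is invented):

```python
import csv
import io

f = io.StringIO('name,qty\nfoo,1\nbar,2\n')
rdr = csv.DictReader(f)      # delimiter=',' is already the default
rows = list(rdr)             # drain the one-shot reader once

count = len(rows)            # no need to reopen the file
first = rows[0]['name']      # field values are strings, keyed by header
```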
On Mon, Dec 29, 2014 at 3:14 AM, Mark Lawrence wrote:
>> Is anyone else interested in the patch? Should I create a tracker
>> issue and upload it?
>
> I'd raise a tracker issue so it's easier to find in the future.
http://bugs.python.org/issue23126
ChrisA
On 28/12/2014 15:38, Chris Angelico wrote:
On Mon, Dec 29, 2014 at 1:22 AM, Chris Angelico wrote:
I wonder how hard it would be to tinker at the C level and add a
__getattr__ style of hook...
You know what, it's not that hard. It looks largeish as there are four
places where NameError (not co
On Mon, Dec 29, 2014 at 2:38 AM, Chris Angelico wrote:
> It's just like __getattr__: if it returns something, it's as
> if the name pointed to that thing, otherwise it raises NameError.
To clarify: The C-level patch has nothing about imports. What it does
is add a hook at the point where NameErro
On Mon, Dec 29, 2014 at 1:22 AM, Chris Angelico wrote:
> I wonder how hard it would be to tinker at the C level and add a
> __getattr__ style of hook...
You know what, it's not that hard. It looks largeish as there are four
places where NameError (not counting UnboundLocalError, which I'm not
tou
On Mon, Dec 29, 2014 at 1:15 AM, Skip Montanaro
wrote:
>> We were discussing something along these lines a while ago, and I
>> never saw anything truly satisfactory - there's no easy way to handle
>> a missing name by returning a value (comparably to __getattr__), you
>> have to catch it and then
> We were discussing something along these lines a while ago, and I
> never saw anything truly satisfactory - there's no easy way to handle
> a missing name by returning a value (comparably to __getattr__), you
> have to catch it and then try to re-execute the failing code, which
> isn't perfect. H
On Mon, Dec 29, 2014 at 12:58 AM, Skip Montanaro
wrote:
> (Ignore the "autoloading" message. I use an autoloader in interactive
> mode which comes in handy when I forget to import a module, as I did
> here.)
We were discussing something along these lines a while ago, and I
never saw anything trul
Hmmm... Works for me.
% python
Python 2.7.6+ (2.7:db842f730432, May 9 2014, 23:53:26)
[GCC 4.2.1 Compatible Apple LLVM 5.1 (clang-503.0.40)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> with open("coconutBattery.csv", "rb") as f:
... r = csv.DictReader(
On Sun, 28 Dec 2014 14:41:55 +0200, Jussi Piitulainen wrote:
> Skip Montanaro writes:
>
>> > ValueError: I/O operation on closed file
>> >
>> > Here is my code in a Python shell -
>> >
>> > >>> with open('x.csv','rb') as f:
>> > ... r = csv.DictReader(f,delimiter=",")
>> > >>> r.fieldnames
>>
On Sun, 28 Dec 2014 06:19:58 -0600, Skip Montanaro wrote:
>> ValueError: I/O operation on closed file
>>
>> Here is my code in a Python shell -
>>
>> >>> with open('x.csv','rb') as f:
>> ... r = csv.DictReader(f,delimiter=",")
>> >>> r.fieldnames
>
> The file is only open during the context o
Skip Montanaro writes:
> > ValueError: I/O operation on closed file
> >
> > Here is my code in a Python shell -
> >
> > >>> with open('x.csv','rb') as f:
> > ... r = csv.DictReader(f,delimiter=",")
> > >>> r.fieldnames
>
> The file is only open during the context of the with statement.
> Inde
> ValueError: I/O operation on closed file
>
> Here is my code in a Python shell -
>
> >>> with open('x.csv','rb') as f:
> ... r = csv.DictReader(f,delimiter=",")
> >>> r.fieldnames
The file is only open during the context of the with statement. Indent the
last line to match the assignment to
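Concretely: everything that touches the file has to sit inside the with block, because the file is closed the moment the block ends (io.StringIO stands in for the real file here):

```python
import csv
import io

f = io.StringIO('h1,h2\na,b\n')
with f:
    r = csv.DictReader(f, delimiter=',')
    names = r.fieldnames     # file still open: this works
# After the block, f is closed; reading from r here would raise
# "ValueError: I/O operation on closed file".
```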
jayte wrote:
> On Tue, 16 Sep 2014 13:22:02 +0200, Peter Otten <__pete...@web.de> wrote:
>
>>jayte wrote:
>>
>>> On Mon, 15 Sep 2014 09:29:02 +0200, Peter Otten <__pete...@web.de>
>>> wrote:
>
> [...]
>
but you can read raw data
with numpy. Something like
with open(filename, "
jayte wrote:
> On Mon, 15 Sep 2014 09:29:02 +0200, Peter Otten <__pete...@web.de> wrote:
>
>>jayte wrote:
>>
>>> Sorry, I neglected to mention the values' significance. The MXP program
>>> uses the "distance estimate" algorithm in its fractal data generation.
>>> The values are thus, for each po
jayte wrote:
> Sorry, I neglected to mention the values' significance. The MXP program
> uses the "distance estimate" algorithm in its fractal data generation.
> The values are thus, for each point in a 1778 x 1000 image:
>
> Distance, (an extended double)
> Iterations, (a 16 bit int)
> zc_x
je...@newsguy.com writes:
> Hello. Back in the '80s, I wrote a fractal generator, which, over the years,
> I've modified/etc to run under Windows. I've been an Assembly Language
> programmer for decades. Recently, I decided to learn a new language,
> and decided on Python, and I just love it, a
On 14Sep2014 01:56, rusi wrote:
On Sunday, September 14, 2014 2:09:51 PM UTC+5:30, Cameron Simpson wrote:
If you have a nice regular CSV file, with say 3 values per row, you can go:
reader = csv.reader(f)
for row in reader:
a, b, c - row
I guess you meant: a, b, c = row
?
On 9/14/2014 12:56 PM, jayte wrote:
On Sun, 14 Sep 2014 03:02:12 -0400, Terry Reedy wrote:
On 9/13/2014 9:34 PM, je...@newsguy.com wrote:
[...]
First you need to think about (and document) what your numbers mean and
how they should be organized for analysis.
An example of the data:
1.850
On Mon, Sep 15, 2014 at 2:56 AM, jayte wrote:
> Anyway, thanks (everyone) for responding. I'm very anxious to
> try some data analysis (what I'm hoping, is to discover some new
> approaches / enhancements to coloring, as I'm not convinced we've
> seen all there is to see, from The Mandelbrot Set)
On Sunday, September 14, 2014 2:09:51 PM UTC+5:30, Cameron Simpson wrote:
> If you have a nice regular CSV file, with say 3 values per row, you can go:
>reader = csv.reader(f)
>for row in reader:
>a, b, c - row
I guess you meant: a, b, c = row
?
Also you will want to do appro
On 13Sep2014 21:34, je...@newsguy.com wrote:
Hello. Back in the '80s, I wrote a fractal generator, [...]
Anyway, something I thought would be interesting, would be to export
some data from my fractal program (I call it MXP), and write something
in Python and its various scientific data analysis
On 9/13/2014 9:34 PM, je...@newsguy.com wrote:
Hello. Back in the '80s, I wrote a fractal generator, which, over the years,
I've modified/etc to run under Windows. I've been an Assembly Language
programmer for decades. Recently, I decided to learn a new language,
and decided on Python, and I
je...@newsguy.com wrote:
>
> Hello. Back in the '80s, I wrote a fractal generator, which, over the years,
> I've modified/etc to run under Windows. I've been an Assembly Language
> programmer for decades. Recently, I decided to learn a new language,
> and decided on Python, and I just love it
On 21/03/2014 14:46, chip9m...@gmail.com wrote:
On Friday, March 21, 2014 2:39:37 PM UTC+1, Tim Golden wrote:
Without disturbing your existing code too much, you could wrap the
input_reader in a generator which skips malformed lines. That would look
something like this:
def unfussy_reader(
On 21/03/2014 14:46, chip9m...@gmail.com wrote:
> I am sorry I do not understand how to get to each row in this way.
>
> Please could you explain also this:
> If I define this function,
> how do I change my for loop to get each row?
Does this help?
#!python3
import csv
def unfussy_reader(csv_
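Tim Golden's generator is cut off above; a plausible completion of the idea (reconstructed from the description, not his exact code, using a NUL byte as the sample "malformed" line since that reliably makes the csv module raise csv.Error):

```python
import csv
import io

def unfussy_reader(reader):
    """Yield rows from a csv reader, silently skipping lines the
    csv module cannot parse."""
    while True:
        try:
            yield next(reader)
        except csv.Error:
            continue         # malformed line: skip it and carry on
        except StopIteration:
            return

data = io.StringIO('a,b\nbad\x00line\nc,d\n')
rows = list(unfussy_reader(csv.reader(data)))
```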
Ok, I have figured it out:
for i, row in enumerate(unfussy_reader(input_reader)):
# and I do something on each row
Sorry, it is my first "face to face" with generators!
Thank you very much!
Best,
Chip Munk
On Friday, March 21, 2014 2:39:37 PM UTC+1, Tim Golden wrote:
> Without disturbing your existing code too much, you could wrap the
>
> input_reader in a generator which skips malformed lines. That would look
>
> something like this:
>
>
>
> def unfussy_reader(reader):
>
> while True:
>
On 21/03/2014 13:29, chip9m...@gmail.com wrote:
> Hi all!
>
> I am reading from a huge csv file (> 20 Gb), so I have to read line by line:
>
> for i, row in enumerate(input_reader):
> # and I do something on each row
>
> Everything works fine until I get to a row with some strange symbols
Peter nailed it. Adding in the two lines of code to ensure I was just working
with *.csv files fixed the problem. Thanks to everyone for the help and
suggestions on best practices.
On 17/9/2013 22:28, Bryan Britten wrote:
> Dave -
>
> I can't print the output because there are close to 1,000,000 records. It
> would be extremely inefficient and resource intensive to look at every row.
Not if you made a sample directory with about 3 files, each containing
half a dozen lines.
On Wednesday, September 18, 2013 7:12:21 AM UTC+5:30, Bryan Britten wrote:
> Hey, gang, I've got a problem here that I'm sure a handful of you will know
> how to solve. I've got about 6 *.csv files that I am trying to open; change
> the header names (to get rid of spaces); add two new columns, wh
Bryan Britten wrote:
> Hey, gang, I've got a problem here that I'm sure a handful of you will
> know how to solve. I've got about 6 *.csv files that I am trying to open;
> change the header names (to get rid of spaces); add two new columns, which
> are just the results of a string.split() command;
Dave -
I can't print the output because there are close to 1,000,000 records. It would
be extremely inefficient and resource intensive to look at every row. Like I
said, when I take just one file and run the code over the first few records I
get what I'd expect to see. Here's an example(non-red
On 17/9/2013 21:42, Bryan Britten wrote:
> Hey, gang, I've got a problem here that I'm sure a handful of you will know
> how to solve. I've got about 6 *.csv files that I am trying to open; change
> the header names (to get rid of spaces); add two new columns, which are just
> the results of a
On 13 April 2013 16:30, Ana Dionísio wrote:
> It's still not working. I still have one column with all the data inside,
> like this:
>
> 2999;T3;3;1;1;Off;ON;OFF;ON;ON;ON;ON;Night;;
>
> How can I split this data in a way that if I want to print "T3" I would just
> do "print array[0][1]"?
Yo
Dear Ana,
your example data could be transformed into a matrix with
>>>import csv
>>>rows = csv.reader(open("your_data_file.csv"), delimiter=" ")
>>>array = [row for row in rows]
>>>array[0][3]
4
HTH
Paolo
On Friday, 12 April 2013 19:29:05 UTC+2, Ana Dionísio wrote:
> That only puts the data
On 13/04/2013 16:30, Ana Dionísio wrote:
It's still not working. I still have one column with all the data inside, like
this:
2999;T3;3;1;1;Off;ON;OFF;ON;ON;ON;ON;Night;;
How can I split this data in a way that if I want to print "T3" I would just do
"print array[0][1]"?
I said before
It's still not working. I still have one column with all the data inside, like
this:
2999;T3;3;1;1;Off;ON;OFF;ON;ON;ON;ON;Night;;
How can I split this data in a way that if I want to print "T3" I would just do
"print array[0][1]"?
Ana Dionísio writes:
> Hello!
>
> I have a CSV file with 20 rows and 12 columns and I need to store it
> as a matrix.
array=numpy.array([row for row in csv.reader(open('Cenarios.csv'))])
NB: i used "array=" as in your sample code, BUT
> I have a CSV file with 20 rows and 12 columns and I need to store it as a
> matrix.
If you can use pandas, the pandas.read_csv is what you want.
Keep the flattened data array others suggested, and then just split it like
this: *(replace `example_data`, `_array`, and `columns`)*
>>> example_data = range(15)
>>> split_array = lambda _array, colums: \
...     [_array[i:i + colums] for i in \
...     xrange(0, len(_array), colums)]
On 04/12/2013 01:29 PM, Ana Dionísio wrote:
That only puts the data in one column, I wanted to separate it.
For example:
data in csv file:
1 2 3 4 5
7 8 9 10 11
a b c d e
I wanted an array where I could pick an element in each position. In the case
above if I did print array[0][3] it would pi
On Apr 12, 10:12 pm, Ana Dionísio wrote:
> Hi, thanks for your answer! ;)
>
> Anyone has more suggestions?
My suggestions:
1. Tell us what was lacking in Mark's suggestion (to use loadtxt)
2. Read his postscript (for googlegroup posters).
[In case you did not notice your posts are arriving in dou
That only puts the data in one column, I wanted to separate it.
For example:
data in csv file:
1 2 3 4 5
7 8 9 10 11
a b c d e
I wanted an array where I could pick an element in each position. In the case
above if I did print array[0][3] it would pick 4
- Original Message -
> Hello!
>
> I have a CSV file with 20 rows and 12 columns and I need to store it
> as a matrix. I already created an array with zeros, but I don't know
> how to fill it with the data from the csv file. I have this script:
>
> import numpy
> from numpy import array
>
Hi, thanks for your answer! ;)
Anyone has more suggestions?
On 12/04/2013 15:22, Ana Dionísio wrote:
Hello!
I have a CSV file with 20 rows and 12 columns and I need to store it as a
matrix. I already created an array with zeros, but I don't know how to fill it
with the data from the csv file. I have this script:
import numpy
from numpy import array
fr
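There is no need to pre-fill an array of zeros: numpy.genfromtxt('Cenarios.csv', delimiter=',') reads the file in one call, and the csv module alone gives the same row/column shape. A numpy-free sketch with invented data in place of Cenarios.csv:

```python
import csv
import io

data = io.StringIO('1,2,3\n4,5,6\n')
# Build the matrix directly, converting each cell to float.
matrix = [[float(cell) for cell in row] for row in csv.reader(data)]
cell = matrix[1][2]          # row 1, column 2
```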
On Mar 20, 6:37 am, Roy Smith wrote:
>
> Another possibility is to use pandas (http://pandas.pydata.org/).
Thanks for the link -- looks interesting!
In article ,
Dave Angel wrote:
> But you should switch to using the csv module. And unless you have data
> that consists of millions of lines, you should just read the whole thing
> in once, and then extract the various columns by simple list
> manipulations and/or comprehensions.
Another p
On 03/19/2013 06:59 PM, C.T. wrote:
Hello,
Currently doing a project for class an I'm stuck. I have a csv file that I'm
suppose to extract some information from. I've created a function that ignores
the first six lines of the csv file and creates a list of values in a
particular column. Here
In article <1d7fcebe-8677-42ec-a53d-284214296...@googlegroups.com>,
"C.T." wrote:
> Currently doing a project for class an I'm stuck. I have a csv file that I'm
> suppose to extract some information from. I've created a function that
> ignores the first six lines of the csv file and creates a
On Tue, Dec 4, 2012 at 2:58 PM, Neil Cerutti wrote:
> On 2012-12-04, Anatoli Hristov wrote:
>> The issue is now solved I did:
>>
>> for x in mylist:
>> try:
>> sku.append(x[4])
>> except IndexError:
>> pass
>>
>> Thank you for your help
>
> Optionally:
>
> for x in mylist:
On 2012-12-04, Anatoli Hristov wrote:
> The issue is now solved I did:
>
> for x in mylist:
> try:
> sku.append(x[4])
> except IndexError:
> pass
>
> Thank you for your help
Optionally:
for x in mylist:
if len(x) > 4:
    sku.append(x[4])
But do you really need
The issue is now solved I did:
for x in mylist:
try:
sku.append(x[4])
except IndexError:
pass
Thank you for your help
Anatoli
On Tue, Dec 4, 2012 at 12:31 PM, Thomas Bach
wrote:
> Hi there,
>
> Please be a bit more precise…
>
> On Tue, Dec 04, 2012 at 12:00:05PM +0100, Anatoli Hristov wrote:
>>
>> The problem comes when I try to index the SKU array and the field is
>> empty
>
> Can you provide an example for that?
>
>> a
Hi there,
Please be a bit more precise…
On Tue, Dec 04, 2012 at 12:00:05PM +0100, Anatoli Hristov wrote:
>
> The problem comes when I try to index the SKU array and the field is
> empty
Can you provide an example for that?
> and it seems that there I have empty array, I wanted to ignore the
>
On 2/11/12 18:25:09, Sacha Rook wrote:
> I have a problem with a csv file from a supplier, so they export data to csv
> however the last column in the record is a description which is marked up
> with html.
>
> trying to automate the processing of this csv to upload elsewhere in a
> useable format
On 2012-11-02, Sacha Rook wrote:
> Hi
>
> I have a problem with a csv file from a supplier, so they
> export data to csv however the last column in the record is a
> description which is marked up with html.
>
> trying to automate the processing of this csv to upload
> elsewhere in a useable forma
On 4/26/2012 9:12 AM, Neil Cerutti wrote:
On 2012-04-26, Neil Cerutti wrote:
I made the following wrong assumption about the csv EBNF
recognized by Python (ignoring record seps):
record -> field {delim field}
Is that in the docs?
There's at least some csv "standard" documents requiring m
On 2012-04-26, Neil Cerutti wrote:
> I made the following wrong assumption about the csv EBNF
> recognized by Python (ignoring record seps):
>
> record -> field {delim field}
>
> There's at least some csv "standard" documents requiring my
> interprestion, e.g.,
>
> http://mastpoint.curzonnassau.co
On 2012-04-26, Tim Roberts wrote:
> Neil Cerutti wrote:
>
>>Is there an explanation or previous dicussion somewhere for the
>>following behavior? I haven't yet trolled the csv mailing list
>>archive, though that would probably be a good place to check.
>>
>>Python 3.2 (r32:88445, Feb 20 2011, 21:
Neil Cerutti wrote:
>Is there an explanation or previous dicussion somewhere for the
>following behavior? I haven't yet trolled the csv mailing list
>archive, though that would probably be a good place to check.
>
>Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit
>(Intel)] on win
On 2012-04-25, Kiuhnm wrote:
> On 4/25/2012 20:05, Neil Cerutti wrote:
>> Is there an explanation or previous dicussion somewhere for the
>> following behavior? I haven't yet trolled the csv mailing list
>> archive, though that would probably be a good place to check.
>>
>> Python 3.2 (r32:88445,
On 4/25/2012 20:05, Neil Cerutti wrote:
Is there an explanation or previous dicussion somewhere for the
following behavior? I haven't yet trolled the csv mailing list
archive, though that would probably be a good place to check.
Python 3.2 (r32:88445, Feb 20 2011, 21:29:02) [MSC v.1500 32 bit
(I
Jason Swails wrote:
> Hello,
>
> I have a question about a csv.writer instance. I have a utility that I
> want to write a full CSV file from lots of data, but due to performance
> (and memory) considerations, there's no way I can write the data
> sequentially. Therefore, I write the data in chun
On 24/10/2011 08:03, Chris Angelico wrote:
On Mon, Oct 24, 2011 at 4:18 PM, Jason Swails wrote:
my_csv = csv.writer(open('temp.1.csv', 'wb'))
Have you confirmed, or can you confirm, whether or not the file gets
closed automatically when the writer gets destructed? If so, all you
need to do is
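CPython's reference counting does usually close the file once the anonymous file object becomes unreachable, but that is an implementation detail, not a guarantee. Keeping an explicit handle in a with block makes the close deterministic (the path here is invented; 'wb' was the Python 2 convention, Python 3 wants text mode with newline=''):

```python
import csv
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), 'temp.1.csv')

# Don't bury the file object inside csv.writer(open(...)); hold it
# in a with block so it is flushed and closed when the block ends.
with open(path, 'w', newline='') as f:
    my_csv = csv.writer(f)
    my_csv.writerow(['chunk', 'value'])
    my_csv.writerow(['1', '42'])

with open(path, newline='') as f:
    rows = list(csv.reader(f))
```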