On 2020-11-30 03:59, Jason Friedman wrote:
csv.DictReader appears to be happy with a list of strings representing
the lines.
Try this:
contents = source_file.content()
for row in csv.DictReader(contents.decode('utf-8').splitlines()):
    print(row)
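For anyone landing here later, the whole round trip can be sketched like this (the byte string is assumed sample data standing in for what source_file.content() returns):

```python
import csv

# Assumed sample bytes, standing in for source_file.content()
contents = b"First Name,Last Name\nPeter,van Dam\nRahul,Gupta\n"

# DictReader accepts any iterable of strings, so decode + splitlines
# is all that's needed -- no temporary file.
rows = list(csv.DictReader(contents.decode("utf-8").splitlines()))
for row in rows:
    print(row)
```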
Works great, thank you! Question ... will this form potentially use l
On 2020-11-30 01:31, Jason Friedman wrote:
Using the Box API:
print(source_file.content())
returns
b'First Name,Last Name,Email Address,Company,Position,Connected On\nPeter,van
(and more data, not pasted here)
Trying to read it via:
with io.TextIOWrapper(source_file.content(), encoding='utf-8') as text_file:
    reader = csv.DictReader(text_file)
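For reference, io.TextIOWrapper wants a binary *file object* rather than raw bytes, so one way to make the attempted code work is to wrap the bytes in io.BytesIO first (the sample bytes here are assumed stand-ins for the real content):

```python
import csv
import io

# content() returns raw bytes; io.TextIOWrapper needs a binary file
# object, so wrap the bytes in io.BytesIO first. Sample bytes assumed.
contents = b"First Name,Last Name\nPeter,van Dam\n"

with io.TextIOWrapper(io.BytesIO(contents), encoding="utf-8") as text_file:
    reader = csv.DictReader(text_file)
    names = [row["First Name"] for row in reader]
print(names)
```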
On 15 Apr 2020 10:28, Peter Otten <__pete...@web.de> wrote:
sjeik_ap...@hotmail.com wrote:
>On 12 Apr 2020 12:30, Peter Otten <__pete...@web.de> wrote:
>
> Rahul Gupta wrote:
>
> >for line in enumerate(csv_reader):
> >print(line[csv_reader.fieldnames[1]])
>
> enumerate() generates (index, line) tuples that you need to unpack:
>
On 12 Apr 2020 12:30, Peter Otten <__pete...@web.de> wrote:
Rahul Gupta wrote:
> for line in enumerate(csv_reader):
> print(line[csv_reader.fieldnames[1]])
enumerate() generates (index, line) tuples that you need to unpack:
for index, line in enumerate(csv_reader):
    print(line[csv_reader.fieldnames[1]])
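The unpacking fix, sketched as a runnable snippet (the in-memory lines are an assumed stand-in for the real open file):

```python
import csv

# enumerate() wraps each row in an (index, row) tuple; unpack it
# before indexing the row by field name. Sample lines assumed.
lines = ["a,b,c", "1,2,3", "4,5,6"]
csv_reader = csv.DictReader(lines)

values = []
for index, line in enumerate(csv_reader):
    # 'line' is now the row dict; 'index' counts rows from 0
    values.append(line[csv_reader.fieldnames[1]])
print(values)  # ['2', '5']
```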
@Peter Thanks a lot
--
https://mail.python.org/mailman/listinfo/python-list
import csv

with open(r"D:\PHD\obranking\cell_split_demo.csv", mode='r') as csv_file:
    csv_reader = csv.DictReader(csv_file)
    print(csv_reader.fieldnames)
    col_count = len(csv_reader.fieldnames)
    print(col_count)
    # print(sum(1 for row in csv_file))
    row_count = 0
On Sunday, April 12, 2020 at 1:35:10 PM UTC+5:30, Rahul Gupta wrote:
the cells in my csv that I wrote look like this
['82#201#426#553#602#621#811#908#1289#1342#1401#1472#1593#1641#1794#2290#2341#2391#3023#3141#3227#3240#3525#3529#3690#3881#4406#4421#4497#4719#4722#4920#5053#5146#5433']
and the cells which are empty look like ['']
I have tried the following code
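A minimal sketch of pulling the values back out of such cells (the sample strings are assumed, and the split_cell helper is a hypothetical name, not from the thread):

```python
# Unpack a '#'-joined cell like the ones described above; an empty
# cell ('') maps to an empty list rather than [''].
def split_cell(cell):
    return [int(v) for v in cell.split("#")] if cell else []

print(split_cell("82#201#426#553"))  # [82, 201, 426, 553]
print(split_cell(""))                # []
```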
>Any idea why it might throw an exception on encountering a NULL in the
>input stream? It accepts all other 255 byte values. Was this behaviour
>intended? Perhaps a comment should be added to the docs.
>Thanks for your work on the module anyway.
The original module was like this - it comes about
>…the rationale behind the binary recommendation, but in 10+ years of using
>the csv module, I've not found any issues in using text/ascii mode
>that were solved by switching to using binary mode.
The CSV module was originally written by Dave Cole. I subsequently
made changes necessary to get it included
On 2018-03-01 23:57, John Pote wrote:
> On 01/03/2018 01:35, Tim Chase wrote:
> > While inelegant, I've "solved" this with a wrapper/generator
> >
> > f = file(fname, …)
> > g = (line.replace('\0', '') for line in f)
> I wondered about something like this but thought if there's a way of
> avoiding the extra step it would keep the execution speed up
On 01/03/2018 02:38, Dennis Lee Bieber wrote:
On Wed, 28 Feb 2018 23:40:41 +, John Pote
declaimed the following:
with open( fname, 'rt', encoding='iso-8859-1' ) as csvfile:
Pardon? Has the CSV module changed in the last year or so?
Python 3.6 docs say
On 2018-03-01, Tim Chase wrote:
> On 2018-02-28 21:38, Dennis Lee Bieber wrote:
>> > with open( fname, 'rt', encoding='iso-8859-1' ) as csvfile:
>>
>> Pardon? Has the CSV module changed in the last year or so?
>>
>> Last time I read the documentation, it was recommended that
On 2/28/2018 8:35 PM, Tim Chase wrote:
While inelegant, I've "solved" this with a wrapper/generator
f = file(fname, …)
g = (line.replace('\0', '') for line in f)
reader = csv.reader(g, …)
for row in reader:
    process(row)
I think this is elegant in that it cleans the input stream
On 2018-02-28 21:38, Dennis Lee Bieber wrote:
> > with open( fname, 'rt', encoding='iso-8859-1' ) as csvfile:
>
> Pardon? Has the CSV module changed in the last year or so?
>
> Last time I read the documentation, it was recommended that
While inelegant, I've "solved" this with a wrapper/generator
f = file(fname, …)
g = (line.replace('\0', '') for line in f)
reader = csv.reader(g, …)
for row in reader:
    process(row)
My actual use at $DAYJOB cleans out a few other things
too, particularly non-breaking spaces coming from
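A Python 3 sketch of the same wrapper/generator idea (file() is Python 2; the corrupted sample text and variable names here are assumed):

```python
import csv
import io

# Strip NUL bytes from each line before csv.reader ever sees them.
# io.StringIO stands in for open(fname, newline='').
corrupted = "a,b\x00,c\nd,e\x00\x00,f\n"
f = io.StringIO(corrupted)
g = (line.replace("\0", "") for line in f)
rows = [row for row in csv.reader(g)]
print(rows)
```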
I have a csv data file that may become corrupted (already happened)
resulting in a NULL byte appearing in the file. The NULL byte causes an
_csv.Error exception.
I'd rather like the csv reader to return csv lines as best it can and
subsequent processing of each comma separated field deal with
dialect control,
I'd be a touch surprised, but, it is possible that your other csv
readers and writers are more finicky.
Did you see the parameters that are available to you for tuning how
the csv module turns your csv data into records?
https://docs.python.org/3/library/csv.html#dialects-
Thank you Skip, worked great. And thank you Tim for tidying things up!
On 2013-05-16 14:08, Skip Montanaro wrote:
> > So rather than
> >>a
> >>b
> >>c
> >>d
> >>e
> >>f
> > I would get [a, b, c, d, e, f]
>
> all_items = []
> for row in reader:
>     all_items.append(row[0])
And following up here, this could be tidily rewritten as
all_items = [row[0] for row in reader]
On 2013-05-16 14:07, Skip Montanaro wrote:
> > len(reader) gives me an error.
>
> Apologies. len(list(reader)) should work. Of course, you'll wind
> up loading the entire CSV file into memory. You might want to just
> count row-by-row:
>
> n = 0
> for row in reader:
>     n += 1
which can nic
> So rather than
>>a
>>b
>>c
>>d
>>e
>>f
> I would get [a, b, c, d, e, f]
all_items = []
for row in reader:
    all_items.append(row[0])
Skip
> len(reader) gives me an error.
Apologies. len(list(reader)) should work. Of course, you'll wind up
loading the entire CSV file into memory. You might want to just count
row-by-row:
n = 0
for row in reader:
    n += 1
Skip
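Both counting styles from the replies above, side by side as a runnable sketch (the six-line sample stands in for the real file):

```python
import csv
import io

data = "a\nb\nc\nd\ne\nf\n"

# Option 1: materialize the reader -- loads every row into memory.
n_list = len(list(csv.reader(io.StringIO(data))))

# Option 2: count row by row -- constant memory.
n = 0
for row in csv.reader(io.StringIO(data)):
    n += 1

print(n_list, n)  # 6 6
```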
I guess another way to accomplish this would be: is there any way that I
can turn the returned value for (column) into one list?
So rather than
>a
>b
>c
>d
>e
>f
I would get [a, b, c, d, e, f]
On Thursday, May 16, 2013 2:40:08 PM UTC-4, Skip Montanaro wrote:
> Perhaps you want len(reader) instead? Or a counter which increments for
> every row read which has an item in column A?
>
>
>
> Skip
len(reader) gives me an error.
I tried a counter, but unfortunately due to the simplicity
Perhaps you want len(reader) instead? Or a counter which increments for
every row read which has an item in column A?
Skip
I'm using the csv module to get information from a csv file. I have items
listed in Column A. I want to know how many items are listed in Column A.
import csv
with open('test.csv', 'r') as f:
    reader = csv.reader(f)
    for column in reader:
        column = (column
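One way to get the single list the question asks for, sketched with assumed sample data; the count then falls out as len() of that list:

```python
import csv
import io

# Collect column A into one list; sample file contents assumed.
data = "a\nb\nc\nd\ne\nf\n"
reader = csv.reader(io.StringIO(data))

all_items = [row[0] for row in reader if row]  # skip any blank lines
print(all_items, len(all_items))
```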
On Wed, Feb 20, 2013 at 12:04 PM, Dave Angel wrote:
> On 02/20/2013 05:38 AM, inshu chauhan wrote:
>
>> On Wed, Feb 20, 2013 at 11:26 AM, Roland Koebler
>> wrote:
>>
>>
>>>
>>>
>>>
>>> If you only want to concat the files, I would use some shell-tools,
>>> like "cat" on Linux or "copy" on Windows, so
On 02/20/2013 06:01 AM, inshu chauhan wrote:
For simply concatenating the files, I tried the following code:
import glob
with open(r"C:\Users\inshu.chauhan\Desktop\test2.arff", "w") as w:
    print w
    for f in glob.glob(r"C:\Users\inshu.chauhan\Desktop\ForModel_600\*.arff"):
You fo
On 02/20/2013 05:38 AM, inshu chauhan wrote:
On Wed, Feb 20, 2013 at 11:26 AM, Roland Koebler wrote:
If you only want to concat the files, I would use some shell-tools,
like "cat" on Linux or "copy" on Windows, so
copy C:\Users\inshu.chauhan\Desktop\ForModel_600\*.arff
C:\Users\inshu.chauh
inshu chauhan wrote:
> Yes I just want to concat the files , not parse/mangle the files. How
> can
> i simply read all files in a folder in my computer and write them into a
> single file ? just by 'printf ' is it possible ?
Assuming the files' last line always ends with a newline and the files
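A plain-Python sketch of the concatenation (no csv module involved); concat_files is a hypothetical helper name and the commented paths are placeholders:

```python
import glob
import shutil

# Concatenate all files matching a glob pattern into one output file,
# in sorted name order, streaming each file with copyfileobj.
def concat_files(pattern, out_path):
    with open(out_path, "w") as w:
        for name in sorted(glob.glob(pattern)):
            with open(name) as f:
                shutil.copyfileobj(f, w)

# e.g. concat_files(r"...\ForModel_600\*.arff", r"...\test2.arff")
```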
> Because there's a "split()" missing in your code. You currently tell the
> CSV-writer to write the columns 2,9,9, , , ,4,4,6, , , ,2 as
> space-separated CSV. So, try something like
> rows = [r.split() for r in open(f, "r").readlines()]
>
> > Or can I merge these text files without using csv module , directly in
> > python ?
> If you don't need to pars
…why there is space between the attribute of first column in
reading and there is space between every row too..
Or can I merge these text files without using csv module , directly in
python ?
Looking forward to your suggestions.
Thanks in advance !!!
On Aug 20, 9:10 am, MRAB wrote:
> JonathanB wrote:
On Aug 13, 3:52 pm, alex23 wrote:
> On Aug 13, 4:22 pm, JonathanB wrote:
>
> > writer = csv.writer(open(output, 'w'), dialect='excel')
>
> I think - not able to test atm - that if you open the file in 'wb'
> mode instead it should be fine.
changed that to
writer = csv.writer(open(output, 'wb'), dialect='excel')
On Aug 13, 4:22 pm, JonathanB wrote:
> writer = csv.writer(open(output, 'w'), dialect='excel')
I think - not able to test atm - that if you open the file in 'wb'
mode instead it should be fine.
The subject basically says it all, here's the code that's producing
the csv file:
def write2CSV(self, output):
    writer = csv.writer(open(output, 'w'), dialect='excel')
    writer.writerow(['Name', 'Description', 'Due Date', 'Subject',
                    'Grade', 'Maximum Grade', se
On Jul 2, 6:04 am, MRAB wrote:
> The csv module imports from _csv, which suggests to me that there's code
> written in C which thinks that the "\x00" is a NUL terminator, so it's a
> bug, although it's very unusual to want to write characters like "\
string "\x00" has a length of 1. When I use the csv module to write
that to a file
csv_f = csv.writer(file("test.csv","wb"),delimiter="|")
csv_f.writerow(["\x00","zz"])
The output file looks like this:
|zz
Is it possible to force
Carlos Grohmann wrote:
>
>Hi all, I'm using csv to read text files, and its working fine, except
>in two cases:
>
>- when there is only one line of text (data) in the file
>- when there is a blank line after the last data line
>dialect = csv.Sniffer().sniff(sample) # Check for file format wit
On 2010-06-03, Carlos Grohmann wrote:
>> Use:
>> csvfile = csv.reader(csvfile, dialect=dialect)
>> dialect is a keyword argument.
>
> thanks for pointing that out. It stopped the errors when there's
> only one data line, but it still can't get the values for that
> line
Is it possible your data
Thanks for your prompt response, Neil.
> That data doesn't appear to be csv worthy. Why not use str.split
> or str.partition?
Well, I should have said that this is one kind of data. The function
is part of a larger app, and there is the possibility that someone
uses headers in the data files, or
On 2010-06-03, Neil Cerutti wrote:
> Do you really need to use the Sniffer? You'll probably be better
> off...
...defining your own dialect based on what you know to be the
file format.
--
Neil Cerutti
On 2010-06-03, Carlos Grohmann wrote:
> Hi all, I'm using csv to read text files, and its working fine, except
> in two cases:
>
> - when there is only one line of text (data) in the file
> - when there is a blank line after the last data line
>
> this is the kind of data:
>
> 45 67 89
> 23 45 06
Hi all, I'm using csv to read text files, and its working fine, except
in two cases:
- when there is only one line of text (data) in the file
- when there is a blank line after the last data line
this is the kind of data:
45 67 89
23 45 06
12 34 67
...
and this is the function:
def getData(pa
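Following the advice in this thread, a sketch that declares the delimiter up front instead of sniffing (the sample data mirrors the rows quoted above; everything else is assumed):

```python
import csv
import io

# For known space-separated data, state the delimiter yourself
# rather than relying on csv.Sniffer.
data = "45 67 89\n23 45 06\n12 34 67\n"
rows = list(csv.reader(io.StringIO(data), delimiter=" "))
print(rows)

# A single data line no longer trips anything either:
single = list(csv.reader(io.StringIO("45 67 89\n"), delimiter=" "))
print(single)
```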
quote:
> >>> d = csv.Sniffer().sniff("1,2,3")
> >>> def eq(a, b, attributes=[name for name in dir(d) if not
> ...         name.startswith("_")]):
> ...     return all(getattr(a, n, None) == getattr(b, n, None) for n in
> ...         attributes)
Only change I made is substituting "dir(csv.excel)" or "dir(csv.Dialect)"
for
quote:
> An implementation for the lazy
>
> >>> import csv
> >>> d = csv.Sniffer().sniff("1,2,3")
> >>> def eq(a, b, attributes=[name for name in dir(d) if not
> ...         name.startswith("_")]):
> ...     return all(getattr(a, n, None) == getattr(b, n, None) for n in
> ...         attributes)
> ...
Wow, this is awesome.
Malte Dik wrote:
Hi out there!
I want to put some intelligence into a csv reading script and in order to do
so I want to compare possible different dialects I collect with some random
d = csv.Sniffer().sniff("1,2,3,4"),
because the csv is kinda dirty.
Now sniff() returns a class object and those aren't comparable.
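A tidied sketch of the attribute-comparison trick from this thread (dialects_equal is a hypothetical name; it compares the public attributes of csv.Dialect, as one reply suggests):

```python
import csv

# Sniffed Dialect classes aren't directly comparable, so compare
# their public attributes (delimiter, quotechar, quoting, ...).
def dialects_equal(a, b):
    names = [n for n in dir(csv.Dialect) if not n.startswith("_")]
    return all(getattr(a, n, None) == getattr(b, n, None) for n in names)

d1 = csv.Sniffer().sniff("1,2,3,4")
d2 = csv.Sniffer().sniff("5,6,7,8")
print(dialects_equal(d1, d2))
```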
On Aug 25, 8:49 am, Peter Otten <__pete...@web.de> wrote:
> JKPeck wrote:
JKPeck wrote:
> On Aug 24, 10:43 pm, John Yeung wrote:
>> On Aug 24, 5:00 pm, Peter Otten <__pete...@web.de> wrote:
>>
>> > If I understand you correctly the csv.writer already does
>> > what you want:
>>
>> > >>> w.writerow([1,None,2])
>> > 1,,2
>>
>> > just sequential commas, but that is the sp
On Aug 24, 10:43 pm, John Yeung wrote:
> On Aug 24, 5:00 pm, Peter Otten <__pete...@web.de> wrote:
>
> > If I understand you correctly the csv.writer already does
> > what you want:
>
> > >>> w.writerow([1,None,2])
> > 1,,2
>
> > just sequential commas, but that is the special treatment.
> > Witho
On Aug 24, 5:00 pm, Peter Otten <__pete...@web.de> wrote:
> If I understand you correctly the csv.writer already does
> what you want:
>
> >>> w.writerow([1,None,2])
> 1,,2
>
> just sequential commas, but that is the special treatment.
> Without it the None value would be converted to a string
> an
On Aug 24, 1:30 pm, JKPeck wrote:
> I'm trying to get the csv module (Python 2.6) to write data
> records like Excel. The excel dialect isn't doing it. The
> problem is in writing None values. I want them to result
> in just sequential commas - ,, but csv treats None s
On Aug 24, 11:30 am, JKPeck wrote:
> I'm trying to get the csv module (Python 2.6) to write data records
> like Excel. The excel dialect isn't doing it. The problem is in
> writing None values. I want them to result in just sequential commas
> - ,, but csv treats None
I'm trying to get the csv module (Python 2.6) to write data records
like Excel. The excel dialect isn't doing it. The problem is in
writing None values. I want them to result in just sequential commas
- ,, but csv treats None specially, as the doc says,
"To make it as easy
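For what it's worth, the excel dialect can be checked directly; this sketch suggests None already comes out as an empty field, i.e. the sequential commas being asked for:

```python
import csv
import io

# Write a row containing None with the excel dialect and inspect
# the raw output: the None becomes an empty field.
buf = io.StringIO()
writer = csv.writer(buf, dialect="excel")
writer.writerow([1, None, 2])
print(repr(buf.getvalue()))  # '1,,2\r\n'
```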
On 3 Feb, 04:27, Tim Roberts wrote:
vsoler wrote:
>
>I'm still interested in learning python techniques. Are there any
>other modules (standard or complementary) that I can use in my
>education?
Are you serious about this? Are you not aware that virtually ALL of the
Python standard modules are written in Python, and are included i
On 2 feb, 21:51, Jon Clements wrote:
> On 2 Feb, 20:46, vsoler wrote:
>
> > Hi you all,
>
> > I just discovered the csv module here in the comp.lang.python group.
>
> > I have found its manual, which is publicly available, but since I am
> > still a newby, lea
I just discovered the csv module here in the comp.lang.python
group.
It certainly makes life easier.
I have found its manual, which is publicly available, but
since I am still a newby, learning techniques, I was wondering
if the source code for this module is available.
Is it possible to
Hi you all,
I just discovered the csv module here in the comp.lang.python group.
I have found its manual, which is publicly available, but since I am
still a newby, learning techniques, I was wondering if the source code
for this module is available.
Is it possible to have a look at it?
Thanks
those silly quotes. Note
> in the second line that a field is 'tape 2"' , ie two inches: there is
> a double quote in the string.
>
> When I use csv module to read this:
>
> import sys
> outf=open(sys.argv[1]+'.tsv','wt')
> import csv
> r
On Feb 17, 8:09 pm, Christopher Barrington-Leigh
<[EMAIL PROTECTED]> wrote:
> Here is a file "test.csv"
> number,name,description,value
> 1,"wer","tape 2"",5
> 1,vvv,"hoohaa",2
>
> I want to convert it to tab-separated without those silly quotes. Note
> in the second line that a field is 'tape 2"'
>Here is a file "test.csv"
>number,name,description,value
>1,"wer","tape 2"",5
>1,vvv,"hoohaa",2
>
>I want to convert it to tab-separated without those silly quotes. Note
>in the second line that a field is 'tape 2"' , ie two inches: there is
>a double quote in the string.
The input format is ambiguous.
Here is a file "test.csv"
number,name,description,value
1,"wer","tape 2"",5
1,vvv,"hoohaa",2
I want to convert it to tab-separated without those silly quotes. Note
in the second line that a field is 'tape 2"' , ie two inches: there is
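A sketch of the conversion. Note the sample line 1,"wer","tape 2"",5 is not well-formed CSV (the inner quote would need to be doubled *inside* the quoted field), so a corrected variant with the same intent is assumed here:

```python
import csv
import io

# Read the (corrected) CSV and re-emit it tab-separated with no
# quoting at all, via a plain join.
src = io.StringIO('number,name,description,value\n'
                  '1,"wer","tape 2""",5\n'
                  '1,vvv,"hoohaa",2\n')

lines = ["\t".join(row) for row in csv.reader(src)]
print("\n".join(lines))
```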
On Dec 28, 9:43 pm, John Machin <[EMAIL PROTECTED]> wrote:
> On Dec 29, 1:12 pm, t_rectenwald <[EMAIL PROTECTED]> wrote:
On Dec 29, 1:12 pm, t_rectenwald <[EMAIL PROTECTED]> wrote:
> I've noticed an oddity when running a program, using the csv module,
> within IDLE. I'm new to Python so am confused by what is happening.
> Here is what I'm doing:
>
> 1) Open the IDLE Shell.
>
On Fri, 28 Dec 2007 18:12:58 -0800, t_rectenwald wrote:
> Within the program, the snippet where I use the csv module is below:
>
> ==
> csvfile = open('foo.csv', 'w')
> writer = csv.writer(csvfile)
>
> for r
I've noticed an oddity when running a program, using the csv module,
within IDLE. I'm new to Python so am confused by what is happening.
Here is what I'm doing:
1) Open the IDLE Shell.
2) Select File | Open...
3) Choose my file, foo.py, opening it in a window.
4) From that wind
On Wed, 2007-12-12 at 07:04 -0800, massimo s. wrote:
> If by "thoroughly" you mean "it actually describes technically what it
> is and does but not how to really do things", yes, it is thoroughly
> documented.
> The examples section is a joke.
Actually I rare
On 2007-12-12, John Machin <[EMAIL PROTECTED]> wrote:
> On Dec 13, 12:58 am, Neil Cerutti <[EMAIL PROTECTED]> wrote:
>> On 2007-12-12, John Machin <[EMAIL PROTECTED]> wrote:
>>
>> >> It's clear that I am thinking to completely different usages
>> >> for CSV than what most people in this thread. I u
On Dec 13, 12:58 am, Neil Cerutti <[EMAIL PROTECTED]> wrote:
> On 2007-12-12, John Machin <[EMAIL PROTECTED]> wrote:
>
> >> It's clear that I am thinking to completely different usages
> >> for CSV than what most people in this thread. I use csv to
> >> export and import numerical data columns to a
On Wed, Dec 12, 2007 at 11:02:04AM -0600, [EMAIL PROTECTED] wrote regarding Re:
Is anyone happy with csv module?:
>
> J. Clifford Dyer <[EMAIL PROTECTED]> wrote:
> > But the software you are dealing with probably doesn't actually
> > need spreadsheets. It just need
> When I have a choice, I use simple tab-delimited text files. The
> usually irrelevant limitation is the inability to embed tabs or
> newlines in fields. The relevant advantage is the simplicity.
That is very unnecessary. You can have your tabs and not eat them, too:
#!/usr/bin/python
"""
EXAMPLE USAGE OF PYTHON'S CSV.DICTREADER FOR PEOPLE NEW TO PYTHON AND/OR
CSV.DICTREADER
Python - Ba
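The point being illustrated above, as a compact sketch (the two-column tab-delimited sample is assumed):

```python
import csv
import io

# csv.DictReader reads tab-delimited text just as happily as
# comma-separated -- pass delimiter='\t'.
data = "name\tvalue\nalpha\t1\nbeta\t2\n"

rows = list(csv.DictReader(io.StringIO(data), delimiter="\t"))
for row in rows:
    print(row["name"], row["value"])
```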
On Wed, Dec 12, 2007 at 10:08:38AM -0600, [EMAIL PROTECTED] wrote regarding Re:
Is anyone happy with csv module?:
>
> FWIW, CSV is a much more generic format for spreadsheets than XLS.
> For example, I deal almost exclusively in CSV files for simialr situations
> as the OP because
On 2007-12-12, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
> John Machin <[EMAIL PROTECTED]> wrote:
>> For that purpose, CSV files are the utter pox and then some.
>> Consider using xlrd and xlwt (nee pyexcelerator) to read
>> (resp. write) XLS files directly.
>
> FWIW, CSV is a much more generic
massimo s. wrote:
> As for people advicing xlrd/xlrwt: thanks for the useful tip, I didn't
> know about it and looks cool, but in this case no way I'm throwing
> another dependency to the poor users of my software. Csv module was
> good because was built-in.
The trouble wit
John Machin wrote:
> For that purpose, CSV files are the utter pox and then some. Consider
> using xlrd and xlwt (nee pyexcelerator) to read (resp. write) XLS
> files directly.
xlwt is unreleased (though quite stable, they say) at the moment, so the
links are:
easy_install xlrd
svn co https://s
On Dec 12, 2:58 pm, Neil Cerutti <[EMAIL PROTECTED]> wrote:
> On 2007-12-11, massimo s. <[EMAIL PROTECTED]> wrote:
>
> > Hi,
>
> > I'm struggling to use the python in-built csv module, and I
> > must say I'm less than satisfied. Apart from being r
On 2007-12-12, John Machin <[EMAIL PROTECTED]> wrote:
>> It's clear that I am thinking to completely different usages
>> for CSV than what most people in this thread. I use csv to
>> export and import numerical data columns to and from
>> spreadsheets.
>
> For that purpose, CSV files are the utter
On 2007-12-11, massimo s. <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I'm struggling to use the python in-built csv module, and I
> must say I'm less than satisfied. Apart from being rather
> poorly documented, I find it especially cumbersome to use, and
> also rather li
29 am, Bruno Desthuilliers
<[EMAIL PROTECTED]> wrote:
> > I'm just trying to use the CSV module
> > and I mostly can get it working. I just think its interface is much
> > less than perfect. I'd like something I can, say, give a whole
> > dictionary in input a