As it happened, I ran into the exact same scenario as Joachim just the
other day: the external provider of my CSV had added some new columns.
In my case this manifested itself as an error that an integer field was
not an integer (because the new columns were added in the middle).

Reading through this whole thread leaves me with the feeling that no matter
what Sven adds, there is still a risk of error. Nevertheless, my suggestion
would be to add functionality to #skipHeaders, or to make a sister method:
#assertAndSkipHeaders: numberOfColumns onFailDo: aBlock, given the actual
number of headers.
That would give me a way to handle the error up front.
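To illustrate what I have in mind, here is a rough sketch of how such a
method might look. The selector is of course hypothetical, and I am
assuming the reader can fetch the header line as a collection of fields
(shown here as a #readHeader helper; the actual NeoCSVReader internals may
well differ):

  NeoCSVReader >> assertAndSkipHeaders: numberOfColumns onFailDo: aBlock
      "Read and discard the header line, but first check that it has
       exactly numberOfColumns fields; otherwise hand the actual
       headers to aBlock so the caller can bail out early.
       Hypothetical sketch, not part of NeoCSV."
      | headers |
      headers := self readHeader.
      headers size = numberOfColumns
          ifFalse: [ aBlock value: headers ]

Then at the call site I could fail fast before any integer-conversion
errors show up mid-file, something like:

  reader
      assertAndSkipHeaders: 5
      onFailDo: [ :headers |
          self error: 'Unexpected column count: ', headers size printString ]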

This will only be interesting if your data has headers, of course.

Thanks for NeoCSV which I use all the time!

Best,

Kasper
