On Fri, 2007-09-21 at 08:42 +0200, Per Jessen wrote:
> Isn't that just an ALTER ?
It's a little more complex than that, as I have to actually create WKB
from the data, so no, not just an ALTER unfortunately.
--Paul
All Email originating from UWC is covered by disclaimer
http://www.uwc.ac.za/
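To illustrate why it is more than an ALTER: the new geometry column has to
be computed from the existing data for every one of the 6.5 million rows.
A rough PostGIS-flavoured sketch of the idea (the table and column names
are made up, and Paul's actual approach is a C function inside the RDBMS so
that it stays portable across databases):

<?php
// Hypothetical schema: table `geonames` with `longitude`/`latitude` columns.
$conn = pg_connect('host=localhost dbname=test user=me');

// Adding the column is the easy part...
pg_query($conn, 'ALTER TABLE geonames ADD COLUMN geom geometry');

// ...the real work is computing a geometry (stored as WKB) for every row.
pg_query($conn, "UPDATE geonames
                 SET geom = ST_SetSRID(ST_MakePoint(longitude, latitude), 4326)");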
As for MySQL, if the table that you are inserting into has any indexes at
all, then each time your insert/update completes MySQL will re-index the
table.
That'll happen for any database (I don't know that it really re-indexes;
rather, it has to update the index).
Therefore, if you can, disable or drop the indexes before the bulk load and
rebuild them once it has finished.
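A minimal sketch of that idea, assuming a MyISAM table named `geonames` and
a PDO connection (both placeholders); note that DISABLE KEYS only postpones
maintenance of non-unique indexes and is ignored by InnoDB:

<?php
// Placeholder connection and data; adjust to the real schema.
$pdo  = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
$rows = array(array(1, 'Cape Town'), array(2, 'Bellville'));

// Stop MySQL from updating the non-unique indexes on every insert.
$pdo->exec('ALTER TABLE geonames DISABLE KEYS');

$stmt = $pdo->prepare('INSERT INTO geonames (geonameid, name) VALUES (?, ?)');
foreach ($rows as $row) {
    $stmt->execute($row);
}

// Rebuild the indexes once, after the bulk load.
$pdo->exec('ALTER TABLE geonames ENABLE KEYS');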
Paul Scott wrote:
> Thanks to all for the suggestions - I now have to figure out the best
> way to manipulate every single record in that table (now over 6.5
> million rows) to add in a field (RDBMS function in C - so much
> easier)...
Isn't that just an ALTER ?
/Per Jessen, Zürich
On Fri, 2007-09-21 at 08:34 +0200, Paul Scott wrote:
> Thanks to all for the suggestions - I now have to figure out the best
> way to manipulate every single record in that table (now over 6.5
> million rows) to add in a field (RDBMS function in C - so much
> easier)...
>
Oh, and by the way, add
On Fri, 2007-09-21 at 15:51 +1000, Chris wrote:
> (Personally I'd use perl over php for processing files that large but
> that may not be an option).
Thanks for all of the suggestions; I seem to have it working quite well
now, although the client has just contacted me and said that they had
"mad
On Thu, 2007-09-20 at 09:54 -0300, Martin Marques wrote:
> If not, you should just use the COPY command of PostgreSQL (you are
> using PostgreSQL if I remember correctly) or simply do a bash script
> using psql and the \copy command.
>
Unfortunately, this has to work on all supported RDBMSs -
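For the PostgreSQL case specifically, the COPY idea can also be driven from
PHP so that the 750MB file is streamed line by line rather than read into
memory first. A sketch with placeholder table, column and path names:

<?php
// Placeholders: table `geonames`, four columns, tab-delimited dump file.
$conn = pg_connect('host=localhost dbname=test user=me');

pg_query($conn, 'COPY geonames (geonameid, name, latitude, longitude) FROM STDIN');

$fh = fopen('/path/to/dump.tsv', 'r');
while (($line = fgets($fh)) !== false) {
    pg_put_line($conn, $line);      // each line is already tab-delimited
}
fclose($fh);

pg_put_line($conn, "\\.\n");        // COPY end-of-data marker
pg_end_copy($conn);
pg_close($conn);

From the shell, psql -c "\copy geonames from dump.tsv" does much the same
job, but as Paul says it only helps on the PostgreSQL backend.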
Paul Scott wrote:
> Code:
> [SNIP]
> $row = 1;
> $handle = fopen($csvfile, "r");
> while (($data = fgetcsv($handle, 1000, "\t")) !== FALSE) {
>     $num = count($data);
>     $row++;
>     $insarr = array('userid'    => $userid,
>                     'geonameid' => $data[0],
[snip]
> $thi
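For what it's worth, a guess at how the snipped loop might be finished off,
batching the inserts in transactions so memory use and index overhead stay
flat. Everything beyond the quoted lines ($csvfile, $userid, the table and
column names, the PDO connection) is a placeholder:

<?php
$csvfile = '/path/to/dump.tsv';   // placeholder path
$userid  = 1;                     // placeholder user id

$pdo  = new PDO('pgsql:host=localhost;dbname=test', 'user', 'pass');
$stmt = $pdo->prepare('INSERT INTO geonames (userid, geonameid, name) VALUES (?, ?, ?)');

$row    = 0;
$handle = fopen($csvfile, 'r');

$pdo->beginTransaction();
while (($data = fgetcsv($handle, 1000, "\t")) !== false) {
    $row++;
    $stmt->execute(array($userid, $data[0], $data[1]));

    if ($row % 5000 === 0) {      // commit in chunks rather than per row
        $pdo->commit();
        $pdo->beginTransaction();
    }
}
$pdo->commit();
fclose($handle);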
On Thu, 2007-09-20 at 08:03 -0400, Robert Cummings wrote:
> Post some samples of the data you are parsing and a sample of the code
> you've written to parse them. If you're parsing 750 megs of data then
> it's quite likely you could squeeze some performance out of the parse
> routines themselves.
On Thu, 2007-09-20 at 12:50 +0100, Edward Kay wrote:
> In addition to Martin's good suggestions (and also assuming you're running
> php-cli via cron), you could use nice to stop it consuming too many
> resources:
>
This is the current approach that I am taking, was just really wondering
if there was a better way of doing it.
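If the script is started from cron, the crontab entry can simply be
prefixed with nice (for example: nice -n 19 /usr/bin/php import.php, where
the script name is a placeholder). The priority can also be dropped from
inside the script with proc_nice(), which is Unix-only and depends on how
PHP was built, so treat this as a sketch:

<?php
// Lower the process priority so the import yields to interactive work.
if (function_exists('proc_nice')) {
    proc_nice(19);   // 19 = lowest priority on Linux
}

// ... the rest of the import script runs as usual ...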
Paul Scott wrote:
I have a very large text file that gets dumped into a directory every
now and then. It is typically around 750MB in size, at least, and my
question is:
What is the best method to parse this thing and insert the data into a
postgres db?
I have tried using file(), fget*() and some
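On the reading side, the difference between file() and the fget*() family
matters a great deal at this size: file() pulls every line of the ~750MB
dump into one array, while fgets()/fgetcsv() keep only the current line in
memory. A tiny sketch (the path is a placeholder):

<?php
$path = '/path/to/dump.txt';   // placeholder

// Avoid at this size: $lines = file($path);  // whole file held in memory

$fh = fopen($path, 'r');
while (($line = fgets($fh)) !== false) {
    // parse and insert $line here, one line at a time
}
fclose($fh);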