Hi Jon,
This is exactly what I was looking for. I need to read the data from a
delimited file with no header, do a few transformations as described below
using a Postgres C function, and load it with the pg_bulkload utility.
The transformations below can also be handled with a query after loading all the data.
On Wed, Dec 14, 2011 at 9:51 AM, Jon Nelson wrote:
> On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson wrote:
>> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane wrote:
>>> Jon Nelson writes:
Regarding caching, I tried caching it across calls by making the
TupleDesc static and only initializing it once. ...
On Wed, Dec 14, 2011 at 9:40 AM, Jon Nelson wrote:
> On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane wrote:
>> Jon Nelson writes:
>>> Regarding caching, I tried caching it across calls by making the
>>> TupleDesc static and only initializing it once.
>>> When I tried that, I got:
>>
>>> ERROR: number of columns (6769856) exceeds limit (1664) ...
On Wed, Dec 14, 2011 at 9:25 AM, Tom Lane wrote:
> Jon Nelson writes:
>> Regarding caching, I tried caching it across calls by making the
>> TupleDesc static and only initializing it once.
>> When I tried that, I got:
>
>> ERROR: number of columns (6769856) exceeds limit (1664)
>
>> I tried to find some documentation or examples that cache the ...
Jon Nelson writes:
> Regarding caching, I tried caching it across calls by making the
> TupleDesc static and only initializing it once.
> When I tried that, I got:
> ERROR: number of columns (6769856) exceeds limit (1664)
> I tried to find some documentation or examples that cache the
> information ...
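The error quoted above is what typically happens when a statically cached TupleDesc still points into per-call memory that has already been reset, so the next call reads garbage. One common way to cache it safely is to hang the AttInMetadata off flinfo->fn_extra and copy the descriptor into fn_mcxt, the function's longer-lived memory context. A minimal sketch under those assumptions (the function name and the three placeholder field values are invented; this is not the code from the thread):

    /*
     * Sketch only: cache the AttInMetadata in fn_extra so the TupleDesc is
     * built once, in memory that survives individual calls.
     */
    #include "postgres.h"
    #include "fmgr.h"
    #include "funcapi.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(parse_line);            /* hypothetical function name */

    Datum
    parse_line(PG_FUNCTION_ARGS)
    {
        AttInMetadata *attinmeta = (AttInMetadata *) fcinfo->flinfo->fn_extra;
        char          *values[3] = {"1", "foo", "bar"};  /* placeholder fields,
                                                            assumes a 3-column
                                                            composite result */
        HeapTuple      tuple;

        if (attinmeta == NULL)
        {
            TupleDesc     tupdesc;
            MemoryContext oldcxt;

            if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
                elog(ERROR, "return type must be a row type");

            /* copy the descriptor into the long-lived fn_mcxt context */
            oldcxt = MemoryContextSwitchTo(fcinfo->flinfo->fn_mcxt);
            attinmeta = TupleDescGetAttInMetadata(CreateTupleDescCopy(tupdesc));
            MemoryContextSwitchTo(oldcxt);

            fcinfo->flinfo->fn_extra = attinmeta;
        }

        tuple = BuildTupleFromCStrings(attinmeta, values);
        PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
    }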
Ah, that did the trick, thank you Kevin,
Danny
From: Kevin Martyn
To: idc danny
Cc: "pgsql-performance@postgresql.org"
Sent: Wednesday, December 14, 2011 3:14 PM
Subject: Re: [PERFORM] copy vs. C function
try
host all all 5.0.0.0/8 md5
On W...
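For anyone finding this in the archives: the line Kevin suggests is a pg_hba.conf entry. A commented sketch of how such an entry is laid out (the 5.0.0.0/8 range is simply the one from this thread):

    # TYPE  DATABASE  USER  ADDRESS      METHOD
    host    all       all   5.0.0.0/8    md5
    # "host" = TCP/IP connections, "all"/"all" = any database and any role,
    # 5.0.0.0/8 = client addresses 5.x.x.x, "md5" = password authentication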
On Wed, Dec 14, 2011 at 12:18 AM, Tom Lane wrote:
> Jon Nelson writes:
>> The only thing I have left are these statements:
>
>> get_call_result_type
>> TupleDescGetAttInMetadata
>> BuildTupleFromCStrings
>> HeapTupleGetDatum
>> and finally PG_RETURN_DATUM
>
>> It turns out that:
>> get_call_result_type adds 43 seconds [total: 54], ...
> ...pg_hba.conf" in order to achieve my
> restriction? Please help me,
> Thank you,
> Danny
>
> --
> *From:* Tom Lane
> *To:* Jon Nelson
> *Cc:* pgsql-performance@postgresql.org
> *Sent:* Wednesday, December 14, 2011 8:18 AM
> *Subject:* Re: [PERFORM] copy vs. C function ...
From: Tom Lane
To: Jon Nelson
Cc: pgsql-performance@postgresql.org
Sent: Wednesday, December 14, 2011 8:18 AM
Subject: Re: [PERFORM] copy vs. C function
Jon Nelson writes:
> The only thing I have left are these statements:
> get_call_result_type
> TupleDescGetAttInMetadata
> BuildTupleFromCStrings ...
Jon Nelson writes:
> The only thing I have left are these statements:
> get_call_result_type
> TupleDescGetAttInMetadata
> BuildTupleFromCStrings
> HeapTupleGetDatum
> and finally PG_RETURN_DATUM
> It turns out that:
> get_call_result_type adds 43 seconds [total: 54],
> TupleDescGetAttInMetadata ...
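To make the breakdown above concrete, here is a minimal sketch of the per-call sequence those numbers refer to, where the TupleDesc and AttInMetadata are rebuilt on every call. It is not the poster's actual code; the function name and placeholder field values are invented:

    #include "postgres.h"
    #include "fmgr.h"
    #include "funcapi.h"

    PG_MODULE_MAGIC;

    PG_FUNCTION_INFO_V1(parse_line_uncached);   /* hypothetical name */

    Datum
    parse_line_uncached(PG_FUNCTION_ARGS)
    {
        TupleDesc      tupdesc;
        AttInMetadata *attinmeta;
        HeapTuple      tuple;
        char          *values[3] = {"1", "foo", "bar"};  /* placeholder fields */

        /* each of these runs again on every single call */
        if (get_call_result_type(fcinfo, NULL, &tupdesc) != TYPEFUNC_COMPOSITE)
            elog(ERROR, "return type must be a row type");

        attinmeta = TupleDescGetAttInMetadata(tupdesc);
        tuple = BuildTupleFromCStrings(attinmeta, values);

        PG_RETURN_DATUM(HeapTupleGetDatum(tuple));
    }

Hoisting the first two calls out of the per-row path, for example by caching their results as discussed earlier in the thread, is what removes most of that overhead.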
On Mon, Dec 12, 2011 at 10:38 AM, Merlin Moncure wrote:
> On Sat, Dec 10, 2011 at 7:27 PM, Jon Nelson wrote:
>> I was experimenting with a few different methods of taking a line of
>> text, parsing it, into a set of fields, and then getting that info
>> into a table.
>>
>> The first method involved writing a C program to parse a file, parse ...
On Sat, Dec 10, 2011 at 7:27 PM, Jon Nelson wrote:
> I was experimenting with a few different methods of taking a line of
> text, parsing it, into a set of fields, and then getting that info
> into a table.
>
> The first method involved writing a C program to parse a file, parse
> the lines and output newly-formatted lines in a format that ...
On Sat, Dec 10, 2011 at 8:32 PM, Craig Ringer wrote:
> On 12/11/2011 09:27 AM, Jon Nelson wrote:
>>
>> The first method involved writing a C program to parse a file, parse
>> the lines and output newly-formatted lines in a format that
>> postgresql's COPY function can use.
>> End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse, ...
Start a transaction before the first insert and commit it after the last one
and it will be much better, but I believe the COPY code path is optimized
to perform better than any set of queries can, even in a single transaction.
Sent from my iPhone
On Dec 10, 2011, at 5:27 PM, Jon Nelson wrote: ...
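Since the thread keeps coming back to COPY versus per-row statements, here is a minimal libpq sketch of streaming pre-formatted rows through the COPY protocol from a C client. The connection string and table name are invented for illustration:

    #include <stdio.h>
    #include <string.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn   *conn = PQconnectdb("dbname=test");       /* assumed conninfo */
        PGresult *res;
        char      line[8192];

        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
            return 1;
        }

        /* COPY is a single statement, so it already runs as one transaction */
        res = PQexec(conn, "COPY my_table FROM STDIN");    /* hypothetical table */
        if (PQresultStatus(res) != PGRES_COPY_IN)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            PQfinish(conn);
            return 1;
        }
        PQclear(res);

        /* feed tab-delimited, newline-terminated rows (COPY text format) */
        while (fgets(line, sizeof(line), stdin) != NULL)
            PQputCopyData(conn, line, (int) strlen(line));

        PQputCopyEnd(conn, NULL);
        res = PQgetResult(conn);                 /* final status of the COPY */
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "COPY did not complete: %s", PQerrorMessage(conn));
        PQclear(res);

        PQfinish(conn);
        return 0;
    }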
On 12/11/2011 09:27 AM, Jon Nelson wrote:
The first method involved writing a C program to parse a file, parse
the lines and output newly-formatted lines in a format that
postgresql's COPY function can use.
End-to-end, this takes 15 seconds for about 250MB (read 250MB, parse,
output new data to n...
I was experimenting with a few different methods of taking a line of
text, parsing it, into a set of fields, and then getting that info
into a table.
The first method involved writing a C program to parse a file, parse
the lines and output newly-formatted lines in a format that
postgresql's COPY function can use. ...
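As a rough illustration of that first method, a minimal sketch of such a preprocessing step. The '|' input delimiter is an assumption, and a real version would also escape backslashes and tabs for COPY's text format and preserve empty fields (strtok_r collapses adjacent delimiters):

    #include <stdio.h>
    #include <string.h>

    /* read delimiter-separated lines on stdin, emit tab-separated lines
       suitable for feeding to COPY ... FROM STDIN (escaping omitted) */
    int main(void)
    {
        char line[8192];

        while (fgets(line, sizeof(line), stdin) != NULL)
        {
            char *save  = NULL;
            char *field = strtok_r(line, "|\n", &save);
            int   first = 1;

            while (field != NULL)
            {
                if (!first)
                    putchar('\t');
                fputs(field, stdout);
                first = 0;
                field = strtok_r(NULL, "|\n", &save);
            }
            putchar('\n');
        }
        return 0;
    }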