On 2017-11-30 14:27:58 -0600, Ted Toth wrote:
> On Thu, Nov 30, 2017 at 11:40 AM, Peter J. Holzer wrote:
> > On 2017-11-30 08:43:32 -0600, Ted Toth wrote:
> >> One thing that is unclear to me is when commits occur while using psql
> > >> would you know where in the docs I can find information on this?
On Thu, 30 Nov 2017 08:43:32 -0600
Ted Toth wrote:
> What is the downside of using a DO block? I'd have to do a nextval on
> each sequence before I could use currval, right? Or I could do 'select
> last_value from '.
You are creating a piece of code that has to be parsed, tokenized,
and compiled
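Ted's alternative above — plain INSERTs tied together with sequence functions instead of PL/pgSQL variables — can be sketched like this. A minimal sketch, assuming a serial primary key `id` on `thing` (so its sequence would be named `thing_id_seq`) and a hypothetical child table `thingdata`; neither name is confirmed in the thread:

```sql
-- Parent row: the serial column draws from thing_id_seq implicitly,
-- so no explicit nextval() call is needed.
INSERT INTO thing (ltn, classification, thgrec)
VALUES ('T007336', 'THING', 7336);

-- currval() is only valid after the sequence has been used in this
-- session, which the INSERT above guarantees.
INSERT INTO thingdata (thing_id, payload)
VALUES (currval('thing_id_seq'), '...');
```

`SELECT last_value FROM thing_id_seq` would also work, but unlike `currval()` it is not session-local, so it can return another session's value under concurrent load.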
On Thu, Nov 30, 2017 at 10:40 AM, Peter J. Holzer wrote:
>
> By default psql enables autocommit which causes an implicit commit after
> every statement. With a do block I'm not sure whether that means after
> the do block or after each statement within the do block. I'd just turn
> autocommit off
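Turning psql's autocommit off, as suggested above, is a client-side setting (a sketch; `load.sql` is a stand-in filename):

```sql
\set AUTOCOMMIT off      -- psql variable; the name must be uppercase
\i load.sql              -- everything now runs in one open transaction
COMMIT;
```

An equivalent without touching the variable is to wrap the load in an explicit `BEGIN; ... COMMIT;`. Either way the server sees a single transaction. As for the uncertainty above: in the PostgreSQL versions current at the time of this thread, statements inside a DO block cannot commit individually — the implicit commit applies to the DO statement as a whole.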
On Thu, Nov 30, 2017 at 11:40 AM, Peter J. Holzer wrote:
> On 2017-11-30 08:43:32 -0600, Ted Toth wrote:
>> Date: Thu, 30 Nov 2017 08:43:32 -0600
>> From: Ted Toth
>> To: "Peter J. Holzer"
>> Cc: pgsql-general@lists.postgresql.org
>> Subject: Re: large numbers of inserts out of memory strategy
On 30 November 2017 at 05:22, Peter J. Holzer wrote:
> On 2017-11-29 08:32:02 -0600, Ted Toth wrote:
>> Yes I did generate 1 large DO block:
>>
>> DO $$
>> DECLARE thingid bigint; thingrec bigint; thingdataid bigint;
>> BEGIN
>> INSERT INTO thing
>> (ltn,classification,machine,source,thgrec,flags,
On 2017-11-29 08:32:02 -0600, Ted Toth wrote:
> Yes I did generate 1 large DO block:
>
> DO $$
> DECLARE thingid bigint; thingrec bigint; thingdataid bigint;
> BEGIN
> INSERT INTO thing
> (ltn,classification,machine,source,thgrec,flags,serial,type) VALUES
> ('T007336','THING',0,1025,7336,7,'XXX869
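The giant DO block above can usually be re-emitted as plain SQL statements, which psql streams to the server one at a time instead of shipping a single multi-megabyte function body. The DECLAREd variables can be replaced with `INSERT ... RETURNING` in a data-modifying CTE. A sketch, assuming a serial primary key `id` on `thing` and a child table `thingdata` (both names hypothetical; the column list is abbreviated):

```sql
WITH t AS (
    -- Insert the parent row and capture its generated key,
    -- replacing the PL/pgSQL variable thingid.
    INSERT INTO thing (ltn, classification, machine, source, thgrec, flags)
    VALUES ('T007336', 'THING', 0, 1025, 7336, 7)
    RETURNING id
)
INSERT INTO thingdata (thing_id, payload)
SELECT id, '...' FROM t;
```

With autocommit off, or an explicit BEGIN/COMMIT around each chunk, this keeps the transactional behaviour close to the single DO block while bounding memory use.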
On 28/11/17, Rob Sargent (robjsarg...@gmail.com) wrote:
>
> On 11/28/2017 10:50 AM, Ted Toth wrote:
> > On Tue, Nov 28, 2017 at 11:19 AM, Rob Sargent wrote:
> > > > On Nov 28, 2017, at 10:17 AM, Ted Toth wrote:
> > > >
> > > > I'm writing a migration utility to move data from non-rdbms data
> >
> I'm pretty new to postgres so I haven't changed any configuration
> setting and the log is a bit hard for me to make sense of :(
Diving into the shark tank is a helluva way to learn how to swim :-)
Are you interested in finding doc's on how to deal with the tuning?
--
Steven Lembark
> > what tools / languages are you using?
>
> I'm using python to read binary source files and create the text files
> containing the SQL. Then I'm running psql -f .
Then chunking the input should be trivial.
There are a variety of techniques you can use for things like disabling
indexes during load
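The index trick mentioned above can be sketched like this (the index name is hypothetical; the real schema isn't shown in the thread):

```sql
-- Drop secondary indexes before the bulk load, so each inserted row
-- no longer pays for index maintenance.
DROP INDEX IF EXISTS thing_thgrec_idx;

-- ... run the bulk INSERTs / COPY here ...

-- Rebuild once, after all rows are in; a single build over the full
-- table is much cheaper than incremental maintenance per row.
CREATE INDEX thing_thgrec_idx ON thing (thgrec);
```

The same idea applies to foreign-key constraints (drop or defer them during the load, re-add afterwards), at the cost of deferring validation until the end.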
Ted Toth writes:
> On Tue, Nov 28, 2017 at 9:59 PM, Tom Lane wrote:
>> So whatever's going on here, there's more to it than a giant client-issued
>> INSERT (or COPY), or for that matter a large number of small ones. What
>> would seem to be required is a many-megabyte-sized plpgsql function body
On Tue, Nov 28, 2017 at 12:38 PM, Tomas Vondra wrote:
> So what does the script actually do? Because psql certainly is not
> running pl/pgsql procedures on it's own. We need to understand why
> you're getting OOM in the first place - just inserts alone should not
> cause failures like that.
On 11/28/2017 07:26 PM, Ted Toth wrote:
> On Tue, Nov 28, 2017 at 12:01 PM, Tomas Vondra
> wrote:
>>
>> ...
>>
>> That is, most of the memory is allocated for SPI (2.4GB) and PL/pgSQL
>> procedure (500MB). How do you do the load? What libraries/drivers?
>>
>
> I'm doing the load with 'psql -f'.
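Given that the load is one `psql -f` run of a single huge DO block, the whole block becomes one PL/pgSQL function body the server must parse and hold in memory, which matches the SPI and PL/pgSQL allocations quoted above. A generator can instead emit COPY with inline data, which the server processes row by row (column list abbreviated, values illustrative):

```sql
COPY thing (ltn, classification, thgrec) FROM stdin WITH (FORMAT csv);
T007336,THING,7336
T007337,THING,7337
\.
```

Each data row is consumed as it arrives, so peak memory no longer grows with the size of the input file.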
I'm writing a migration utility to move data from non-rdbms data
source to a postgres db. Currently I'm generating SQL INSERT
statements involving 6 related tables for each 'thing'. With 100k or
more 'things' to migrate I'm generating a lot of statements and when I
try to import using psql postgres