"invalid contrecord" error on replica

2019-07-02 Thread JOULAUD François
Hello,

I encountered a problem on replicas after the primary crashed for lack of disk 
space.

After that, the replica logged a constant stream of "invalid contrecord" 
messages and replication stopped working.

The only way I found to get it working again was to completely restart the 
replica.


The logs:

Jun 11 12:44:28 myhostreplica postgres[20302]: [7-1] 2019-06-11 14:44:28 CEST 
[20302-2] FATAL:  could not receive data from WAL stream: SSL SYSCALL error: 
EOF detected
Jun 11 12:44:28 myhostreplica postgres[19166]: [6-1] 2019-06-11 14:44:28 CEST 
[19166-1] FATAL:  could not connect to the primary server: FATAL:  the database 
system is in recovery mode
Jun 11 12:44:33 myhostreplica postgres[19167]: [6-1] 2019-06-11 14:44:33 CEST 
[19167-1] LOG:  started streaming WAL from primary at 2C/6A00 on timeline 1
Jun 11 12:44:33 myhostreplica postgres[20298]: [10-1] 2019-06-11 14:44:33 CEST 
[20298-6] LOG:  invalid contrecord length 4571 at 2C/69FFF648
Jun 11 12:44:33 myhostreplica postgres[19167]: [7-1] 2019-06-11 14:44:33 CEST 
[19167-2] FATAL:  terminating walreceiver process due to administrator command
Jun 11 12:44:33 myhostreplica postgres[20298]: [11-1] 2019-06-11 14:44:33 CEST 
[20298-7] LOG:  invalid contrecord length 4571 at 2C/69FFF648
Jun 11 12:44:33 myhostreplica postgres[20298]: [12-1] 2019-06-11 14:44:33 CEST 
[20298-8] LOG:  invalid contrecord length 4571 at 2C/69FFF648

François J.



Re: restore error

2019-07-02 Thread Laurenz Albe
Prakash Ramakrishnan wrote:
> While restoring the database using psql I am getting the error below. Any idea 
> about this?
> 
> psql:AICH01PR.sql:641367264: ERROR:  connection for foreign table 
> "mf_adm_spotservicefiles" cannot be established
> DETAIL:  ORA-12170: TNS:Connect timeout occurred

Just guessing:

Perhaps there is a materialized view that is based on a foreign table,
and when PostgreSQL tries to populate the materialized view during
the restore, it fails to connect to the remote database.
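
A quick way to check for such a view is to look at pg_matviews; a rough sketch,
assuming the foreign table name from the error message:

    SELECT schemaname, matviewname
    FROM   pg_matviews
    WHERE  definition ILIKE '%mf_adm_spotservicefiles%';

If one turns up, refreshing it during the restore would need a working
connection to the remote (Oracle) database.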

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re: restore error

2019-07-02 Thread Prakash Ramakrishnan
Thanks, I'll check it out.

Regards,
Prakash.R

On Tue, Jul 2, 2019 at 1:05 PM Laurenz Albe 
wrote:

> Prakash Ramakrishnan wrote:
> > While restoring the database using psql I am getting the error below. Any
> > idea about this?
> >
> > psql:AICH01PR.sql:641367264: ERROR:  connection for foreign table
> "mf_adm_spotservicefiles" cannot be established
> > DETAIL:  ORA-12170: TNS:Connect timeout occurred
>
> Just guessing:
>
> Perhaps there is a materialized view that is based on a foreign table,
> and when PostgreSQL tries to populate the materialized view during
> the restore, it fails to connect to the remote database.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>

-- 
Thanks,
Prakash.R
PostgreSQL - Offshore DBA support TCS / Nielsen Infrastructure Team On call
: +91-8939599426


[no subject]

2019-07-02 Thread Prakash Ramakrishnan
Hi Team,

We are getting the error below while importing the CSV file. Please do the
needful.

CREATE TABLE ujshjk (
    pod_id bigint NOT NULL, src_id smallint, eff_strt_dt date, eff_end_dt date,
    ins_dt timestamp without time zone, updt_dt timestamp without time zone,
    nat_key_1 character(50), nat_key_2 character(50), nat_key_3 character(50),
    nat_key_4 character(50), nat_key_5 character(50)
);

-bash-4.2$ more ckr_sto.csv
 4937880 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128790679 |   |   |   |
 4939355 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128639345 |   |   |   |
 4939744 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128684510 |   |   |   |
 4939750 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128683100 |   |   |   |
 4936360 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128567527 |   |   |   |
 4940308 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128781329 |   |   |   |
 4938006 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 4000128912554 |   |   |   |
 4937457 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
  || 5000128426574 |   |   |   |

error
--
postgres=# \copy ckr_sto from /data/ckr_sto.csv CSV HEADER;
ERROR:  invalid input syntax for integer: " 4939355 | 12 | 2015-01-05
 || 2015-01-05 05:51:47 | || 5000128639345 |
|   |   | "
CONTEXT:  COPY ckr_sto, line 2, column pod_id: " 4939355 | 12 |
2015-01-05  || 2015-01-05 05:51:47 | ||
500012863934..."
postgres=#
postgres=#



-- 
Thanks,
Prakash.R


Re: Statistics tables not being updated anymore

2019-07-02 Thread Laurenz Albe
Ron wrote:
> v9.6.9
> 
> Statistics views like pg_stat_*_tables, pg_stat_*_indexes, pg_statio_*_tables 
> and pg_statio_*_indexes
> aren't being updated anymore. Specifically, all counter fields are 0, and 
> date fields are blank.
> 
> The first thing I checked was postgresql.conf (but it hasn't been modified 
> since December 2018),
> and track_activities is turned on. Also, I connect as user "postgres", so 
> it's not a privileges problem.
> 
> Where else should I look?
> 
> track_activities| on
> track_activity_query_size   | 1024  
> track_commit_timestamp  | off   
> track_counts| on
> track_functions | none  
> track_io_timing | off   

I have seen something like that before, and described the incident in
https://www.cybertec-postgresql.com/en/stale-statistics-cause-table-bloat/

What happened there was that during system startup, PostgreSQL was started
and created the statistics collector UDP socket on IPv6 localhost.

Later in the boot sequence, IPv6 was disabled, so no more statistics
could be collected.  Since it is a UDP socket, there were no errors.
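
A rough way to check whether the collector is still delivering fresh data (a
sketch; it assumes pg_stat_get_snapshot_timestamp() is available, as it is on
9.6):

    -- If this age keeps growing across sessions, the statistics collector
    -- is most likely no longer writing new stats.
    SELECT now() - pg_stat_get_snapshot_timestamp() AS snapshot_age;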

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re:

2019-07-02 Thread Laurenz Albe
On Tue, 2019-07-02 at 13:09 +0530, Prakash Ramakrishnan wrote:
> We are getting the error below while importing the CSV file. Please do the 
> needful.

I'd say the needful thing here is for you to read the documentation...

> -bash-4.2$ more ckr_sto.csv
>  4937880 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128790679 |   |   |   |
>  4939355 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128639345 |   |   |   |
>  4939744 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128684510 |   |   |   |
>  4939750 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128683100 |   |   |   |
>  4936360 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128567527 |   |   |   |
>  4940308 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128781329 |   |   |   |
>  4938006 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 4000128912554 |   |   |   |
>  4937457 | 12 | 2015-01-05  || 2015-01-05 05:51:47 | 
> || 5000128426574 |   |   |   |
> 
> error
> --
> postgres=# \copy ckr_sto from /data/ckr_sto.csv CSV HEADER;
> ERROR:  invalid input syntax for integer: " 4939355 | 12 | 2015-01-05  |  
>   | 2015-01-05 05:51:47 | || 5000128639345 |  
>  |   |   | "
> CONTEXT:  COPY ckr_sto, line 2, column pod_id: " 4939355 | 12 | 
> 2015-01-05  || 2015-01-05 05:51:47 | || 
> 500012863934..."

From the documentation of COPY:

DELIMITER

Specifies the character that separates columns within each row (line) of 
the file.
The default is a tab character in text format, a comma in CSV format.
This must be a single one-byte character. This option is not allowed when 
using binary format.
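
A minimal sketch of a load with an explicit delimiter (this assumes the file
really is '|'-separated with a header line; the space-padded empty fields in
the psql-style dump above would still need cleaning up before they load into
the non-text columns):

    \copy ckr_sto FROM '/data/ckr_sto.csv' WITH (FORMAT csv, DELIMITER '|', HEADER)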

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com





Re:

2019-07-02 Thread Prakash Ramakrishnan
Thanks a lot. I will check and let you know.

Regards,
Prakash.R

On Tue, Jul 2, 2019 at 1:15 PM Laurenz Albe 
wrote:

> On Tue, 2019-07-02 at 13:09 +0530, Prakash Ramakrishnan wrote:
> > We are getting the error below while importing the CSV file. Please do
> > the needful.
>
> I'd say the needful thing here is for you to read the documentation...
>
> > -bash-4.2$ more ckr_sto.csv
> >  4937880 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128790679 |   |   |   |
> >  4939355 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128639345 |   |   |   |
> >  4939744 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128684510 |   |   |   |
> >  4939750 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128683100 |   |   |   |
> >  4936360 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128567527 |   |   |   |
> >  4940308 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128781329 |   |   |   |
> >  4938006 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 4000128912554 |   |   |   |
> >  4937457 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>  || 5000128426574 |   |   |   |
> >
> > error
> > --
> > postgres=# \copy ckr_sto from /data/ckr_sto.csv CSV HEADER;
> > ERROR:  invalid input syntax for integer: " 4939355 | 12 |
> 2015-01-05  || 2015-01-05 05:51:47 | ||
> 5000128639345 |   |   |   | "
> > CONTEXT:  COPY ckr_sto, line 2, column pod_id: " 4939355 | 12 |
> 2015-01-05  || 2015-01-05 05:51:47 | ||
> 500012863934..."
>
> From the documentation of COPY:
>
> DELIMITER
>
> Specifies the character that separates columns within each row (line)
> of the file.
> The default is a tab character in text format, a comma in CSV format.
> This must be a single one-byte character. This option is not allowed
> when using binary format.
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
>

-- 
Thanks,
Prakash.R
PostgreSQL - Offshore DBA support TCS / Nielsen Infrastructure Team On call
: +91-8939599426


Sequences part 2

2019-07-02 Thread Karl Martin Skoldebrand
Hi again,

Looking more at sequences, I have some sequences that start with ID 1, 
incrementing each record by 1.
So I have e.g. 1 Spain 2. Germany 3. France 4. Ireland 5. Norway
Now I want to insert more countries between France and Ireland, and also alter 
the increment. The latter part might be easy, as I understand it. But can I 
achieve this:
1 Spain 2. Germany 3. France 4. Greece 5. Latvia 6. Cyprus 7. Ireland 8. Norway
That is, inserting records in the middle of the existing records, or maybe even
1 Spain 2. Germany 3. France 4. Greece 5. Ireland 6. Norway 14. Latvia 24. 
Cyprus

//Martin S






Re: Sequences part 2

2019-07-02 Thread Thomas Kellerer
Karl Martin Skoldebrand schrieb am 02.07.2019 um 12:44:
> Looking more at sequences, I have some sequences that start with ID 1, 
> incrementing each record by 1.
> 
> So I have e.g. 1 Spain 2. Germany 3. France 4. Ireland 5. Norway
> 
> Now I want to insert more countries, between France and Ireland. 

That is a wrong assumption - there is no "between" for rows in a relational 
database. 

Why do you think you need that? 


A sequence is used to generate a unique number. The only job of that number is 
to be unique. 
You should never rely on PK values to be "gapless" nor should you rely on them 
defining any kind of ordering. 

If you need to sort your countries in a non-default way, add a specific 
sort_order column to your table which you can use when selecting from the table.
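
A minimal sketch of that approach (the table and column names here are invented
for illustration):

    ALTER TABLE countries ADD COLUMN sort_order integer;

    UPDATE countries SET sort_order = 10 WHERE name = 'Spain';
    UPDATE countries SET sort_order = 20 WHERE name = 'Germany';
    -- ... new rows can then be given any value that slots them in between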




RE: Sequences part 2

2019-07-02 Thread Karl Martin Skoldebrand
Karl Martin Skoldebrand schrieb am 02.07.2019 um 12:44:
> Looking more at sequences, I have some sequences that start with ID 1, 
> incrementing each record by 1.
> 
> So I have e.g. 1 Spain 2. Germany 3. France 4. Ireland 5. Norway
> 
> Now I want to insert more countries, between France and Ireland. 

That is a wrong assumption - there is no "between" for rows in a relational 
database. 

Why do you think you need that? 


A sequence is used to generate a unique number. The only job of that number is 
to be unique. 
You should never rely on PK values to be "gapless" nor should you rely on them 
defining any kind of ordering. 

--
Yes, I'm fairly aware of this. However, the application the database table 
belongs to seems to rely on a specific order in the database. I.e. if I just 
add values to the table, they end up in an undesirable spot on the web page, 
possibly due to how the application is coded.
That is why I wanted them to be inserted among the previous records.

//Martin S







Re: Sequences part 2

2019-07-02 Thread Karsten Hilbert
On Tue, Jul 02, 2019 at 11:20:42AM +, Karl Martin Skoldebrand wrote:

> Yes, I'm fairly aware of this. However, the application the database table 
> belongs to seems to rely on a specific order in the database. I.e. if I just 
> add values to the table, they end up in an undesirable spot on the web page, 
> possibly due to how the application is coded.
> That is why I wanted them to be inserted among the previous records.

rename the table

install a view with the name of the table and appropriate
triggers and/or rules between app and database enforcing
whatever questionable assumptions the app thinks it
needs -- if at all logically possible
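
A rough sketch of that idea (all names invented; this assumes a sort_order
column such as the one suggested earlier in the thread):

    ALTER TABLE countries RENAME TO countries_data;

    CREATE VIEW countries AS
        SELECT id, name
        FROM   countries_data
        ORDER  BY sort_order;

    -- plus INSTEAD OF triggers on the view if the application
    -- also writes through it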

Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




RE: Sequences part 2

2019-07-02 Thread Karl Martin Skoldebrand


> Yes, I'm fairly aware of this. However, the application the database table 
> belongs to seems to rely on a specific order in the database. I.e. if I just 
> add values to the table, they end up in an undesirable spot on the web page, 
> possibly due to how the application is coded.
> That is why I wanted them to be inserted among the previous records.

rename the table

install a view with the name of the table and appropriate triggers and/or rules 
between app and database enforcing whatever questionable assumptions the app 
thinks it needs -- if at all logically possible

--
Hmm, that could be a possible avenue.
The true horror of the database table is that it contains mixed data, so it 
could possibly be "easier" to just remodel the table (split it into several) 
for this function, if things are in fact the way they look to us at the 
moment. Either way, it's not a quick fix for someone fairly inexperienced with 
PostgreSQL who only has a couple of days until the holidays.

//Martin S










Re: Sequences part 2

2019-07-02 Thread Thomas Kellerer
Karl Martin Skoldebrand schrieb am 02.07.2019 um 13:20:
>>> Now I want to insert more countries, between France and Ireland.
>> 
>> That is a wrong assumption - there is no "between" for rows in a
>> relational database.
>> 
> 
> Yes, I'm fairly aware of this. However, the application the
> database table belongs to seems to rely on a specific order in the
> database. I.e. if I just add values to the table, they end up in an
> undesirable spot on the web page, possibly due to how the application
> is coded. That is why I wanted them to be inserted among the previous
> records.

There is no "order of rows" in a relational database. 

Even _if_ you squeeze the IDs of the new rows between existing ones, 
you still have no guarantee whatsoever on the order of the rows 
returned by a SELECT statement. 

The only(!) way to get a guaranteed sort order is to use ORDER BY. 
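
For example (table and column names are assumptions):

    SELECT name FROM countries ORDER BY sort_order;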

Thomas




Re: Statistics tables not being updated anymore

2019-07-02 Thread Ron

On 7/1/19 1:48 PM, Tom Lane wrote:
> Ron  writes:
>> Statistics views like pg_stat_*_tables, pg_stat_*_indexes,
>> pg_statio_*_tables and pg_statio_*_indexes aren't being updated anymore.
>> Specifically, all counter fields are 0, and date fields are blank.
>
> Does anything show up in the postmaster log when you try to query
> one of those views?
>
> It seems like something must be wedged either in the stats collector
> process or in backends' communication with that process.  Hard to say
> what on the basis of this evidence though.
>
> If you can do a postmaster restart without too much trouble, that
> might resolve the issue.


That did, in fact, solve the problem.

--
Angular momentum makes the world go 'round.




Re: Statistics tables not being updated anymore

2019-07-02 Thread Adrian Klaver

On 7/2/19 6:28 AM, Ron wrote:
> On 7/1/19 1:48 PM, Tom Lane wrote:
>> Ron  writes:
>>> Statistics views like pg_stat_*_tables, pg_stat_*_indexes,
>>> pg_statio_*_tables and pg_statio_*_indexes aren't being updated anymore.
>>> Specifically, all counter fields are 0, and date fields are blank.
>>
>> Does anything show up in the postmaster log when you try to query
>> one of those views?
>>
>> It seems like something must be wedged either in the stats collector
>> process or in backends' communication with that process.  Hard to say
>> what on the basis of this evidence though.
>>
>> If you can do a postmaster restart without too much trouble, that
>> might resolve the issue.
>
> That did, in fact, solve the problem.

It got it working again, but it did not actually answer what caused the 
issue. The problem could still reappear.



--
Adrian Klaver
adrian.kla...@aklaver.com




Re: plpgsql: How to modify a field in an array of records

2019-07-02 Thread Tom Lane
Dirk Mika  writes:
> But when I try to modify a field of a record in the array I get a syntax 
> error.
> l_classes_array[i].field1 := l_value;
> The error is ERROR: syntax error at or near "." Position: 12414 where 
> position points to the . after the [i]. I've no idea what's causing this 
> syntax error.

Yeah, that seems like it ought to work, and ideally it would.  But plpgsql
doesn't currently have support for nested field-of-array-element or
element-of-record-field assignments.  (I've looked briefly at this in the
past, and it doesn't seem impossible, but it's definitely not trivial
either.)

You'll have to do some kind of workaround like assigning the whole array
element to a temp variable, fixing the field in the temp variable, then
assigning back :-(
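
A minimal self-contained illustration of that workaround (it assumes the
composite type t_class_record from the original post already exists; the
values are invented):

    DO $$
    DECLARE
        classes t_class_record[];
        tmp     t_class_record;
    BEGIN
        classes := ARRAY[ROW('C1', 1, 2)::t_class_record,
                         ROW('C2', 3, 4)::t_class_record];

        -- copy the element out, modify the field, write it back
        tmp := classes[1];
        tmp.field1 := 42;
        classes[1] := tmp;

        RAISE NOTICE 'field1 is now %', (classes[1]).field1;  -- prints 42
    END
    $$;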

regards, tom lane




Re: Statistics tables not being updated anymore

2019-07-02 Thread Adrien Nayrat
Hello,

FYI, check_pgactivity has a service to detect frozen stat collector:
https://github.com/OPMDG/check_pgactivity#user-content-stat_snapshot_age-9.5

We added this service after a customer deactivated IPv6, which broke the stats
collector.

Regards,





pglogical extension. - 2 node master <-> master logical replication ?

2019-07-02 Thread Day, David
Does anyone have test scripts or a link to an example of bi-directional 
(master <-> master) replication using pglogical?
I am led to believe from the documentation that this should be possible:
"Limited multi-master support with conflict resolution exists, but mutual 
replication connections must be added individually", which is left unexplained.
Environment:  FreeBSD11.3,  Postgres 11.2,  pglogical 2.2.1
Pglogical installed on both hosts.
pg_hba.conf and postgresql.conf options adjusted per recommendations.

Using pglogical I have successfully set up one-way replication between the 
hosts for a table, but cannot get the reverse subscription working.

On attempting the reverse subscription from host1 to host2

select pglogical.create_subscription('alabama_sub', 
'host=alabama port=5432 dbname=ace_db user=replicator',
'{connections}', false, false, '{}' )

could not connect to the postgresql server: FATAL:  role "pgsql" does not exist
DETAIL:  dsn was:  host=georgia port=5432 dbname=ace_db

--
Wrong DSN and role?
The installed PostgreSQL superuser role is not pgsql.
I did not see this issue in the working subscription direction.
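
One guess at the cause: a node DSN with no user falls back to the
operating-system user name (pgsql on FreeBSD). A sketch of how the node on
georgia might be (re)defined with an explicit role (the node name and role
here are assumptions, not taken from the actual setup):

    SELECT pglogical.create_node(
        node_name := 'georgia_node',
        dsn       := 'host=georgia port=5432 dbname=ace_db user=replicator'
    );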

Does anyone have test scripts, a link to an example of bi-directional 
replication using pglogical, or a suggestion as to where I went wrong in the 
setup?

Best Regards


Dave




Re:

2019-07-02 Thread raf
Laurenz Albe wrote:

> On Tue, 2019-07-02 at 13:09 +0530, Prakash Ramakrishnan wrote:
> > We are getting the error below while importing the CSV file. Please do the 
> > needful.
> 
> I'd say the needful thing here is for you to read the documentation...
> 
> > -bash-4.2$ more ckr_sto.csv
> >  4937880 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128790679 |   |   |   |
> >  4939355 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128639345 |   |   |   |
> >  4939744 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128684510 |   |   |   |
> >  4939750 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128683100 |   |   |   |
> >  4936360 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128567527 |   |   |   |
> >  4940308 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128781329 |   |   |   |
> >  4938006 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 4000128912554 |   |   |   |
> >  4937457 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
> >  || 5000128426574 |   |   |   |
> > 
> > error
> > --
> > postgres=# \copy ckr_sto from /data/ckr_sto.csv CSV HEADER;
> > ERROR:  invalid input syntax for integer: " 4939355 | 12 | 2015-01-05  
> > || 2015-01-05 05:51:47 | || 5000128639345 | 
> >   |   |   | "
> > CONTEXT:  COPY ckr_sto, line 2, column pod_id: " 4939355 | 12 | 
> > 2015-01-05  || 2015-01-05 05:51:47 | || 
> > 500012863934..."
> 
> From the documentation of COPY:
> 
> DELIMITER
> 
> Specifies the character that separates columns within each row (line) of 
> the file.
> The default is a tab character in text format, a comma in CSV format.
> This must be a single one-byte character. This option is not allowed when 
> using binary format.
> 
> Yours,
> Laurenz Albe
> -- 
> Cybertec | https://www.cybertec-postgresql.com

in other words, ckr_sto.csv is not a csv file.
it just has .csv at the end of its name.
that's why psql tried to interpret the entire
line as the first column: there were no commas.

its contents should look something like:

4937880,12,2015-01-05,,2015-01-05 05:51:47,,,5000128790679
4939355,12,2015-01-05,,2015-01-05 05:51:47,,,5000128639345
4939744,12,2015-01-05,,2015-01-05 05:51:47,,,5000128684510
4939750,12,2015-01-05,,2015-01-05 05:51:47,,,5000128683100
4936360,12,2015-01-05,,2015-01-05 05:51:47,,,5000128567527
4940308,12,2015-01-05,,2015-01-05 05:51:47,,,5000128781329
4938006,12,2015-01-05,,2015-01-05 05:51:47,,,4000128912554
4937457,12,2015-01-05,,2015-01-05 05:51:47,,,5000128426574





Re:

2019-07-02 Thread Andrew Kerber
Yes, CSV stands for comma separated variable length.  This means that the
fields in each row should be separated by commas, with a carriage return at
the end of each record.  You have a file using | separators, which mean it
is not csv.

On Tue, Jul 2, 2019 at 6:04 PM  wrote:

> Laurenz Albe wrote:
>
> > On Tue, 2019-07-02 at 13:09 +0530, Prakash Ramakrishnan wrote:
> > > We are getting the error below while importing the CSV file. Please do
> > > the needful.
> >
> > I'd say the needful thing here is for you to read the documentation...
> >
> > > -bash-4.2$ more ckr_sto.csv
> > >  4937880 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128790679 |   |   |   |
> > >  4939355 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128639345 |   |   |   |
> > >  4939744 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128684510 |   |   |   |
> > >  4939750 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128683100 |   |   |   |
> > >  4936360 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128567527 |   |   |   |
> > >  4940308 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128781329 |   |   |   |
> > >  4938006 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 4000128912554 |   |   |   |
> > >  4937457 | 12 | 2015-01-05  || 2015-01-05 05:51:47 |
>|| 5000128426574 |   |   |   |
> > >
> > > error
> > > --
> > > postgres=# \copy ckr_sto from /data/ckr_sto.csv CSV HEADER;
> > > ERROR:  invalid input syntax for integer: " 4939355 | 12 |
> 2015-01-05  || 2015-01-05 05:51:47 | ||
> 5000128639345 |   |   |   | "
> > > CONTEXT:  COPY ckr_sto, line 2, column pod_id: " 4939355 | 12 |
> 2015-01-05  || 2015-01-05 05:51:47 | ||
> 500012863934..."
> >
> > From the documentation of COPY:
> >
> > DELIMITER
> >
> > Specifies the character that separates columns within each row
> (line) of the file.
> > The default is a tab character in text format, a comma in CSV format.
> > This must be a single one-byte character. This option is not allowed
> when using binary format.
> >
> > Yours,
> > Laurenz Albe
> > --
> > Cybertec | https://www.cybertec-postgresql.com
>
> in other words, ckr_sto.csv is not a csv file.
> it just has .csv at the end of its name.
> that's why psql tried to interpret the entire
> line as the first column: there were no commas.
>
> its contents should look something like:
>
> 4937880,12,2015-01-05,,2015-01-05 05:51:47,,,5000128790679
> 4939355,12,2015-01-05,,2015-01-05 05:51:47,,,5000128639345
> 4939744,12,2015-01-05,,2015-01-05 05:51:47,,,5000128684510
> 4939750,12,2015-01-05,,2015-01-05 05:51:47,,,5000128683100
> 4936360,12,2015-01-05,,2015-01-05 05:51:47,,,5000128567527
> 4940308,12,2015-01-05,,2015-01-05 05:51:47,,,5000128781329
> 4938006,12,2015-01-05,,2015-01-05 05:51:47,,,4000128912554
> 4937457,12,2015-01-05,,2015-01-05 05:51:47,,,5000128426574
>
>
>
>

-- 
Andrew W. Kerber

'If at first you dont succeed, dont take up skydiving.'


Re:

2019-07-02 Thread Adrian Klaver

On 7/2/19 4:08 PM, Andrew Kerber wrote:
> Yes, CSV stands for comma separated variable length.

CSV = Comma Separated Values

> This means that the fields in each row should be separated by commas, with a
> carriage return at the end of each record.  You have a file using |
> separators, which mean it is not csv.

That is not strictly true:

https://en.wikipedia.org/wiki/Comma-separated_values

And is definitely not true for this context, using Postgres COPY:

https://www.postgresql.org/docs/9.6/sql-copy.html

"CSV Format

This format option is used for importing and exporting the Comma 
Separated Value (CSV) file format used by many other programs, such as 
spreadsheets. Instead of the escaping rules used by PostgreSQL's 
standard text format, it produces and recognizes the common CSV escaping 
mechanism.


The values in each record are separated by the DELIMITER character. If 
the value contains the delimiter character, the QUOTE character, the 
NULL string, a carriage return, or line feed character, then the whole 
value is prefixed and suffixed by the QUOTE character, and any 
occurrence within the value of a QUOTE character or the ESCAPE character 
is preceded by the escape character. You can also use FORCE_QUOTE to 
force quotes when outputting non-NULL values in specific columns.


...
"

You do have to specify the delimiter if it is not the default comma. In 
the OP's case the delimiter would need to be set to '|'.







--
Adrian Klaver
adrian.kla...@aklaver.com




Re: plpgsql: How to modify a field in an array of records

2019-07-02 Thread raf
Dirk Mika wrote:

> Hi,
> 
> Another Oracle -> PostgreSQL Question. ☺
> 
> I try to migrate a package procedure. Local types were declared in the Oracle 
> package:
> 
>   TYPE t_class_record IS RECORD
>   (
>  id_class   classes.id_class%TYPE,
>  field1 number,
>  field2 number
>   );
> 
>   TYPE t_classes_table IS TABLE OF t_class_record
>  INDEX BY BINARY_INTEGER;
> 
>   l_classes_table t_classes_table;
> 
> l_classes_table is initialized by a SELECT statement where later single 
> fields of single array elements are modified like this:
> 
>   l_classes_table(i).field1 := l_value;
> 
> So far I have done the following in PostgreSQL:
> 
> 
>   *   Defined a composite type that corresponds to the structure listed above:
> CREATE TYPE t_class_record AS (id_class CHARACTER VARYING,
>field1   INTEGER,
>field2   INTEGER);
> 
>   *   Defined a procedure with a local variable of type Array of 
> t_class_record:
> l_classes_array   t_class_record [];
> 
> But when I try to modify a field of a record in the array I get a syntax 
> error.
> 
> l_classes_array[i].field1 := l_value;
> 
> The error is ERROR: syntax error at or near "." Position: 12414 where 
> position points to the . after the [i]. I've no idea what's causing this 
> syntax error.
> 
> My goal is to store an array of records, fetched via SELECT Statement, in a 
> variable in a way, that I am able to modify individual fields of individual 
> array elements later in the function.
> 
> Am I on the right track or should I approach the problem completely 
> differently?
> 
> Dirk
> 
> --
> Dirk Mika
> Software Developer

Hi Dirk,

I don't know the answer to this but I have encountered
a situation where the plpgsql syntax didn't support
something I would have liked (specifically, it
can't/couldn't select into array elements so I needed
to select into multiple scalar variables and then
assign their values to the array elements).

If the problem you are facing is similar in nature, you
might be able to work around it by having a variable of
the same type as the array elements, assign to it the
record that you want to modify, make the modification
in the single record variable and then assign that
single record variable back into the array element that
it came from.

i.e. something like:

  declare
  a rectype[];
  r rectype;
  i  integer;
  begin
  ...
  r := a[i];
  r.field := newvalue;
  a[i] := r;
  ...

I didn't even realise that you could have an array of records.
I've only ever used arrays of scalar values.

cheers,
raf





Re: plpgsql: How to modify a field in an array of records

2019-07-02 Thread Dirk Mika

--
Dirk Mika
Software Developer

mika:timing GmbH
Strundepark - Kürtener Str. 11b
51465 Bergisch Gladbach
Germany

fon +49 2202 2401-1197
dirk.m...@mikatiming.de
www.mikatiming.de

AG Köln HRB 47509 * WEEE-Reg.-Nr. DE 90029884
Geschäftsführer: Harald Mika, Jörg Mika



## How2Use
## the ChampionChip by mika:timing
## https://youtu.be/qfOFXrpSKLQ
On 02.07.19, 15:53, "Tom Lane"  wrote:

> Yeah, that seems like it ought to work, and ideally it would.  But plpgsql
> doesn't currently have support for nested field-of-array-element or
> element-of-record-field assignments.  (I've looked briefly at this in the
> past, and it doesn't seem impossible, but it's definitely not trivial
> either.)

Ah, that explains the error message, of course. And I thought it was actually a 
matter of correct syntax. :-)

> You'll have to do some kind of workaround like assigning the whole array
> element to a temp variable, fixing the field in the temp variable, then
> assigning back :-(

I tried that and it seemed to work. There is no error message and the function 
seems to do the right thing. ;-)

Dirk



Re: plpgsql: How to modify a field in an array of records

2019-07-02 Thread Dirk Mika



--
Dirk Mika
Software Developer

mika:timing GmbH
Strundepark - Kürtener Str. 11b
51465 Bergisch Gladbach
Germany

fon +49 2202 2401-1197
dirk.m...@mikatiming.de
www.mikatiming.de

AG Köln HRB 47509 * WEEE-Reg.-Nr. DE 90029884
Geschäftsführer: Harald Mika, Jörg Mika



## How2Use
## the ChampionChip by mika:timing
## https://youtu.be/qfOFXrpSKLQ
On 03.07.19, 01:39, "r...@raf.org"  wrote:

> If the problem you are facing is similar in nature, you
> might be able to work around it by having a variable of
> the same type as the array elements, assign to it the
> record that you want to modify, make the modification
> in the single record variable and then assign that
> single record variable back into the array element that
> it came from.
>
> i.e. something like:
>
>   declare
>   a rectype[];
>   r rectype;
>   i  integer;
>   begin
>   ...
>   r := a[i];
>   r.field := newvalue;
>   a[i] := r;
>   ...

It actually works the way you suggested. Not nice, but as long as it works, 
it's ok ;-)

> I didn't even realise that you could have an array of records.
> I've only ever used arrays of scalar values.

Yes, if that wasn't possible, I would have had to switch to a series of arrays 
of scalar values.

BR
Dirk






multiple nodes in FDW create server statement

2019-07-02 Thread Vijaykumar Jain
All,

We are glad to have the multi-host connection string feature, which allows us
to load-balance reads; it has helped us a lot.
https://paquier.xyz/postgresql-2/postgres-10-multi-host-connstr/
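
For reference, this is the kind of connection string we rely on (host names
are placeholders):

    host=replica1,replica2 port=5432,5432 dbname=pgtesting target_session_attrs=any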

I would like to know if it is possible to request a similar enhancement for
FDWs too:
https://www.postgresql.org/docs/11/sql-createserver.html

Unless I am missing something obvious, we want to use this option so that when
one of the read instances is down, an FDW read query can fall back to the
other. The second instance is only a fallback, not round robin, and we have two
foreign servers: one for writes and one for reads.
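
For context, a sketch of what such a foreign server definition looks like today
(names are placeholders):

    CREATE SERVER reads_srv FOREIGN DATA WRAPPER postgres_fdw
        OPTIONS (host 'reads.example.internal', port '5432', dbname 'pgtesting');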

The servers are addressed by CNAMEs, so failover is just a CNAME switch, but
that takes some time to propagate and requires someone to be at the machine.

I guess the concern would be that if the primary is down, writes would be
directed to another set of servers, which may end up in confusing errors:

pgtesting=> create table foo(id int);

ERROR:  cannot execute CREATE TABLE in a read-only transaction

pgtesting=> \q

Let me know if this is not clear.

Regards,
Vijay