Re: Windows 10 got stuck with PostgreSQL at starting up. Adding delay lets it avoid.

2018-06-29 Thread TAKATSUKA Haruka


On Fri, 29 Jun 2018 08:34:18 +0200
Thomas Kellerer  wrote:

> Did you try setting the service to "delayed start"?

We haven't tried it yet; thanks for the idea. I would think MS would
already have advised us if that were the simple solution for this case.
In any case, we will try it and confirm.

Thanks,
Takatsuka Haruka


> TAKATSUKA Haruka wrote on 29.06.2018 at 08:03:
> > I ran into trouble with PostgreSQL 9.3.x on Windows 10.
> > I would like to add new delay code as an official build option.
> > 
> > Windows 10 sometimes (approximately once in 300 tries) hangs
> > during OS startup. The logs say it happened while the PostgreSQL
> > service was starting. When the OS stopped, some postgres auxiliary
> > processes had been started and some had not been started yet.
> > 
> > The Windows dump shows that some threads of the postgres auxiliary
> > processes are waiting on OS-level locks, and the logon process's
> > threads are also waiting on a lock. The MS help desk said that an
> > OS-level deadlock in PostgreSQL caused the OS freeze. I find that a
> > strange story. But, in fact, the hang did not happen in repeated
> > tests once I removed PostgreSQL from the initial auto-starting
> > services.
> > 
> > I tweaked PostgreSQL 9.3.x (the newest from the repository) to add
> > a 0.5 or 3.0 second delay after each subprocess starts, and the
> > hang was gone. This test patch is attached. It is only implemented
> > for Windows. Also, I did not use the existing pg_usleep because it
> > contains locking code (e.g. WaitForSingleObject and
> > Enter/LeaveCriticalSection).
> > 
> > Although Windows itself may have some problems, I think we should
> > have a means to avoid them. Could PostgreSQL accept such delay code
> > as a build-time option controlled by preprocessor variables?
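
For context, the shape of the tweak is roughly the following (an
illustrative sketch only, not the attached patch; PG_SUBPROCESS_START_DELAY_MS
stands in for a hypothetical build-time preprocessor variable):

/* Sketch: delay after starting each subprocess, enabled only when the
 * build defines PG_SUBPROCESS_START_DELAY_MS.  A busy-wait on
 * GetTickCount() is used so that no kernel synchronization objects
 * (WaitForSingleObject, critical sections) are involved. */
#ifdef PG_SUBPROCESS_START_DELAY_MS
#include <windows.h>

static void
delay_after_subprocess_start(void)
{
    DWORD start = GetTickCount();

    /* unsigned arithmetic stays correct across GetTickCount() wraparound */
    while (GetTickCount() - start < PG_SUBPROCESS_START_DELAY_MS)
        ;                       /* spin */
}
#endif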




Re: plperl and plperlu language extensions

2018-06-29 Thread Niles Oien
Thanks for responding!

I installed Postgres 10 from the PG repos. I believe the system came with
Postgres 9.2 on it but I never used that. Postgres 10 is first in the path.



On Thu, Jun 28, 2018 at 7:03 AM, Adrian Klaver 
wrote:

> On 06/27/2018 08:27 AM, Niles Oien wrote:
>
>>
>> Hi,
>>
>> I am running postgres 10.4 on CentOS 7.5. I am having trouble getting the
>> plperl language extension going:
>>
>> netdrms02.nispdc.nso.edu:5432 
>> postgres@nsocu=# CREATE EXTENSION plperl;
>> ERROR:  58P01: could not open extension control file
>> "/usr/pgsql-10/share/extension/plperl.control": No such file or directory
>> LOCATION:  parse_extension_control_file, extension.c:494
>>
>> Sure enough, that file is not there:
>>
>> [production@netdrms02 ~]$ ls /usr/pgsql-10/share/extension/
>> plpgsql--1.0.sql  plpgsql.control  plpgsql--unpackaged--1.0.sql
>>
>>
>> Yet plperl would seem to be installed:
>>
>> # yum list | grep postgres | grep perl
>> postgresql-plperl.x86_64      9.2.23-3.el7_4        base
>> postgresql10-plperl.x86_64    10.4-1PGDG.rhel7      pgdg10
>>
>
> So where did you get the packages to install Postgres itself?:
>
> a) The CentOS repos
>
> b) The PGDG repos
>
>
>
>> I need to do this for both plperl and plperlu. I think there is perhaps
>> something I don't have installed, but I can't tell what. I apologize if
>> this is obvious; the language extensions are not something I deal with
>> regularly.
>>
>> If it helps, here is the result of "select version();" :
>> PostgreSQL 10.4 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
>> 20150623 (Red Hat 4.8.5-28), 64-bit
>>
>> Thanks for considering this,
>>
>> Niles.
>>
>>
>> --
>> Niles Oien, National Solar Observatory, Boulder Colorado USA
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@aklaver.com
>



-- 
Niles Oien, National Solar Observatory, Boulder Colorado USA


Re: Database name with semicolon

2018-06-29 Thread joby.john@nccgroup.trust
Pavel Stehule wrote
> 2018-06-28 12:10 GMT+02:00: please, can you try '"db; name"' .. double
> quotes nested in apostrophes

I tried this with no luck. I also tried the other way, apostrophe nested in
double quotes, with the same results. Looks like an issue in the ODBC driver,
as it's not handling a semicolon within the database name. We'll probably
need to go back to the client and ask them to rename the database.
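
For the record, the attempted connection strings looked roughly like this
(paraphrased; the driver, server and credentials here are placeholders):

Driver={PostgreSQL Unicode};Server=localhost;Port=5432;Database='"db; name"';Uid=user;Pwd=secret;
Driver={PostgreSQL Unicode};Server=localhost;Port=5432;Database="'db; name'";Uid=user;Pwd=secret;

Neither quoting style stopped the driver from splitting the string at the
embedded semicolon.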




pgloader question - postgis support

2018-06-29 Thread Brent Wood
Hi,


I'm looking at pgloader to automate data loading into a Postgis enabled 
Postgres database.


I have seen in the tutorial how the internal point type is supported, and how 
the postgis extension can be pre-installed by pgloader if necessary, but I 
can't see how I might take x & y values & save as a postgis point. I guess I 
could do this via a trigger, but that is hopefully an unnecessary workaround.


If I read data in (say from CSV):

1,tan0104,1,173.567,-43.678
...

to a Postgis table:

create table station
(id         int primary key,
 trip       char(7),
 station_no int,
 lon_s      decimal(7,4),
 lat_s      decimal(6,4),
 startp     geometry(POINT,4326));



The startp column is populated by the SQL:

startp = ST_SetSRID(ST_MakePoint(lon_s, lat_s), 4326)

This creates a point feature from the x (lon) & y (lat) coordinates, and
specifies the coordinate reference system as lat/lon degrees WGS84
(EPSG:4326) to match the column specification.


How can I implement that in the pgloader command to load the CSV file?



Thanks


Brent Wood

Programme leader: Environmental Information Delivery
NIWA
DDI:  +64 (4) 3860529



Brent Wood
Principal Technician - GIS and Spatial Data Management
Programme Leader - Environmental Information Delivery
+64-4-386-0529 | 301 Evans Bay Parade, Greta Point, Wellington | 
www.niwa.co.nz





Re: pgloader question - postgis support

2018-06-29 Thread Dimitri Fontaine
Hi Brent,

Yes I think it's possible, simply playing with the TARGET COLUMNS clause of the 
pgloader command. Would you mind opening an issue on Github, where I track bug 
fixes and user requests, so that our conversation is then publicly archived and 
available to other PostGIS and pgloader users?
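
Off the top of my head, a starting point might look like the following
(an untested sketch; the file name and connection string are placeholders,
and the AFTER LOAD DO block is one way to derive startp if it can't be
computed directly in the target column list):

LOAD CSV
     FROM 'stations.csv'
     INTO postgresql:///mydb?station
          (id, trip, station_no, lon_s, lat_s)

     WITH fields terminated by ','

    AFTER LOAD DO
    $$ UPDATE station
          SET startp = ST_SetSRID(ST_MakePoint(lon_s, lat_s), 4326)
        WHERE startp IS NULL; $$;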

Thanks,
-- 
  Dimitri Fontaine
  d...@tapoueh.org

On Fri, Jun 29, 2018, at 04:33, Brent Wood wrote:
> Hi,
>
> I'm looking at pgloader to automate data loading into a Postgis enabled
> Postgres database.
>
> [...]
>
> How can I implement that in the pgloader command to load the CSV file?
>
> Thanks
>
> Brent Wood



Re: plperl and plperlu language extensions

2018-06-29 Thread Niles Oien
I got it working, though I found it a bit odd. Although "yum list"
showed postgresql10-plperl.x86_64 as being installed, in fact:

yum install postgresql10-plperl

went ahead as if it was not installed, and I was now able to create the
language extensions.
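
In hindsight the likely explanation is that plain "yum list" shows
available packages as well as installed ones, so the pgdg10 line was the
available package, not an installed one. Something like this distinguishes
the two:

rpm -q postgresql10-plperl          # queries the RPM database directly
yum list installed | grep plperl    # restricts yum to installed packages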

Thanks, all,

Niles.


On Thu, Jun 28, 2018 at 8:42 AM, Niles Oien  wrote:

>
> Thanks for responding!
>
> I installed Postgres 10 from the PG repos. I believe the system came with
> Postgres 9.2 on it but I never used that. Postgres 10 is first in the path.
>
> [...]



-- 
Niles Oien, National Solar Observatory, Boulder Colorado USA


Re: Database name with semicolon

2018-06-29 Thread Gavin Flower

On 29/06/18 04:22, joby.john@nccgroup.trust wrote:


Pavel Stehule wrote
2018-06-28 12:10 GMT+02:00: please, can you try '"db; name"' ..
double quotes nested in apostrophes

I tried this with no luck. Also tried the other way, apostrophe nested 
in double quotes with same results. Looks like an issue in ODBC driver 
as it's not handling semicolon within database name. Probably needs to 
go back to the client asking to rename the database.




Reminds me of:
    https://xkcd.com/327/
'Little Bobby Tables'




Re: Code of Conduct committee: call for volunteers

2018-06-29 Thread Magnus Hagander
On Wed, Jun 27, 2018 at 11:44 AM, ERR ORR  wrote:

> [ clear attempt at trolling ]
>
>
This sort of verbiage has never been considered acceptable on the
PostgreSQL lists. Since the CoC is not yet in effect, it falls on the core
team to enforce community norms. Be advised that if you post something like
this again, you will be banned immediately and permanently from the lists.

-- 
Magnus Hagander
PostgreSQL Core Team


Re: Analyze plan of foreign data wrapper

2018-06-29 Thread Mathieu PUJOL
Hi,
I understand that I should also use varno to check which table is
referenced by varattno. In case of Join, aggregation, etc. Sometimes I get
a number or INNER_VAR or OUTER_VAR.
I am lost on how i could resolve this.
I understand that OUTER_VAR/INNER_VAR are related to joins sub plans. Is
outer related to left plan and inner to right plan ? In this case varattno
is index of target list of subplan ?
When varno is an index how to retrieve table info ?
Regards
Mathieu


On Thu, Jun 28, 2018 at 23:17, Laurenz Albe  wrote:

> Mathieu PUJOL wrote:
> > I'm writing a foreign data wrapper. To avoid returning data for a
> > column that is not used, I parse the 'targetlist' and 'qual' of the Plan.
> > I'm able to find Var nodes, but I can't figure out how I could know if
> > this node is related to a column of my foreign table.
>
> For a Var v, v->varattno contains the attribute number of the column.
> That is the same as the attnum column in pg_attribute.
>
> If v->varattno == 0, it is a whole-row reference, like in
>
>SELECT mytab FROM mytab;
>
> Yours,
> Laurenz Albe
> --
> Cybertec | https://www.cybertec-postgresql.com
>
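
For a plain relation reference this resolves mechanically; here is a minimal
sketch (my own illustration, assuming the usual planner context an FDW gets,
e.g. the PlannerInfo *root passed to GetForeignPlan):

/* Sketch: resolve a Var to its base relation and column.  Headers:
 * nodes/primnodes.h, nodes/relation.h, parser/parsetree.h. */
if (!IS_SPECIAL_VARNO(var->varno))
{
    /* varno indexes the range table; fetch the entry */
    RangeTblEntry *rte = planner_rt_fetch(var->varno, root);

    if (rte->rtekind == RTE_RELATION)
    {
        Oid        relid = rte->relid;     /* the table's OID */
        AttrNumber attno = var->varattno;  /* pg_attribute.attnum; 0 = whole row */
    }
}
else
{
    /* INNER_VAR/OUTER_VAR: varattno is a 1-based index into the inner or
     * outer subplan's target list; look up that tlist entry and recurse
     * into its expression to reach the underlying base-relation Var. */
}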


CSV export bug?

2018-06-29 Thread Tracy Babiasz
Hi there. New pgAdmin user hoping someone out there might help me figure out 
why I can't export a particular report.

I've been running a few basic SQL scripts in pgAdmin (pre 1.5) successfully
and had no trouble exporting them as CSV files, until one report, which is a
bit larger than the others (the results include about 300 rows and 8 columns).
The report runs fine, but I can't export it to save my life. We did upgrade me
to 3.1, which is at least allowing me to copy the results and paste them into
Excel. But I'd still love to figure out why I can't export as I can with others.

I read a bit about this perhaps being a bug in early versions that was 
resolved, but since I'm now on the latest version and seeing the same thing, 
either the bug wasn't resolved or it's not the same problem.

The export process seems to work, but the resulting file is only about 1KB and 
contains 1 of 2 possible statements, depending on whether or not I commented 
out some info about the initial drop tables.

If the comments are in, I get:

ERROR:  syntax error at or near "DROP"
LINE 1: ...OR /* Drop the temp table if it already exists */ DROP TABLE...

If the comments are removed, I get:

ERROR:  syntax error at or near "DROP"
LINE 1: ...LARE "CURSOR:CONN:372417" CURSOR WITHOUT HOLD FOR DROP TABLE...


The beginning of this particular report looks like this:

/* Drop the temp table if it already exists */
DROP TABLE IF EXISTS tmpholds;

/* This query is run first and creates the temp table, populating it with
   bibs or items that have holds */
CREATE TEMP TABLE tmpholds AS
SELECT b.id AS "bib_id",
       COUNT(DISTINCT h.id) AS "hold_count",
       CASE
           WHEN COUNT(DISTINCT i.id) IS NULL THEN 0
           ELSE COUNT(DISTINCT i.id)
       END AS "item_count",
       COUNT(DISTINCT ia.id) AS "avail_item_count",
       MAX(o1.order_count) AS "order_count",
       STRING_AGG(DISTINCT i.location_code, ' ' ORDER BY i.location_code desc) AS ...

...and so on and so forth.

The report RUNS fine, I just can't export the results, and based on these 
statements, my best guess is that it's about the drop table or the comments 
about the drop table (though I've tried removing the comments).

Anybody know if this is part of a current bug or if it can be worked around?  
Many thanks for your help!


Tracy Babiasz
Acquisitions and Collections Manager
Chapel Hill Public Library
100 Library Drive   Chapel Hill, NC 27514
Phone: 919-969-2032 |Fax: 919-968-2838
tbabi...@townofchapelhill.org
Sparking Curiosity. Inspiring Learning. Creating Connections.





Re: CSV export bug?

2018-06-29 Thread Adrian Klaver

On 06/29/2018 07:58 AM, Tracy Babiasz wrote:
Hi there. New pgAdmin user hoping someone out there might help me figure 
out why I can’t export a particular report.


I’ve been running a few basic SQL scrips in pgAdmin pre 1.5 successfully 
and had no trouble exporting them as CSV files. Until one report, which 
is a bit larger than the others (the results include about 300 rows and 
8 columns). The report runs fine, but I can’t export it to save my life. 


Not understanding, are you doing two steps?:

1) Create/run report

2) Export the data from the report.


It will help to see the exact steps you are taking in pgAdmin. In other 
words what parts of pgAdmin are you using and what parameters are you 
supplying it?



[...]

--
Adrian Klaver
adrian.kla...@aklaver.com



RE: CSV export bug?

2018-06-29 Thread Tracy Babiasz
Hi Adrian. Thanks for the response. I'm not sure how else to explain it. The 
report executes fine in the query tool and gives me about 300 lines in data 
output. It even appears to export fine. I click on the download button and open 
in Excel, but I get the statements I copied below. The statement is slightly 
different if I remove the comment statements within the script. If, instead of 
opening in Excel, I save the file and give it a name, I have a csv file to 
open. But the file is only 1KB, and if you open it, you see the statements I 
copied below, rather than the results of the report. 

Does that make sense?



Tracy Babiasz
Acquisitions and Collections Manager
Chapel Hill Public Library
100 Library Drive   Chapel Hill, NC 27514
Phone: 919-969-2032 |Fax: 919-968-2838
tbabi...@townofchapelhill.org
Sparking Curiosity. Inspiring Learning. Creating Connections.


-Original Message-
From: Adrian Klaver [mailto:adrian.kla...@aklaver.com] 
Sent: Friday, June 29, 2018 12:15 PM
To: Tracy Babiasz ; 
pgsql-general@lists.postgresql.org
Subject: Re: CSV export bug?

On 06/29/2018 07:58 AM, Tracy Babiasz wrote:
> Hi there. New pgAdmin user hoping someone out there might help me 
> figure out why I can't export a particular report.
> 
> I've been running a few basic SQL scrips in pgAdmin pre 1.5 
> successfully and had no trouble exporting them as CSV files. Until one 
> report, which is a bit larger than the others (the results include 
> about 300 rows and
> 8 columns). The report runs fine, but I can't export it to save my life. 

Not understanding, are you doing two steps?:

1) Create/run report

2) Export the data from the report.


It will help to see the exact steps you are taking in pgAdmin. In other words 
what parts of pgAdmin are you using and what parameters are you supplying it?


> We did upgrade me to 3.1, which is at least allowing me to copy the 
> results and paste them into Excel. But I'd still love to figure out 
> why I can't export as I can with others.
> 
> I read a bit about this perhaps being a bug in early versions that was 
> resolved, but since I'm now on the latest version and seeing the same 
> thing, either the bug wasn't resolved or it's not the same problem.
> 
> The export process seems to work, but the resulting file is only about 
> 1KB and contains 1 of 2 possible statements, depending on whether or 
> not I commented out some info about the initial drop tables.
> 
> if the comments are in, I get:
> 
> ERROR:  syntax error at or near DROP"
> 
>   
> 
> LINE 1: ...OR /* Drop the temp table if it already exists */ DROP TABLE...
> 
>   
> 
>   ^
> 
>   
> 
> if the comments are removed, I get:
> 
> ERROR:  syntax error at or near DROP"
> 
>   
> 
> LINE 1: ...LARE "CURSOR:CONN:372417" CURSOR WITHOUT HOLD FOR DROP TABLE...
> 
>   ^
> 
>   
> 
> The beginning of this particular report looks like this:
> 
> /* Drop the temp table if it already exists */ DROP TABLE IF EXISTS 
> tmpholds;
> 
> /* This is query is run first and create the temp table populating 
> with bibs or items that have holds */ CREATE TEMP TABLE tmpholds AS 
> SELECT b.id AS "bib_id",
> 
>      COUNT(DISTINCT h.id) AS "hold_count",
> 
>      CASE
> 
>      WHEN COUNT(DISTINCT i.id) IS NULL 
> THEN 0
> 
>      ELSE COUNT(DISTINCT i.id)
> 
>      END
> 
>      AS "item_count",
> 
>      COUNT(DISTINCT ia.id) AS 
> "avail_item_count",
> 
>      MAX(o1.order_count) AS "order_count",
> 
>      STRING_AGG(DISTINCT i.location_code, ' ' ORDER BY 
> i.location_code desc) AS .and so on and so forth.
> 
> The report RUNS fine, I just can't export the results, and based on 
> these statements, my best guess is that it's about the drop table or 
> the comments about the drop table (though I've tried removing the comments).
> 
> Anybody know if this is part of a current bug or if it can be worked 
> around?  Many thanks for your help!
> 
> __
> 
> email
> 
>   
> 
> *Tracy Babiasz*
> Acquisitions and Collections Manager
> Chapel Hill Public Library
> 100 Library Drive   Chapel Hill, NC 27514
> 
> Phone: 919-969-2032 |Fax: 919-968-2838
> 
> tbabi...@townofchapelhill.org 
> 
> /Sparking Curiosity. Inspiring Learning. Creating Connections.///
> 


--
Adrian Klaver
adrian.kla...@aklaver.com



Re: CSV export bug?

2018-06-29 Thread Adrian Klaver

On 06/29/2018 09:29 AM, Tracy Babiasz wrote:

Hi Adrian. Thanks for the response. I'm not sure how else to explain it. The 
report executes fine in the query tool and gives me about 300 lines in data 
output. It even appears to export fine. I click on the download button and open 
in Excel, but I get the statements I copied below. The statement is slightly 
different if I remove the comment statements within the script. If, instead of 
opening in Excel, I save the file and give it a name, I have a csv file to 
open. But the file is only 1KB, and if you open it, you see the statements I 
copied below, rather than the results of the report.

Does that make sense?



Well I can replicate it so, yes.

Using pgAdmin4 3.1.

My 'report':

DROP TABLE IF EXISTS tmp_test;
CREATE table tmp_test(id integer, fld_1 varchar);
select * from cell_per;

The error I get when doing the export:

postgres-2018-06-29 12:39:16.349 PDT-0ERROR:  syntax error at or near 
"DROP" at character 55
postgres-2018-06-29 12:39:16.349 PDT-0STATEMENT:  DECLARE 
"CURSOR:CONN:8564986" CURSOR WITHOUT HOLD FOR DROP TABLE IF EXISTS tmp_test;

CREATE table tmp_test(id integer, fld_1 varchar);
select * from cell_per;

Looks like pgAdmin4 is running all the queries using a declared cursor
(https://www.postgresql.org/docs/10/static/sql-declare.html). This fails
because a CURSOR can only take a SELECT or VALUES as the query; DROP TABLE
will not work within a CURSOR.
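
A minimal illustration of the restriction (run inside a transaction block,
since DECLARE requires one; "foo" is just a stand-in table name):

BEGIN;
DECLARE c1 CURSOR WITHOUT HOLD FOR SELECT 1;        -- accepted
DECLARE c2 CURSOR WITHOUT HOLD FOR DROP TABLE foo;  -- syntax error
ROLLBACK;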


To me it looks like a bug. I would file an issue here:

https://redmine.postgresql.org/projects/pgadmin4

You will need a Postgres community account to access the issue tracker. 
If you do not have one there will be instructions on how to set one up 
when you click on the link above.


For now what seems to work is to open another tab and then open another 
Query Tool. In that Tool do the DROP TABLE/CREATE TABLE and then in the 
other Tool do the SELECT and export from there.










--
Adrian Klaver
adrian.kla...@aklaver.com



Re: CSV export bug?

2018-06-29 Thread Adrian Klaver

On 06/29/2018 12:53 PM, Adrian Klaver wrote:

On 06/29/2018 09:29 AM, Tracy Babiasz wrote:
[...]

Well I can replicate it so, yes.

Using pgAdmin4 3.1.

My 'report':

DROP TABLE IF EXISTS tmp_test;
CREATE table tmp_test(id integer, fld_1 varchar);
select * from cell_per;

[...]

For now what seems to work is to open another tab and then open another
Query Tool. In that Tool do the DROP TABLE/CREATE TABLE and then in the
other Tool do the SELECT and export from there.


After I posted the above I wondered how that could be as they are 
different sessions and we are dealing with a temporary table. Then I 
realized my example did not use a temporary table. So if you are going 
to use the above then you will need to use a 'permanent' table.
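
In other words, something like this (a sketch with made-up names; the point
is that the table created in the first Query Tool must not be TEMP, so that
the second session can see it):

-- Query Tool #1: set-up, run once
DROP TABLE IF EXISTS holds_report;
CREATE TABLE holds_report AS
SELECT ...;    -- the body of the original report query

-- Query Tool #2: a single SELECT, which the CSV export can handle
SELECT * FROM holds_report;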















--
Adrian Klaver
adrian.kla...@aklaver.com



dumping only table definitions

2018-06-29 Thread Kevin Brannen
I'm trying to figure out how to dump only the table definitions, well those and 
things they need directly, like sequences & types. What I do NOT want are all 
the millions (not literally but it feels like it :)) of functions we have. 
Triggers would be all right if I must, as we only have a few of those.

I've tried various combinations of args to pg_dump, with -s being what I'd like 
to work, but I still get all the functions.

I suppose I could filter it with Perl or Awk, but that could be tricky. In 
fact, the best my searching could find is to do:

 pg_dump -s databasename | awk 'RS="";/CREATE TABLE[^;]*;/'

which fails in some interesting ways.

I could dump the schema (with functions), load it into another DB, and
programmatically drop all the functions before dumping that with pg_dump,
but again, why should I have to?

Is there any Pg tool that gives me just the table defs or am I going to have to 
write my own?

Thanks,
Kevin


Re: dumping only table definitions

2018-06-29 Thread Alvaro Herrera
On 2018-Jun-29, Kevin Brannen wrote:

> I'm trying to figure out how to dump only the table definitions, well those 
> and things they need directly, like sequences & types. What I do NOT want are 
> all the millions (not literally but it feels like it :)) of functions we 
> have. Triggers would be all right if I must, as we only have a few of those.

Try "pg_dump -Fc" followed by pg_restore -l.  You can edit the list
emitted there, then use it with pg_restore -L.
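
Something along these lines (a sketch; database and file names are
placeholders):

pg_dump -Fc -s mydb > mydb.dump
pg_restore -l mydb.dump > mydb.list
grep -v ' FUNCTION ' mydb.list > mydb-nofunc.list    # drop the function entries
pg_restore -L mydb-nofunc.list -f schema.sql mydb.dump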

-- 
Álvaro Herrera                https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services



Re: dumping only table definitions

2018-06-29 Thread Tom Lane
Alvaro Herrera  writes:
> On 2018-Jun-29, Kevin Brannen wrote:
>> I'm trying to figure out how to dump only the table definitions, well those 
>> and things they need directly, like sequences & types. What I do NOT want 
>> are all the millions (not literally but it feels like it :)) of functions we 
>> have. Triggers would be all right if I must, as we only have a few of those.

> Try "pg_dump -Fc" followed by pg_restore -l.  You can edit the list
> emitted there, then use it with pg_restore -L.

I think something involving "--tables '*.*'" might work, too.
Be careful about shell-command quoting.
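
That is, something like this (untested; the single quotes keep the shell
from expanding the pattern, and since -t/--table selects only the matching
relations, the functions are left out):

pg_dump -s --table='*.*' mydb > tables.sql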

regards, tom lane



Re: dumping only table definitions

2018-06-29 Thread Melvin Davidson
On Fri, Jun 29, 2018 at 6:30 PM, Kevin Brannen  wrote:

> I'm trying to figure out how to dump only the table definitions, well
> those and things they need directly, like sequences & types. What I do NOT
> want are all the millions (not literally but it feels like it :)) of
> functions we have. Triggers would be all right if I must, as we only have a
> few of those.
>
> [...]
>
> Is there any Pg tool that gives me just the table defs or am I going to
> have to write my own?

> What I do NOT want are all the millions (not literally but it feels like
> it :)) of functions we have.

It sounds like you would best be served by installing pg_extractor. In
essence, it is a super flexible version of pg_dump. You have your choice of
exactly what you want to dump.

https://github.com/omniti-labs/pg_extractor
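
Invocation is roughly of this shape (going from memory of the project's
README, so treat the exact option names as assumptions and check the
script's --help output):

pg_extractor --host=localhost --dbname=mydb --gettables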

-- 
*Melvin Davidson*
*Maj. Database & Exploration Specialist*
*Universe Exploration Command – UXC*
Employment by invitation only!


RE: dumping only table definitions

2018-06-29 Thread Kevin Brannen
On Fri, Jun 29, 2018 at 6:30 PM, Kevin Brannen <kbran...@efji.com> wrote:
I’m trying to figure out how to dump only the table definitions, well those and 
things they need directly, like sequences & types. What I do NOT want are all 
the millions (not literally but it feels like it ☺) of functions we have. 
Triggers would be all right if I must, as we only have a few of those.

Melvin wrote:

> > What I do NOT want are all the millions (not literally but it feels like it 
> > ☺) of functions we have.

> It sounds like you would best be served by installing pg_extractor. In 
> essence, it is a super flexible version of pg_dump. You have your choice of 
> exactly what you want to dump.

> https://github.com/omniti-labs/pg_extractor
Thanks Melvin, that looks like it would do what I want. If it doesn’t, I’ll try 
Alvaro’s and Tom’s suggestions.

Kevin


Re: About "Cost-based Vacuum Delay"

2018-06-29 Thread Jeff Janes
On Wed, Jun 27, 2018 at 3:19 AM, Laurenz Albe 
wrote:

> Ilyeop Yi wrote:
> > Currently, I am working with a workload that is mostly insert and
> > update, and its performance suffers from autovacuum.
>

Do you know what about the autovacuum causes the performance drop?  Is it
the reading, the writing, or the steady stream of fsync calls?  Or the CPU
load, or something else?


> >
> > I've adjusted parameters such as vacuum_cost_delay and
> > vacuum_cost_limit, but they have no significant effect.
>

vacuum_cost_delay has no effect on autovacuum,
unless autovacuum_vacuum_cost_delay is set to -1 (which is not the default
setting for it)

I think that any adjustment you make there will not take effect in the
middle of an existing table vacuuming, anyway, as the autovacuum worker
only checks for SIGHUP between tables.


> >
> > So, I would like to find a way to pause a running vacuum during bursty
> > insert/update periods and resume the vacuum after that period.
> >
> > Is there such a way?
>

You can use the OS tools.  For example, on linux you could use "kill
-SIGSTOP <pid>", and then "kill -SIGCONT <pid>".  This is not a
recommendation for use in production systems, as there is a small chance
this could cause a stuck spinlock and thus crash the db server.  Or a stuck
LWLOCK, which would cause other process to block unexpectedly and
indefinitely.  And if neither of those happen but you forget to do the
SIGCONT, lots of havoc would be created.  It might be safer to use SIGTSTP?
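
Concretely, the sequence would look something like this (same caveats as
above; the worker's pid comes from pg_stat_activity):

-- in psql: find the autovacuum worker
SELECT pid, query FROM pg_stat_activity WHERE query LIKE 'autovacuum:%';

# in a shell on the database host
kill -STOP <pid>    # pause the worker
kill -CONT <pid>    # resume it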

The best solution for preventing the problem from recurring might be just
to manually vacuum the largest tables at a time of your choosing, so that
they will not be likely to become due for autovacuum at the "wrong" time.
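
For example, one possible shape, using cron and vacuumdb (table names are
made up):

# crontab entry: vacuum and analyze the big tables nightly at 03:00
0 3 * * * vacuumdb --dbname=mydb --table=big_table1 --table=big_table2 --analyze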


> Please keep the list copied.
>
> You can do
>
>ALTER TABLE mytab SET (autovacuum_enabled = off);
>

But you would have to kill the autovacuum or wait for it to finish the
table naturally before it would take effect.  And the problem might not be
with one particular table being vacuumed.

Cheers,

Jeff