Re: What to do when dynamic shared memory control segment is corrupt

2018-06-19 Thread Alban Hertroys


> On 18 Jun 2018, at 17:34, Sherrylyn Branchaw  wrote:
> 
> In the other case, the logs recorded
> 
> LOG:  all server processes terminated; reinitializing
> LOG:  dynamic shared memory control segment is corrupt
> LOG:  incomplete data in "postmaster.pid": found only 1 newlines while trying 
> to add line 7
> 
> In that case, the database did not restart on its own. It was 5 am on Sunday, 
> so the on-call SRE just manually started the database up, and it appears to 
> have been running fine since.

That rings a bell. Some versions of systemd apparently clean up shared memory 
belonging to a user when it detects the user logs out. ISTR that we had to 
disable that on our CentOS 7 server to stop crashes from happening.

More details here: https://wiki.postgresql.org/wiki/Systemd
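
The setting that page recommends changing is RemoveIPC; a minimal sketch of the change (in /etc/systemd/logind.conf), assuming a setup like ours:

[Login]
RemoveIPC=no

followed by a restart of systemd-logind.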

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Re: PostgreSQL Volume Question

2018-06-19 Thread Ron

On 06/15/2018 11:26 AM, Data Ace wrote:


Well, I think my question strayed from my intention because of my poor 
understanding and phrasing :(


Actually, I have 1TB of data and hardware specs sufficient to handle that 
amount, but the problem is that the workload needs too many join operations 
and the analysis process is going too slow right now.


I've searched and found that the graph model fits network data, such as 
social data, nicely in terms of query performance.




If your data is hierarchical, then storing it in a network database is 
perfectly reasonable.  I'm not sure, though, that there are many network 
databases for Linux.  Raima is the only one I can think of.


Should I change my DB (I mean my DB for analysis), or do I need some other 
solution or extension?



Thanks



--
Angular momentum makes the world go 'round.


using pg_basebackup for point in time recovery

2018-06-19 Thread Pierre Timmermans
Hi,

I find the documentation about pg_basebackup misleading: it states that 
standalone hot backups cannot be used for point-in-time recovery, but I don't 
get the point. If one has a combination of the nightly pg_basebackup and the 
archived WALs, then it is totally OK to do point-in-time recovery, I assume? 
(Of course the recovery.conf must be manually changed to set the 
restore_command and the recovery target time.)

Here is the doc; the sentence that I find misleading is "These are backups 
that cannot be used for point-in-time recovery". Also, mentioning that they 
are faster than pg_dump dumps adds to the confusion (since pg_dump dumps 
cannot be used for PITR anyway).

Doc: https://www.postgresql.org/docs/current/static/continuous-archiving.html

It is possible to use PostgreSQL's backup facilities to produce standalone hot 
backups. These are backups that cannot be used for point-in-time recovery, yet 
are typically much faster to backup and restore than pg_dump dumps. (They are 
also much larger than pg_dump dumps, so in some cases the speed advantage might 
be negated.)
As with base backups, the easiest way to produce a standalone hot backup is to 
use the pg_basebackup tool. If you include the -X parameter when calling it, 
all the write-ahead log required to use the backup will be included in the 
backup automatically, and no special action is required to restore the backup.
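
For what it's worth, the point-in-time part then only needs a couple of lines 
in recovery.conf; a minimal sketch, with the archive path and target timestamp 
as placeholders:

restore_command = 'cp /mnt/server/archivedir/%f "%p"'
recovery_target_time = '2018-06-19 03:00:00'
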
Thanks and regards,

Pierre 


Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Alban Hertroys
Hi all,

I'm struggling with a hierarchical query where I'm tasked to calculate weights 
of items in an (exploded) Bill of Materials, based on the weights of their 
components. Not all components are measured with a weight, sometimes there are 
pieces, meters, areas, etc, and the hierarchy is of varying levels of depth.

It would help if I could track a sum() throughout the explosion that would 
write back onto parent rows when the recursion returns: postorder traversal.

I created a simplified example about making pizza:

CREATE TABLE ingredient (
name text NOT NULL
);

CREATE TABLE recipe (
name text NOT NULL,
ingredient text NOT NULL,
quantity numeric(6,2) NOT NULL,
unit text NOT NULL,
step integer NOT NULL
);

COPY ingredient (name) FROM stdin;
tomato
basil
salt
tomato sauce
flour
water
yeast
dough
pizza bottom
pizza
\.

COPY recipe (name, ingredient, quantity, unit, step) FROM stdin;
tomato sauce	tomato	100.00	g	1
dough	flour	150.00	g	1
tomato sauce	basil	10.00	g	2
pizza	pizza bottom	1.00	pcs	2
tomato sauce	salt	3.00	g	3
dough	salt	1.00	pinch	3
pizza	tomato sauce	1.00	pcs	1
pizza bottom	dough	1.00	pcs	2
dough	water	50.00	g	2
\.

ALTER TABLE ONLY ingredient
ADD CONSTRAINT ingredient_pkey PRIMARY KEY (name);

ALTER TABLE ONLY recipe
ADD CONSTRAINT recipe_pkey PRIMARY KEY (name, ingredient);

ALTER TABLE ONLY recipe
ADD CONSTRAINT recipe_ingredient_fkey FOREIGN KEY (ingredient) REFERENCES 
ingredient(name);

ALTER TABLE ONLY recipe
ADD CONSTRAINT recipe_name_fkey FOREIGN KEY (name) REFERENCES 
ingredient(name);


A query listing the recipe for 'pizza' would be as follows:
development=> with recursive pizza (name, step, ingredient, quantity, unit, 
rel_qty, path, weight)
as (
select
name, step, ingredient, quantity, unit
,   quantity::numeric(10,2)
,   step::text
,   case when unit = 'g' then quantity::numeric(10,2) else null end
  from recipe
 where name = 'pizza'
union all
select
recipe.name, recipe.step, recipe.ingredient, recipe.quantity, 
recipe.unit
,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
,   pizza.path || '.' || recipe.step
,   case when recipe.unit = 'g' then (pizza.rel_qty * 
recipe.quantity)::numeric(10,2) else null end
  from pizza
  join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, quantity, rel_qty, unit, weight
  from pizza
 order by path;

 path  |  ingredient  | quantity | rel_qty | unit  | weight 
---+--+--+-+---+
 1 | tomato sauce | 1.00 |1.00 | pcs   |   
 1.1   | tomato   |   100.00 |  100.00 | g | 100.00
 1.2   | basil|10.00 |   10.00 | g |  10.00
 1.3   | salt | 3.00 |3.00 | g |   3.00
 2 | pizza bottom | 1.00 |1.00 | pcs   |   
 2.2   | dough| 1.00 |1.00 | pcs   |   
 2.2.1 | flour|   150.00 |  150.00 | g | 150.00
 2.2.2 | water|50.00 |   50.00 | g |  50.00
 2.2.3 | salt | 1.00 |1.00 | pinch |   
(9 rows)


With these results, I somehow need to calculate that the weights of 'tomato 
sauce', 'dough' and 'pizza bottom' are 113 g, 200 g and 200 g respectively, 
bringing the total weight of 'pizza' to 313 g.

My first thought was to traverse the result of this recursive CTE using another 
one, but in the opposite direction. But since this tends to be kept as a 
temporary materialized result set with no indices, that's not performing great 
and it adds a fair amount of complexity to the query too.

Then I realised that if we somehow could track the sum() of 'weight' throughout 
exploding these recipe items, by using a postorder tree traversal, the desired 
result would be readily available to pick up when the recursive CTE travels up 
through the hierarchy.

In the above example: when the CTE reaches '1.3 salt', it would write the 
summed 'weight' value 113 back onto the result for '1 tomato sauce', and when 
it reaches '2.2.3 salt' it would write back 200 to '2.2 dough' and then 200 to 
'2 pizza bottom'.

Is that possible?

I've seen a couple of "solutions" on the internet that just summed up the 
results of the CTE, but that won't do as it would not put the correct weights 
onto intermediate levels of the tree as far as I can see (in above, the weight 
of 'dough').


Regards,

Alban Hertroys


PS. Don't try to make pizza using this recipe, it probably won't succeed. I 
forgot the yeast, for one thing, and quantities are probably way off. Not to 
mention that there are probably more ingredients missing…

PS2. In my real case the ingredients have a base quantity and unit, which makes 
adjusting to relative quantities actually viable. Those aren't necessary to 
descri

Find schema-qualified table name given unqualified name

2018-06-19 Thread Aron Widforss
Hi,

I'm interested in finding a (any) unique identifier for a table given an 
unqualified name of the table. Is that feasible in SQL or a C function?

Regards,
Aron Widforss



Re: Find schema-qualified table name given unqualified name

2018-06-19 Thread Tom Lane
Aron Widforss  writes:
> I'm interested in finding a (any) unique identifier for a table given an 
> unqualified name of the table. Is that feasible in SQL or a C function?

Something like this might help you:

  select relnamespace::regnamespace from pg_class
  where oid = 'mytablename'::regclass;

This will fail (not just return an empty set) if 'mytablename' isn't
resolvable, so you might need some hackery to cope with that.  Also,
if you need it to work pre-9.5, you'll need to write an explicit
join to pg_namespace instead of relying on regnamespace.
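
A minimal sketch of that explicit join, for older releases, could be:

	select n.nspname as schema_name, c.relname as table_name
	  from pg_class c
	  join pg_namespace n on n.oid = c.relnamespace
	 where c.oid = 'mytablename'::regclass;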

regards, tom lane



Drop Default Privileges?

2018-06-19 Thread Louis Battuello
Is it possible to drop default privileges?

I’m attempting to run a pg_restore into an RDS instance, which doesn’t have a 
“postgres” user.

I encounter many messages like so:

ALTER DEFAULT PRIVILEGES...
pg_restore: [archiver (db)] Error from TOC entry 10182; 826 253752252 DEFAULT 
ACL DEFAULT PRIVILEGES FOR TABLES postgres
pg_restore: [archiver (db)] could not execute query: ERROR:  role "postgres" 
does not exist
Command was: ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA 
abc_schema REVOKE ALL ON TABLES  FROM PUBLIC;

I’d like to remove these default privileges on the source database to avoid 
this error message, but I can’t find the syntax in the documentation (or if 
it’s possible). I only see GRANT/REVOKE options.

Thanks,
Louis


Re: Drop Default Privileges?

2018-06-19 Thread David G. Johnston
On Tuesday, June 19, 2018, Louis Battuello 
wrote:

> Is it possible to drop default privileges
>

 https://www.postgresql.org/docs/10/static/sql-alterdefaultprivileges.html ?

David J.


Re: Drop Default Privileges?

2018-06-19 Thread Fabio Pardi
Hi Louis,

I think 'alter user' can do the job for you.

https://www.postgresql.org/docs/current/static/sql-alteruser.html

Else, as an alternative: before running pg_restore, you could edit the dump and 
replace the string 'ROLE postgres' with the correct user on the RDS instance.

regards,

fabio pardi




On 19/06/18 17:20, Louis Battuello wrote:
> Is it possible to drop default privileges?
>
> I’m attempting to run a pg_restore into an RDS instance, which doesn’t have a 
> “postgres” user.
>
> I encounter many messages like so:
>
> ALTER DEFAULT PRIVILEGES...
>
> pg_restore: [archiver (db)] Error from TOC entry 10182; 826 253752252 
>  DEFAULT ACL DEFAULT PRIVILEGES FOR TABLES postgres
>
> pg_restore: [archiver (db)] could not execute query: ERROR:  role "postgres" 
> does not exist
>
>     Command was: ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA 
> abc_schema REVOKE ALL ON TABLES  FROM PUBLIC;
>
>
> I’d like to remove these default privileges on the source database to avoid 
> this error message, but I can’t find the syntax in the documentation (or if 
> it’s possible). I only see GRANT/REVOKE options.
>
> Thanks,
> Louis



Re: What to do when dynamic shared memory control segment is corrupt

2018-06-19 Thread Sherrylyn Branchaw
> Yeah, I'd like to know that too.  The complaint about corrupt shared
> memory may be just an unrelated red herring, or it might be a separate
> effect of whatever the primary failure was ... but I think it was likely
> not the direct cause of the failure-to-restart.
>
> Anyway, I would not be afraid to try restarting the postmaster manually
> if it died.  Maybe don't do that repeatedly without human intervention;
> but PG is pretty robust against crashes.  We developers crash it all the
> time, and we don't lose data.

Understood, and thanks. I was basing my concern on a message in the mailing
lists that suggested that postgres might fail to start up in the event of a
corrupted memory segment. I would link to the message directly, but I keep
getting backend server error messages when I try to search for it today. At
any rate, it looked like there was a chance that it was a deliberate design
choice, and I didn't want to ignore it if so. It's good to know that this
is not the case.

I realize that you're most focused on less-downtime, but from my
perspective it'd be good to worry about collecting evidence as to
what happened exactly.

Absolutely. I would love to know why this is happening too. However, our
priorities have been set in part by a very tight deadline handed down from
the C-levels to migrate to Aurora, so we have to focus our energies
accordingly. I will be back with core files if this happens again before
we're completely migrated over. Meanwhile, thank you for assuring me we
have no current data corruption and that it's safe to restart next time
without taking additional action to avoid or detect corruption.

Best,
Sherrylyn


Re: Failed rpm package signature checks with reposync

2018-06-19 Thread Bruno Lavoie
Hi Devrim,

It works now.

Thanks

On Mon, Jun 18, 2018 at 9:22 AM, Devrim Gündüz  wrote:

>
> Hi Bruno,
>
> On Mon, 2018-06-18 at 08:31 -0400, Bruno Lavoie wrote:
> >
> > Sorry, me too just saw your email...
> >
> > Issue still persists:
> >
> > # reposync --repoid=pgdg10 --gpgcheck
> > --download_path=/var/www/html/centos7/repos/postgresql/
> > Repository 'base' is missing name in configuration, using id
> > Removing mysql_fdw_10-2.3.0-3.rhel7.x86_64.rpm due to failed signature
> > check.
> > Removing mysql_fdw_10-debuginfo-2.3.0-3.rhel7.x86_64.rpm due to failed
> > signature check.
> > Removing osm2pgrouting_10-2.3.3-1.rhel7.x86_64.rpm due to failed
> signature
> > check.
> > Removing osm2pgrouting_10-debuginfo-2.3.3-1.rhel7.x86_64.rpm due to
> failed
> > signature check.
>
> Ok, verified on my end as well. Fixed this, and pushed them to the repo.
> They
> will be available 1 hour later. Can you please try again then?
>
> Regards,
> --
> Devrim Gündüz
> EnterpriseDB: https://www.enterprisedb.com
> PostgreSQL Consultant, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR
>


Re: High WriteLatency RDS Postgres 9.3.20

2018-06-19 Thread Juan Manuel Cuello
On Mon, Jun 18, 2018 at 7:23 PM Andres Freund  wrote:

> > So far it's been almost two months of investigation
> > and people at AWS technical support don't seem to find the cause. I think
> > it could be related to Postgres and the number of schema/tables in the
> > database, that's why I post this issue here.
>
> There've been improvements made since 9.3. Upgrade.
>

You are right, and I'm aware of that (I'm planning a version upgrade). I just
wanted to know whether anybody knew if the number of schemas/tables could be
the cause of the high write levels, due to Postgres processes reaching some
internal limit.

Thanks.

Juan


Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Hellmuth Vargas
Hi

with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
path, weight)
as (
select
name, step, ingredient, quantity, unit
,   quantity::numeric(10,2)
,   step::text
,   case when unit = 'g' then quantity::numeric(10,2) else null
end
  from recipe
 where name = 'pizza'
union all
select
recipe.name, recipe.step, recipe.ingredient,
recipe.quantity, recipe.unit
,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
,   pizza.path || '.' || recipe.step
,   case when recipe.unit = 'g' then (pizza.rel_qty *
recipe.quantity)::numeric(10,2) else null end
  from pizza
  join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, quantity, rel_qty, unit, weight
     , sum(weight) over() as total_weight
  from pizza
 order by path;

 path  |  ingredient  | quantity | rel_qty | unit  | weight | total_weight
---+--+--+-+---++--
 1 | tomato sauce | 1.00 |1.00 | pcs   ||   313.00
 1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |   313.00
 1.2   | basil|10.00 |   10.00 | g |  10.00 |   313.00
 1.3   | salt | 3.00 |3.00 | g |   3.00 |   313.00
 2 | pizza bottom | 1.00 |1.00 | pcs   ||   313.00
 2.2   | dough| 1.00 |1.00 | pcs   ||   313.00
 2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |   313.00
 2.2.2 | water|50.00 |   50.00 | g |  50.00 |   313.00
 2.2.3 | salt | 1.00 |1.00 | pinch ||   313.00
(9 rows)





On Tue, 19 Jun 2018 at 08:39, Alban Hertroys (haram...@gmail.com) wrote:

> Hi all,
>
> I'm struggling with a hierarchical query where I'm tasked to calculate
> weights of items in an (exploded) Bill of Materials, based on the weights
> of their components. Not all components are measured with a weight,
> sometimes there are pieces, meters, areas, etc, and the hierarchy is of
> varying levels of depth.
>
> It would help if I could track a sum() throughout the explosion that would
> write back onto parent rows when the recursion returns: postorder traversal.
>
> I created a simplified example about making pizza:
>
> CREATE TABLE ingredient (
> name text NOT NULL
> );
>
> CREATE TABLE recipe (
> name text NOT NULL,
> ingredient text NOT NULL,
> quantity numeric(6,2) NOT NULL,
> unit text NOT NULL,
> step integer NOT NULL
> );
>
> COPY ingredient (name) FROM stdin;
> tomato
> basil
> salt
> tomato sauce
> flour
> water
> yeast
> dough
> pizza bottom
> pizza
> \.
>
> COPY recipe (name, ingredient, quantity, unit, step) FROM stdin;
> tomato sauce	tomato	100.00	g	1
> dough	flour	150.00	g	1
> tomato sauce	basil	10.00	g	2
> pizza	pizza bottom	1.00	pcs	2
> tomato sauce	salt	3.00	g	3
> dough	salt	1.00	pinch	3
> pizza	tomato sauce	1.00	pcs	1
> pizza bottom	dough	1.00	pcs	2
> dough	water	50.00	g	2
> \.
>
> ALTER TABLE ONLY ingredient
> ADD CONSTRAINT ingredient_pkey PRIMARY KEY (name);
>
> ALTER TABLE ONLY recipe
> ADD CONSTRAINT recipe_pkey PRIMARY KEY (name, ingredient);
>
> ALTER TABLE ONLY recipe
> ADD CONSTRAINT recipe_ingredient_fkey FOREIGN KEY (ingredient)
> REFERENCES ingredient(name);
>
> ALTER TABLE ONLY recipe
> ADD CONSTRAINT recipe_name_fkey FOREIGN KEY (name) REFERENCES
> ingredient(name);
>
>
> A query listing the recipe for 'pizza' would be as follows:
> development=> with recursive pizza (name, step, ingredient, quantity,
> unit, rel_qty, path, weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else
> null end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient,
> recipe.quantity, recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight
> ---+--+--+-+---+
>  1 | tomato sauce | 1.00 |1.00 | pcs   |
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00
>  1.3   | salt | 3.00 |3.0

Re: High WriteLatency RDS Postgres 9.3.20

2018-06-19 Thread Juan Manuel Cuello
On Tue, Jun 19, 2018 at 12:16 AM Benjamin Scherrey <
scher...@proteus-tech.com> wrote:

> I would also add that AWS' I/O capabilities are quite poor and expensive.
> I assume that you have tried purchasing additional IOOPs on that setup to
> see whether you got an expected speed up? If not you should try that as a
> diagnostic tool even if you wouldn't want to pay that on an ongoing basis.
>

I haven't tried increasing available IOPS, but looking at the metrics, I'm
far away of the limit, so it doesn't seem to be related, but I will explore
this option further.

We have a client that is I/O write bound and it has taken us significant
> efforts to get it to perform well on AWS. We definitely run our own
> instances rather than depend on RDS and have always been able to outperform
> RDS instances which seem to really be focused to provide a PAAS capability
> for developers who really don't want to have to understand how a db works.
> Running our identical environment on bare metal is like night & day under
> any circumstances when compared to AWS.
>
> Client's requirement is AWS so we keep working on it and we like AWS for
> many things but understand it will always underperform on I/O.
>
> Post actual measurements with and without IOOPs or create your own PG
> server instance and then people might be able to give you additional
> insights.
>

I'll consider your suggestions and I'll be back with more info in case I
create my own environment. I just wanted to know if the number of
schemas/tables could be the cause of the high write levels, in order to
discard this hypothesis.

Thanks


Question re: pldbgapi

2018-06-19 Thread Steven Hirsch

Hi, all.

I'm trying to debug a rather complicated function using pldbgapi in 
pgAdmin III and am running into an annoying problem.  In order to set 
breakpoints, I need to execute the query that invokes the function.  It 
then stops at the first line and allows me to set breakpoints.  However, I 
cannot figure out a way to suppress that first line break after doing so!


I have some unexpected conditions being hit and need to let it run on a 
rather large data set, breaking into the debugger ONLY when one of my 
breakpoints is reached.  Pressing the "continue" button 1.4M times does 
not seem very practical :-).


Am I missing something obvious?  Is there really no way to have it 
free-run after manually setting breakpoints?


Steve


--



Re: Drop Default Privileges?

2018-06-19 Thread Louis Battuello


> On Jun 19, 2018, at 11:38 AM, Fabio Pardi  wrote:
> 
> Hi Louis,
> 
> I think 'alter user' can do the job for you.
> 
> https://www.postgresql.org/docs/current/static/sql-alteruser.html
> 
> Else, as alternative: before running pg_restore, you could edit the dump and 
> replace the string 'ROLE postgres' with the correct user on the RDS instance.
> 
> regards,
> 
> fabio pardi
> 

Thanks for your replies, David and Fabio.

I thought about editing the dump file or attempting some sort of reassignment 
of the default privileges, but that still leaves the larger question: can 
default privileges specific to a single schema ever be removed?

If I set a default of GRANT SELECT, is my only option to change it to REVOKE 
SELECT? Is there a way to “get rid of the default privileges entry for the 
role,” as referenced in the Notes section of the ALTER DEFAULT 
PRIVILEGES documentation? Reversing the change from GRANT to REVOKE still 
leaves a catalog reference to the postgres user oid in pg_default_acl.

I don’t want to reverse the default behavior. I’d like to remove it entirely.

Thanks,
Louis

Re: Drop Default Privileges?

2018-06-19 Thread Pavan Teja
Hi Louis,

In order to remove the default privileges for any particular user/role, we
need to know the list of default privileges.

If we know them, we can revoke them as a superuser. I once tried to find
the list of default privileges but was left with no clue. Still, I strongly
believe that if we know the list of default privileges that will be
assigned, we can revoke them as a superuser.

Regards,
Pavan

On Tue, Jun 19, 2018, 11:55 PM Louis Battuello 
wrote:

>
>
> On Jun 19, 2018, at 11:38 AM, Fabio Pardi  wrote:
>
> Hi Louis,
>
> I think 'alter user' can do the job for you.
>
> https://www.postgresql.org/docs/current/static/sql-alteruser.html
>
> Else, as alternative: before running pg_restore, you could edit the dump
> and replace the string 'ROLE postgres' with the correct user on the RDS
> instance.
>
> regards,
>
> fabio pardi
>
>
> Thanks for your replies, David and Fabio.
>
> I thought about editing the dump file or attempting some sort of
> reassignment of the default privileges, but that still leaves the larger
> question: can default privileges ever be removed specific to a single
> schema?
>
> If I set a default of GRANT SELECT, is my only option to change it to
> REVOKE SELECT? Is there a way to “get rid of the default privileges entry
> for the role,” as referenced in the Notes section of the of the ALTER
> DEFAULT PRIVILEGES documentation? Reversing the change from GRANT to REVOKE
> still leaves a catalog reference to the postgres user oid in pg_default_acl.
>
> I don’t want to reverse the default behavior. I’d like to remove it
> entirely.
>
> Thanks,
> Louis
>


Re: Drop Default Privileges?

2018-06-19 Thread Victor Yegorov
Tue, 19 Jun 2018 at 21:32, Pavan Teja :

> In order to remove the default privileges for any particular user/role, we
> should know the list of default privileges.
>

`psql` allows you to check default privileges via the `\ddp` command (per
database). You can start `psql` with the `-E` switch, which will show you the
internal queries used for displaying this information, or you can `\set
ECHO_HIDDEN on` with the same effect.

Also, you can do `pg_dumpall -s | grep -E 'DEFAULT PRIVILEGE|\\connect'` and
it'll produce a list of all entries for all databases, along with the database
name.


-- 
Victor Yegorov


Re: Drop Default Privileges?

2018-06-19 Thread Victor Yegorov
Tue, 19 Jun 2018 at 18:20, Louis Battuello :

> Is it possible to drop default privileges?
>
> I’m attempting to run a pg_restore into an RDS instance, which doesn’t
> have a “postgres” user.
>
> I encounter many messages like so:
>
> ALTER DEFAULT PRIVILEGES...
>
> pg_restore: [archiver (db)] Error from TOC entry 10182; 826 253752252 DEFAULT
> ACL DEFAULT PRIVILEGES FOR TABLES postgres
>
> pg_restore: [archiver (db)] could not execute query: ERROR:  role
> "postgres" does not exist
>
> Command was: ALTER DEFAULT PRIVILEGES FOR ROLE postgres IN SCHEMA
> abc_schema REVOKE ALL ON TABLES  FROM PUBLIC;
>
> I’d like to remove these default privileges on the source database to
> avoid this error message, but I can’t find the syntax in the documentation
> (or if it’s possible). I only see GRANT/REVOKE options.
>

If you see `ALTER DEFAULT PRIVILEGES … REVOKE …` and want to undo it, you
will have to GRANT the corresponding privilege.


-- 
Victor Yegorov


Re: Drop Default Privileges?

2018-06-19 Thread David G. Johnston
On Tue, Jun 19, 2018 at 11:31 AM, Pavan Teja 
wrote:

> Once I tried finding the list of default privileges, but left with no clue.
>

Start here:

https://www.postgresql.org/docs/10/static/catalog-pg-default-acl.html
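
A quick sketch of a query against that catalog, listing the entries (including
the owning role that the restore complains about):

select pg_get_userbyid(defaclrole) as owner,
       defaclnamespace::regnamespace as schema,
       defaclobjtype,
       defaclacl
  from pg_default_acl;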

David J.


Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Hellmuth Vargas
Hi

with partial sum:




with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
path, weight)
as (
select
name, step, ingredient, quantity, unit
,   quantity::numeric(10,2)
,   step::text
,   case when unit = 'g' then quantity::numeric(10,2) else null
end
  from recipe
 where name = 'pizza'
union all
select
recipe.name, recipe.step, recipe.ingredient,
recipe.quantity, recipe.unit
,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
,   pizza.path || '.' || recipe.step
,   case when recipe.unit = 'g' then (pizza.rel_qty *
recipe.quantity)::numeric(10,2) else null end
  from pizza
  join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, quantity, rel_qty, unit, weight
     , sum(weight) over(partition by split_part(path,'.',1)) as parcial_weight
     , sum(weight) over() as total_weight
  from pizza
 order by path;

 path  |  ingredient  | quantity | rel_qty | unit  | weight | parcial_weight | total_weight
-------+--------------+----------+---------+-------+--------+----------------+--------------
 1     | tomato sauce |     1.00 |    1.00 | pcs   |        |         113.00 |       313.00
 1.1   | tomato       |   100.00 |  100.00 | g     | 100.00 |         113.00 |       313.00
 1.2   | basil        |    10.00 |   10.00 | g     |  10.00 |         113.00 |       313.00
 1.3   | salt         |     3.00 |    3.00 | g     |   3.00 |         113.00 |       313.00
 2     | pizza bottom |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
 2.2   | dough        |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
 2.2.1 | flour        |   150.00 |  150.00 | g     | 150.00 |         200.00 |       313.00
 2.2.2 | water        |    50.00 |   50.00 | g     |  50.00 |         200.00 |       313.00
 2.2.3 | salt         |     1.00 |    1.00 | pinch |        |         200.00 |       313.00
(9 rows)




On Tue, 19 Jun 2018 at 11:49, Hellmuth Vargas (hiv...@gmail.com)
wrote:

> Hi
>
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
> path, weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else
> null end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient,
> recipe.quantity, recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight, *sum(weight)
> over() as total_weight*
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | total_weight
> ---+--+--+-+---++--
>  1 | tomato sauce | 1.00 |1.00 | pcs   ||   313.00
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |   313.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00 |   313.00
>  1.3   | salt | 3.00 |3.00 | g |   3.00 |   313.00
>  2 | pizza bottom | 1.00 |1.00 | pcs   ||   313.00
>  2.2   | dough| 1.00 |1.00 | pcs   ||   313.00
>  2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |   313.00
>  2.2.2 | water|50.00 |   50.00 | g |  50.00 |   313.00
>  2.2.3 | salt | 1.00 |1.00 | pinch ||   313.00
> (9 rows)
>
>
>
>
>
> On Tue, 19 Jun 2018 at 08:39, Alban Hertroys (
> haram...@gmail.com) wrote:
>
>> Hi all,
>>
>> I'm struggling with a hierarchical query where I'm tasked to calculate
>> weights of items in an (exploded) Bill of Materials, based on the weights
>> of their components. Not all components are measured with a weight,
>> sometimes there are pieces, meters, areas, etc, and the hierarchy is of
>> varying levels of depth.
>>
>> It would help if I could track a sum() throughout the explosion that
>> would write back onto parent rows when the recursion returns: postorder
>> traversal.
>>
>> I created a simplified example about making pizza:
>>
>> CREATE TABLE ingredient (
>> name text NOT NULL
>> );
>>
>> CREATE TABLE recipe (
>> name text NOT NULL,
>> ingredient text NOT NULL,
>> quantity numeric(6,2) NOT NULL,
>> unit text NOT NULL,
>> step integer NOT NULL
>> );
>>
>> COPY ingredient (name) FROM stdin;
>> tomato
>> basil
>> salt
>> tomato sauce
>> flour
>> water
>> yeast
>> dough
>> pizza bottom
>> pizza
>> \.
>>
>> COPY recipe (name, ingre

Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Rob Sargent



On 06/19/2018 01:14 PM, Hellmuth Vargas wrote:


Hi

with partial sum:




with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path, weight)
as (
        select
                name, step, ingredient, quantity, unit
        ,       quantity::numeric(10,2)
        ,       step::text
        ,       case when unit = 'g' then quantity::numeric(10,2) else null end
          from recipe
         where name = 'pizza'
        union all
        select
                recipe.name, recipe.step, recipe.ingredient, recipe.quantity, recipe.unit
        ,       (pizza.rel_qty * recipe.quantity)::numeric(10,2)
        ,       pizza.path || '.' || recipe.step
        ,       case when recipe.unit = 'g' then (pizza.rel_qty * recipe.quantity)::numeric(10,2) else null end
          from pizza
          join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, quantity, rel_qty, unit, weight
     , sum(weight) over(partition by split_part(path,'.',1)) as parcial_weight
     , sum(weight) over() as total_weight
  from pizza
 order by path;

 path  |  ingredient  | quantity | rel_qty | unit  | weight | parcial_weight | total_weight
-------+--------------+----------+---------+-------+--------+----------------+--------------
 1     | tomato sauce |     1.00 |    1.00 | pcs   |        |         113.00 |       313.00
 1.1   | tomato       |   100.00 |  100.00 | g     | 100.00 |         113.00 |       313.00
 1.2   | basil        |    10.00 |   10.00 | g     |  10.00 |         113.00 |       313.00
 1.3   | salt         |     3.00 |    3.00 | g     |   3.00 |         113.00 |       313.00
 2     | pizza bottom |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
 2.2   | dough        |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
 2.2.1 | flour        |   150.00 |  150.00 | g     | 150.00 |         200.00 |       313.00
 2.2.2 | water        |    50.00 |   50.00 | g     |  50.00 |         200.00 |       313.00
 2.2.3 | salt         |     1.00 |    1.00 | pinch |        |         200.00 |       313.00
(9 rows)




This is gorgeous, but I suspect any level greater than 10 wide will 
present sorting problems, no?  Maybe a fixed two-digit, zero-filled 
number per level? Pushing the problem off by an order of magnitude :)

An exercise left to the OP perhaps.




Re: Run Stored procedure - function from VBA

2018-06-19 Thread Rob Sargent



On 06/18/2018 09:51 PM, Łukasz Jarych wrote:

Thank you Rob,

question is it is the optimal way to run SP from VBA?
Or not?

Best,
Jacek

2018-06-19 1:34 GMT+02:00 Rob Sargent >:





On Jun 18, 2018, at 9:47 AM, Łukasz Jarych  wrote:

Hi Guys,

i have example function :

CREATE OR REPLACE FUNCTION totalRecords ()
RETURNS integer AS $total$
declare
total integer;
BEGIN
   SELECT count(*) into total FROM COMPANY;
   RETURN total;
END;
$total$ LANGUAGE plpgsql;

and i want to run it from VBA using odbc connection.

What is the best way to use it ?

something like this:

Dim dbCon As New ADODB.Connection
Dim rst As New ADODB.Recordset
dbCon.ConnectionString = "Your connection string goes here!"
dbCon.Open
rst.Open strsql

where strsql is "Select * from totalRecords". Or is this not a
good solution?

Best,
Jacek


You need the parentheses after the function name: “select * from
totalrecords();"




Depends on the usage pattern.  I'm sure there is an ODBC construct for 
stored procedures/function, which you could build once and re-use with 
new parameter values if you're going to call this repeatedly.


Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Hellmuth Vargas
Hi

> This is gorgeous but I suspect any level greater than 10 wide will present
> sorting problems, no?

No, it should not be a problem.

> Maybe a fixed two-digit, zero filled number per level?

That should not be needed either.

> Pushing the problem off by an order of magnitude :)
> An exercise left to the OP perhaps.



On Tue, 19 Jun 2018 at 14:52, Rob Sargent (
robjsarg...@gmail.com) wrote:

>
>
> On 06/19/2018 01:14 PM, Hellmuth Vargas wrote:
>
>
> Hi
>
> with partial sum:
>
>
>
>
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty,
> path, weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else
> null end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient,
> recipe.quantity, recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty *
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight
>      , sum(weight) over(partition by split_part(path,'.',1)) as parcial_weight
>      , sum(weight) over() as total_weight
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | parcial_weight | total_weight
> -------+--------------+----------+---------+-------+--------+----------------+--------------
>  1     | tomato sauce |     1.00 |    1.00 | pcs   |        |         113.00 |       313.00
>  1.1   | tomato       |   100.00 |  100.00 | g     | 100.00 |         113.00 |       313.00
>  1.2   | basil        |    10.00 |   10.00 | g     |  10.00 |         113.00 |       313.00
>  1.3   | salt         |     3.00 |    3.00 | g     |   3.00 |         113.00 |       313.00
>  2     | pizza bottom |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
>  2.2   | dough        |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
>  2.2.1 | flour        |   150.00 |  150.00 | g     | 150.00 |         200.00 |       313.00
>  2.2.2 | water        |    50.00 |   50.00 | g     |  50.00 |         200.00 |       313.00
>  2.2.3 | salt         |     1.00 |    1.00 | pinch |        |         200.00 |       313.00
> (9 rows)
>
>
>
>
>
>

-- 
Cordialmente,

Ing. Hellmuth I. Vargas S.
Esp. Telemática y Negocios por Internet
Oracle Database 10g Administrator Certified Associate
EnterpriseDB Certified PostgreSQL 9.3 Associate


Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
In order to test a real-life scenario (and use it for benchmarking) I want to 
load a large amount of data from csv files.  
The requirement is that the load should happen like an application writing to 
the database (that is, no COPY command). 
Is there a tool which can do the job? Basically, parse the csv file and insert 
it into the database row by row.

thanks






Re: Load data from a csv file without using COPY

2018-06-19 Thread Nicolas Paris
Hi,

AFAIK you can use COPY from a JDBC command, since COPY allows streaming data
(the FROM STDIN version).
However, while faster than INSERT INTO, this might lock the target table
during the process.

2018-06-19 22:16 GMT+02:00 Ravi Krishna :

> In order to test a real life scenario (and use it for benchmarking) I want
> to load large number of data from csv files.
> The requirement is that the load should happen like an application writing
> to the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and
> insert it to the database row by row.
>
> thanks
>
>
>
>
>


Re: Load data from a csv file without using COPY

2018-06-19 Thread Hans Schou
On Tue, Jun 19, 2018 at 10:17 PM Ravi Krishna  wrote:

> In order to test a real life scenario (and use it for benchmarking) I want
> to load large number of data from csv files.
> The requirement is that the load should happen like an application writing
> to the database ( that is, no COPY command).


Once you have parsed the data it is fairly easy to use PostgreSQL's "COPY
FROM stdin" format, if you have all data with a tab separator. A
simple table (t1) could look like:

COPY t1 (f1,f2) FROM stdin;
3	Joe
7	Jane
\.

These data can be piped directly to psql and it will be fast.

Note: NULL should be '\N', see manual:
https://www.postgresql.org/docs/current/static/sql-copy.html

It is the same kind of data you get with pg_dump.

./hans


Re: Load data from a csv file without using COPY

2018-06-19 Thread David G. Johnston
On Tue, Jun 19, 2018 at 1:16 PM, Ravi Krishna  wrote:

> In order to test a real life scenario (and use it for benchmarking) I want
> to load large number of data from csv files.
> The requirement is that the load should happen like an application writing
> to the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and
> insert it to the database row by row.
>

I'm skeptical that ingesting CSV of any form, even if you intentionally
blow things up by converting it into:

BEGIN;
INSERT INTO tbl VALUES ('','','');
COMMIT;
BEGIN;
INSERT INTO tbl VALUES ('','','');
COMMIT;

(which is what auto-commit mode looks like)

is going to provide a meaningful benchmark for application-like usage
patterns.

But anyway, I'm not familiar with any tools that make doing this
particularly simple.  In most situations like this I'll just import the CSV
into a spreadsheet and create a formula that builds out the individual SQL
commands.  Whether that's useful depends a lot on how often the source CSV
is updated.

That said, I have the following tool to be generally helpful in this area -
though I'm thinking it doesn't do what you want here.

http://csvkit.readthedocs.io/en/1.0.3/scripts/csvsql.html
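
If memory serves, its database mode is invoked roughly like this (connection
string and file name are placeholders):

csvsql --db postgresql:///testdb --insert data.csv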

David J.


Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Alban Hertroys


> On 19 Jun 2018, at 21:14, Hellmuth Vargas  wrote:
> 
> 
> Hi
> 
> with partial sum:
> 
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path, 
> weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else null 
> end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient, recipe.quantity, 
> recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty * 
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight
>      , sum(weight) over(partition by split_part(path,'.',1)) as parcial_weight
>      , sum(weight) over() as total_weight
>   from pizza
>  order by path;
> 
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | parcial_weight | total_weight
> -------+--------------+----------+---------+-------+--------+----------------+--------------
>  1     | tomato sauce |     1.00 |    1.00 | pcs   |        |         113.00 |       313.00
>  1.1   | tomato       |   100.00 |  100.00 | g     | 100.00 |         113.00 |       313.00
>  1.2   | basil        |    10.00 |   10.00 | g     |  10.00 |         113.00 |       313.00
>  1.3   | salt         |     3.00 |    3.00 | g     |   3.00 |         113.00 |       313.00
>  2     | pizza bottom |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
>  2.2   | dough        |     1.00 |    1.00 | pcs   |        |         200.00 |       313.00
>  2.2.1 | flour        |   150.00 |  150.00 | g     | 150.00 |         200.00 |       313.00
>  2.2.2 | water        |    50.00 |   50.00 | g     |  50.00 |         200.00 |       313.00
>  2.2.3 | salt         |     1.00 |    1.00 | pinch |        |         200.00 |       313.00
> (9 rows)

That is certainly an interesting solution and it begs the question whether a 
text field ('path') is actually the right representation of the hierarchy (some 
type of array would seem to be a better fit). Good out-of-the-box thinking!
This is probably usable for my actual case, so thanks for that, wouldn't have 
thought of it myself (even though I already had all the right "bits" in place!).
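
For what it's worth, a quick sketch of the same query with the path kept as an 
integer[] instead of text (only the path expressions change); ordering by the 
array also keeps step 10 from sorting before step 2:

with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path, weight)
as (
    select
        name, step, ingredient, quantity, unit
    ,   quantity::numeric(10,2)
    ,   array[step]                   -- path as integer[] rather than text
    ,   case when unit = 'g' then quantity::numeric(10,2) else null end
      from recipe
     where name = 'pizza'
    union all
    select
        recipe.name, recipe.step, recipe.ingredient, recipe.quantity, recipe.unit
    ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
    ,   pizza.path || recipe.step     -- append the step number to the array
    ,   case when recipe.unit = 'g' then (pizza.rel_qty * recipe.quantity)::numeric(10,2) else null end
      from pizza
      join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, weight
     , sum(weight) over (partition by path[1]) as parcial_weight
  from pizza
 order by path;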

On the more theoretical front: The question remains whether it is possible to 
calculate fields in post-order tree traversal. I think that would be a 
semantically proper way to express this type of problem and it wouldn't need 
the kinds of pre/post-processing that after-the-fact aggregation (like in above 
solution) requires. So, leaner, and probably faster.
That implies that the SQL committee thought of the possibility in the first 
place though, which I'm beginning to doubt...


> On Tue, 19 Jun 2018 at 11:49, Hellmuth Vargas (hiv...@gmail.com) 
> wrote:
> Hi
> 
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path, 
> weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else null 
> end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient, recipe.quantity, 
> recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty * 
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight, sum(weight) over() 
> as total_weight
>   from pizza
>  order by path;
> 
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | total_weight
> ---+--+--+-+---++--
>  1 | tomato sauce | 1.00 |1.00 | pcs   ||   313.00
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |   313.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00 |   313.00
>  1.3   | salt | 3.00 |3.00 | g |   3.00 |   313.00
>  2 | pizza bottom | 1.00 |1.00 | pcs   ||   313.00
>  2.2   | dough| 1.00 |1.00 | pcs   ||   313.00
>  2.2.1 | flour|   150.00 |  150.00 | g | 150.00 |   313.00
>  2.2.2 | water|50.00 |   50.00 | g |  50.00 |   313.00
>  2.2.3 | salt | 1.00 |1.00 | pinch |

Re: Load data from a csv file without using COPY

2018-06-19 Thread Steve Atkins


> On Jun 19, 2018, at 9:16 PM, Ravi Krishna  wrote:
> 
> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.  
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command). 
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.

If performance is relevant then your app should probably be using COPY 
protocol, not line by line inserts. It's
supported by most postgresql access libraries. If your app does that then using 
"\copy" from psql would be
an appropriate benchmark.

Cheers,
  Steve




Re: Load data from a csv file without using COPY

2018-06-19 Thread Alban Hertroys


> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
> 
> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.  
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command). 
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.
> 
> thanks

I think an easy approach would be to COPY the CSV files into a separate 
database using psql's \copy command and then pg_dump that as separate insert 
statements with pg_dump --inserts.
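
A minimal sketch of that route (database and table names are just placeholders):

psql -d scratch -c "\copy t1 from 'data.csv' with (format csv, header)"
pg_dump -d scratch --table=t1 --data-only --inserts > inserts.sql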

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Re: Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
> 
> If performance is relevant then your app should probably be using COPY 
> protocol, not line by line inserts. It's
> supported by most postgresql access libraries. If your app does that then 
> using "\copy" from psql would be
> an appropriate benchmark.

Actually, the reluctance to use COPY is to keep the benchmark the same across 
two different RDBMS in
two different environments. 


Re: Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
> 
> I think an easy approach would be to COPY the CSV files into a separate 
> database using psql's \copy command and then pg_dump that as separate insert 
> statements with pg_dump --inserts.
> 

This was my first thought too.  However, as I understand it, pg_dump --inserts 
basically runs an INSERT INTO ... statement for every row.  
In other words, each row is un-prepared and executed individually.  That is 
also not a real-life scenario.




Re: Load data from a csv file without using COPY

2018-06-19 Thread Steve Atkins


> On Jun 19, 2018, at 10:14 PM, Ravi Krishna  wrote:
> 
>> 
>> If performance is relevant then your app should probably be using COPY 
>> protocol, not line by line inserts. It's
>> supported by most postgresql access libraries. If your app does that then 
>> using "\copy" from psql would be
>> an appropriate benchmark.
> 
> Actually the reluctance to not use COPY is to make the benchmark same across 
> two different RDBMS in
> two diff env.

That's something I'd only do if I intended to rig a benchmark between an RDBMS 
with good bulk import
and one without. If that's not your goal, your approach doesn't seem to make 
sense and is unlikely
to provide performance metrics that are useful or related to your app 
performance, unless you intend
to hamstring your app in exactly the same way you're running the benchmark.

Maybe use your app, or write ten minutes worth of code that'll interact with 
the database in much the
same way as your app will?

Cheers,
  Steve




Re: Load data from a csv file without using COPY

2018-06-19 Thread Rob Sargent




On 06/19/2018 03:14 PM, Ravi Krishna wrote:

If performance is relevant then your app should probably be using COPY 
protocol, not line by line inserts. It's
supported by most postgresql access libraries. If your app does that then using 
"\copy" from psql would be
an appropriate benchmark.

Actually the reluctance to not use COPY is to make the benchmark same across 
two different RDBMS in
two diff env.
If bulk loading is the actual production target, all your RDBMS choices 
have their own loaders.  I suggest that is what you ought to compare.




Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Paul Jungwirth

On 06/19/2018 02:05 PM, Alban Hertroys wrote:

On the more theoretical front: The question remains whether it is possible to 
calculate fields in post-order tree traversal. I think that would be a 
semantically proper way to express this type of problem and it wouldn't need 
the kinds of pre/post-processing that after-the-fact aggregation (like in above 
solution) requires. So, leaner, and probably faster.
That implies that the SQL committee thought of the possibility in the first 
place though, which I'm beginning to doubt...


If this interests you, you might enjoy this StackOverflow question:

https://stackoverflow.com/questions/35956486/generate-nested-json-with-couting-in-postgresql

Briefly, how do you construct a nested JSON structure from a recursive 
CTE? The only answers at that link rely on plpgsql, but of course that 
is cheating. :-) I took a stab at it a couple years ago but couldn't 
figure it out, and it seemed like post-order processing was exactly the 
missing piece.


If anyone has any ideas I'd be intrigued to hear them!

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Load data from a csv file without using COPY

2018-06-19 Thread David G. Johnston
On Tue, Jun 19, 2018 at 2:17 PM, Ravi Krishna  wrote:

> >
> > I think an easy approach would be to COPY the CSV files into a separate
> database using psql's \copy command and then pg_dump that as separate
> insert statements with pg_dump —inserts.
> >
>
> This was my first thought too.  However, as I understand, pg_dump --insert
> basically runs INSERT INTO ... sql for every row.
> In other words, each row is un-prepared and executed individually.  That
> is also not real life scenario.
>

You really need to describe what you consider to be a "real life
scenario", and probably give a better idea of how these csv files are created
and how many there are, in addition to describing the relevant behavior of the
application you are testing.

If you want maximum realism you should probably write integration tests for
your application and then execute those at high volume.

Or at minimum give an example of the output you would want from this
unknown program...

David J.


Re: Load data from a csv file without using COPY

2018-06-19 Thread Tim Cross


Ravi Krishna  writes:

> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.  
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command). 
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.
>

Not clear what you mean by 'real world scenario', but you could possibly
use PG's foreign table support and define a csv file as a foreign table
and then have scripts which read from there and do whatever
insert/update etc you need. However, this has a high level of 'fakery'
going on and probably not testing what you really want.
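
A rough sketch of that foreign-table route with file_fdw (table, columns and
path are made up):

CREATE EXTENSION file_fdw;
CREATE SERVER csv_files FOREIGN DATA WRAPPER file_fdw;
CREATE FOREIGN TABLE staging_rows (col1 text, col2 text)
    SERVER csv_files
    OPTIONS (filename '/path/to/data.csv', format 'csv', header 'true');

INSERT INTO target_table SELECT * FROM staging_rows;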

There are lots of ways that applications write to the database -
different drivers (e.g. jdbc, odbc, pg etc), different commit
and transaction strategies and even different ways to handle things like
an update or insert process. You can even use streams and copy from an
application.

To get 'real world' equivalence, you really need to use the same
interface as the application you are comparing. Most languages have
support for processing CSV files, so you may be better off writing a
small 'wrapper' app which uses the same drivers and assuming your
database connectivity has been abstracted into some sort of
module/library/class, use the same interface to write to the database
that the application uses. 

Tim
-- 
Tim Cross



Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Asif Ali
How the fuck do I unsubscribe from this mailing list? I get more than 100 emails a 
day.

Bye


From: Paul Jungwirth 
Sent: Wednesday, June 20, 2018 2:31 AM
To: pgsql-general@lists.postgresql.org
Subject: Re: Is postorder tree traversal possible with recursive CTE's?

On 06/19/2018 02:05 PM, Alban Hertroys wrote:
> On the more theoretical front: The question remains whether it is possible to 
> calculate fields in post-order tree traversal. I think that would be a 
> semantically proper way to express this type of problem and it wouldn't need 
> the kinds of pre/post-processing that after-the-fact aggregation (like in 
> above solution) requires. So, leaner, and probably faster.
> That implies that the SQL committee thought of the possibility in the first 
> place though, which I'm beginning to doubt...

If this interests you, you might enjoy this StackOverflow question:

https://stackoverflow.com/questions/35956486/generate-nested-json-with-couting-in-postgresql

Briefly, how do you construct a nested JSON structure from a recursive
CTE? The only answers at that link rely on plpgsql, but of course that
is cheating. :-) I took a stab at it a couple years ago but couldn't
figure it out, and it seemed like post-order processing was exactly the
missing piece.

If anyone has any ideas I'd be intrigued to hear them!

--
Paul  ~{:-)
p...@illuminatedcomputing.com



Re: Load data from a csv file without using COPY

2018-06-19 Thread Asif Ali


How the fuck do I unsubscribe from this mailing list? I get more than 100 emails a 
day.


From: Tim Cross 
Sent: Wednesday, June 20, 2018 2:59 AM
To: Ravi Krishna
Cc: PG mailing List
Subject: Re: Load data from a csv file without using COPY


Ravi Krishna  writes:

> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.
>

Not clear what you mean by 'real world scenario', but you could possibly
use PG's foreign table support and define a csv file as a foreign table
and then have scripts which read from there and do whatever
insert/update etc you need. However, this has a high level of 'fakery'
going on and probably not testing what you really want.

There are lots of ways that applications write to the database -
different drivers (e.g. jdbc, odbc, pg etc), different commit
and transaction strategies and even different ways to handle things like
an update or insert process. You can even use streams and copy from an
application.

To get 'real world' equivalence, you really need to use the same
interface as the application you are comparing. Most languages have
support for processing CSV files, so you may be better off writing a
small 'wrapper' app which uses the same drivers and assuming your
database connectivity has been abstracted into some sort of
module/library/class, use the same interface to write to the database
that the application uses.

Tim
--
Tim Cross



Is there a way to be notified on the CREATE TABLE execution?

2018-06-19 Thread Igor Korot
Hi, ALL,
Consider a scenario:

1. A software that uses libpq is executing.
2. Someone opens up a terminal and creates a table.
3. A software needs to know about this new table.

I presume this is DBMS-specific...

Thank you.
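
For PostgreSQL specifically, one way to get such a notification is an event
trigger that NOTIFYs a channel the libpq application LISTENs on. A rough
sketch (assumes 9.5+ for pg_event_trigger_ddl_commands; names are made up):

create or replace function notify_new_table() returns event_trigger
language plpgsql as $$
declare
    r record;
begin
    -- report every CREATE TABLE that just completed on the 'ddl_events' channel
    for r in select * from pg_event_trigger_ddl_commands() loop
        if r.command_tag = 'CREATE TABLE' then
            perform pg_notify('ddl_events', r.object_identity);
        end if;
    end loop;
end;
$$;

create event trigger table_created
    on ddl_command_end
    when tag in ('CREATE TABLE')
    execute procedure notify_new_table();

The application then runs LISTEN ddl_events; on its connection and checks for
notifications (PQnotifies in libpq).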



Re: Load data from a csv file without using COPY

2018-06-19 Thread Asif Ali
How the fuck do I unsubscribe from this mailing list? I get more than 100 emails a 
day.

Bye


From: Alban Hertroys 
Sent: Wednesday, June 20, 2018 2:10 AM
To: Ravi Krishna
Cc: PG mailing List
Subject: Re: Load data from a csv file without using COPY


> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
>
> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.
>
> thanks

I think an easy approach would be to COPY the CSV files into a separate 
database using psql's \copy command and then pg_dump that as separate insert 
statements with pg_dump --inserts.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.




Re: Is postorder tree traversal possible with recursive CTE's?

2018-06-19 Thread Asif Ali
how the fuck i unsubscribe to this mailing list , i get more than 100 emails a 
day

Bye


From: Alban Hertroys 
Sent: Wednesday, June 20, 2018 2:05 AM
To: Hellmuth Vargas
Cc: pgsql-gene...@postgresql.org
Subject: Re: Is postorder tree traversal possible with recursive CTE's?


> On 19 Jun 2018, at 21:14, Hellmuth Vargas  wrote:
>
>
> Hi
>
> with partial sum:
>
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path, 
> weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else null 
> end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient, recipe.quantity, 
> recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty * 
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight,sum(weight) 
> over(partition by split_part(path,'.',1)) as parcial_weight, sum(weight) 
> over() as total_weight
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | parcial_weight 
> | total_weight
> ---+--+--+-+---+++--
>  1 | tomato sauce | 1.00 |1.00 | pcs   || 113.00 
> |   313.00
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 | 113.00 
> |   313.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00 | 113.00 
> |   313.00
>  1.3   | salt | 3.00 |3.00 | g |   3.00 | 113.00 
> |   313.00
>  2 | pizza bottom | 1.00 |1.00 | pcs   || 200.00 
> |   313.00
>  2.2   | dough| 1.00 |1.00 | pcs   || 200.00 
> |   313.00
>  2.2.1 | flour|   150.00 |  150.00 | g | 150.00 | 200.00 
> |   313.00
>  2.2.2 | water|50.00 |   50.00 | g |  50.00 | 200.00 
> |   313.00
>  2.2.3 | salt | 1.00 |1.00 | pinch || 200.00 
> |   313.00
> (9 rows)

That is certainly an interesting solution, and it raises the question of whether a 
text field ('path') is actually the right representation of the hierarchy (some 
type of array would seem to be a better fit). Good out-of-the-box thinking!
This is probably usable for my actual case, so thanks for that; I wouldn't have 
thought of it myself (even though I already had all the right "bits" in place!).

On the more theoretical front: The question remains whether it is possible to 
calculate fields in post-order tree traversal. I think that would be a 
semantically proper way to express this type of problem and it wouldn't need 
the kinds of pre/post-processing that after-the-fact aggregation (like in above 
solution) requires. So, leaner, and probably faster.
That implies that the SQL committee thought of the possibility in the first 
place though, which I'm beginning to doubt...
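
Coming back to the array idea: a minimal sketch (illustration only, assuming
the same recipe table as in the quoted query and an integer step column) of
what an integer-array path could look like. Arrays compare element-wise, so
ORDER BY path keeps the steps in numeric order even past step 9, where a text
path would sort '10' before '2':

with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path)
as (
    select name, step, ingredient, quantity, unit
    ,      quantity::numeric(10,2)
    ,      array[step]
      from recipe
     where name = 'pizza'
    union all
    select recipe.name, recipe.step, recipe.ingredient, recipe.quantity, recipe.unit
    ,      (pizza.rel_qty * recipe.quantity)::numeric(10,2)
    ,      pizza.path || recipe.step
      from pizza
      join recipe on (recipe.name = pizza.ingredient)
)
select path, ingredient, rel_qty, unit
  from pizza
 order by path;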


> On Tue, 19 Jun 2018 at 11:49, Hellmuth Vargas (hiv...@gmail.com) wrote:
> Hi
>
> with recursive pizza (name, step, ingredient, quantity, unit, rel_qty, path, 
> weight)
> as (
> select
> name, step, ingredient, quantity, unit
> ,   quantity::numeric(10,2)
> ,   step::text
> ,   case when unit = 'g' then quantity::numeric(10,2) else null 
> end
>   from recipe
>  where name = 'pizza'
> union all
> select
> recipe.name, recipe.step, recipe.ingredient, recipe.quantity, 
> recipe.unit
> ,   (pizza.rel_qty * recipe.quantity)::numeric(10,2)
> ,   pizza.path || '.' || recipe.step
> ,   case when recipe.unit = 'g' then (pizza.rel_qty * 
> recipe.quantity)::numeric(10,2) else null end
>   from pizza
>   join recipe on (recipe.name = pizza.ingredient)
> )
> select path, ingredient, quantity, rel_qty, unit, weight, sum(weight) over() 
> as total_weight
>   from pizza
>  order by path;
>
>  path  |  ingredient  | quantity | rel_qty | unit  | weight | total_weight
> ---+--+--+-+---++--
>  1 | tomato sauce | 1.00 |1.00 | pcs   ||   313.00
>  1.1   | tomato   |   100.00 |  100.00 | g | 100.00 |   313.00
>  1.2   | basil|10.00 |   10.00 | g |  10.00 |   313.00
>  1.3   | salt | 3.00 |3.00 | g |   3.00 |   313.00
>  2 | pizza bottom | 1.00 |1.00 | pcs   || 

Re: Run Stored procedure - function from VBA

2018-06-19 Thread Asif Ali
how the fuck i unsubscribe to this mailing list , i get more than 100 emails a 
day

Bye


From: Rob Sargent 
Sent: Wednesday, June 20, 2018 12:54 AM
To: Łukasz Jarych
Cc: pgsql-gene...@postgresql.org
Subject: Re: Run Stored procedure - function from VBA



On 06/18/2018 09:51 PM, Łukasz Jarych wrote:
Thank you Rob,

question is it is the optimal way to run SP from VBA?
Or not?

Best,
Jacek

2018-06-19 1:34 GMT+02:00 Rob Sargent 
mailto:robjsarg...@gmail.com>>:


On Jun 18, 2018, at 9:47 AM, Łukasz Jarych 
mailto:jarys...@gmail.com>> wrote:

Hi Guys,

i have example function :

CREATE OR REPLACE FUNCTION totalRecords ()
RETURNS integer AS $total$
declare
total integer;
BEGIN
   SELECT count(*) into total FROM COMPANY;
   RETURN total;
END;
$total$ LANGUAGE plpgsql;

and i want to run it from VBA using odbc connection.

What is the best way to use it ?

something like this:


Dim dbCon as new ADODB.Connection
Dim rst as new ADODB.Recordset

Dbcon.connectionstring=”Your connection string goes here!”
Dbcon.open

Rst.open strsql

where strsql is "Select * from totalRecords" or this is not a good solution?

Best,
Jacek

You need the parentheses after the function name: “select * from 
totalrecords();"



Depends on the usage pattern.  I'm sure there is an ODBC construct for stored 
procedures/function, which you could build once and re-use with new parameter 
values if you're going to call this repeatedly.


Re: Load data from a csv file without using COPY

2018-06-19 Thread James Keener
Seriously, stop spamming the list and stop cursing and acting like a petulant 
child. Go to the site and unsubscribe or use a mail client that understands the 
standard list headers.

On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:
>how the fuck i unsubscribe to this mailing list , i get more than 100
>emails a day
>
>Bye
>
>
>From: Alban Hertroys 
>Sent: Wednesday, June 20, 2018 2:10 AM
>To: Ravi Krishna
>Cc: PG mailing List
>Subject: Re: Load data from a csv file without using COPY
>
>
>> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
>>
>> In order to test a real life scenario (and use it for benchmarking) I
>want to load large number of data from csv files.
>> The requirement is that the load should happen like an application
>writing to the database ( that is, no COPY command).
>> Is there a tool which can do the job.  Basically parse the csv file
>and insert it to the database row by row.
>>
>> thanks
>
>I think an easy approach would be to COPY the CSV files into a separate
>database using psql's \copy command and then pg_dump that as separate
>insert statements with pg_dump --inserts.
>
>Alban Hertroys
>--
>If you can't see the forest for the trees,
>cut the trees and you'll find there is no forest.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Load data from a csv file without using COPY

2018-06-19 Thread Asif Ali
please just tell me the site i will do it right away and i have marked it 
junked so many times , i will keep spamming it until my email address is 
removed from the list

Bye


From: James Keener 
Sent: Wednesday, June 20, 2018 3:11 AM
To: pgsql-general@lists.postgresql.org; Asif Ali; Alban Hertroys; Ravi Krishna
Cc: PG mailing List
Subject: Re: Load data from a csv file without using COPY

Seriously, stop spamming the list and stop cursing and acting like a petulant 
child. Go to the site and unsubscribe or use a mail client that understands the 
standard list headers.

On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:
how the fuck i unsubscribe to this mailing list , i get more than 100 emails a 
day

Bye


From: Alban Hertroys 
Sent: Wednesday, June 20, 2018 2:10 AM
To: Ravi Krishna
Cc: PG mailing List
Subject: Re: Load data from a csv file without using COPY


> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
>
> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.
>
> thanks

I think an easy approach would be to COPY the CSV files into a separate 
database using psql's \copy command and then pg_dump that as separate insert 
statements with pg_dump --inserts.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: Is there a way to be notified on the CREATE TABLE execution?

2018-06-19 Thread David G. Johnston
On Tuesday, June 19, 2018, Igor Korot  wrote:

> Hi, ALL,
> Consider a scenario:
>
> 1. A software that uses libpq is executing.
> 2. Someone opens up a terminal and creates a table.
> 3. A software needs to know about this new table.
>

I'd start here:

 https://www.postgresql.org/docs/10/static/sql-createeventtrigger.html

Your scenario suggests you may wish to avail yourself of the Listen and
Notify commands as well.

David J.
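
To make that concrete, a minimal sketch combining the two (the trigger,
function and channel names here are made up): an event trigger fires at
ddl_command_end for CREATE TABLE and sends a notification that any session
which has run LISTEN new_table (e.g. a libpq client polling PQnotifies()) will
receive.

-- Notify listeners on channel 'new_table' whenever a table is created.
CREATE OR REPLACE FUNCTION notify_new_table() RETURNS event_trigger
LANGUAGE plpgsql AS $$
DECLARE
    obj record;
BEGIN
    FOR obj IN
        SELECT * FROM pg_event_trigger_ddl_commands()
         WHERE command_tag = 'CREATE TABLE'
    LOOP
        PERFORM pg_notify('new_table', obj.object_identity);
    END LOOP;
END;
$$;

CREATE EVENT TRIGGER table_created
    ON ddl_command_end
    WHEN TAG IN ('CREATE TABLE')
    EXECUTE PROCEDURE notify_new_table();

-- The interested session simply runs:  LISTEN new_table;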


Re: Load data from a csv file without using COPY

2018-06-19 Thread David G. Johnston
https://lists.postgresql.org/unsubscribe/


On Tuesday, June 19, 2018, Asif Ali  wrote:

> please just tell me the site i will do it right away and i have marked it
> junked so many times , i will keep spamming it until my email address is
> removed from the list
>
> Bye
>
> --
> *From:* James Keener 
> *Sent:* Wednesday, June 20, 2018 3:11 AM
> *To:* pgsql-general@lists.postgresql.org; Asif Ali; Alban Hertroys; Ravi
> Krishna
> *Cc:* PG mailing List
> *Subject:* Re: Load data from a csv file without using COPY
>
> Seriously, stop spamming the list and stop cursing and acting like a
> petulant child. Go to the site and unsubscribe or use a mail client that
> understands the standard list headers.
>
> On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:
>
> how the fuck i unsubscribe to this mailing list , i get more than 100
> emails a day
>
> Bye
>
> --
> *From:* Alban Hertroys 
> *Sent:* Wednesday, June 20, 2018 2:10 AM
> *To:* Ravi Krishna
> *Cc:* PG mailing List
> *Subject:* Re: Load data from a csv file without using COPY
>
>
> > On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
> >
> > In order to test a real life scenario (and use it for benchmarking) I
> want to load large number of data from csv files.
> > The requirement is that the load should happen like an application
> writing to the database ( that is, no COPY command).
> > Is there a tool which can do the job.  Basically parse the csv file and
> insert it to the database row by row.
> >
> > thanks
>
> I think an easy approach would be to COPY the CSV files into a separate
> database using psql's \copy command and then pg_dump that as separate
> insert statements with pg_dump --inserts.
>
> Alban Hertroys
> --
> If you can't see the forest for the trees,
> cut the trees and you'll find there is no forest.
>
>
>
> --
> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>


Re: Load data from a csv file without using COPY

2018-06-19 Thread James Keener
It's people like you who make spam filters worse for the rest of us to the 
point they need to be checked daily for false positives. I'm sure you could 
have found it in less time than it took to spam the list with obscenities.

On June 19, 2018 6:13:49 PM EDT, Asif Ali  wrote:
>please just tell me the site i will do it right away and i have marked
>it junked so many times , i will keep spamming it until my email
>address is removed from the list
>
>Bye
>
>
>From: James Keener 
>Sent: Wednesday, June 20, 2018 3:11 AM
>To: pgsql-general@lists.postgresql.org; Asif Ali; Alban Hertroys; Ravi
>Krishna
>Cc: PG mailing List
>Subject: Re: Load data from a csv file without using COPY
>
>Seriously, stop spamming the list and stop cursing and acting like a
>petulant child. Go to the site and unsubscribe or use a mail client
>that understands the standard list headers.
>
>On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:
>how the fuck i unsubscribe to this mailing list , i get more than 100
>emails a day
>
>Bye
>
>
>From: Alban Hertroys 
>Sent: Wednesday, June 20, 2018 2:10 AM
>To: Ravi Krishna
>Cc: PG mailing List
>Subject: Re: Load data from a csv file without using COPY
>
>
>> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
>>
>> In order to test a real life scenario (and use it for benchmarking) I
>want to load large number of data from csv files.
>> The requirement is that the load should happen like an application
>writing to the database ( that is, no COPY command).
>> Is there a tool which can do the job.  Basically parse the csv file
>and insert it to the database row by row.
>>
>> thanks
>
>I think an easy approach would be to COPY the CSV files into a separate
>database using psql's \copy command and then pg_dump that as separate
>insert statements with pg_dump --inserts.
>
>Alban Hertroys
>--
>If you can't see the forest for the trees,
>cut the trees and you'll find there is no forest.
>
>
>
>--
>Sent from my Android device with K-9 Mail. Please excuse my brevity.

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Load data from a csv file without using COPY

2018-06-19 Thread Adrian Klaver

On 06/19/2018 03:13 PM, Asif Ali wrote:
please just tell me the site i will do it right away and i have marked 
it junked so many times , i will keep spamming it until my email address 
is removed from the list


https://lists.postgresql.org/unsubscribe/


Bye


*From:* James Keener 
*Sent:* Wednesday, June 20, 2018 3:11 AM
*To:* pgsql-general@lists.postgresql.org; Asif Ali; Alban Hertroys; Ravi 
Krishna

*Cc:* PG mailing List
*Subject:* Re: Load data from a csv file without using COPY
Seriously, stop spamming the list and stop cursing and acting like a 
petulant child. Go to the site and unsubscribe or use a mail client that 
understands the standard list headers.


On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:

how the fuck i unsubscribe to this mailing list , i get more than
100 emails a day

Bye


*From:* Alban Hertroys 
*Sent:* Wednesday, June 20, 2018 2:10 AM
*To:* Ravi Krishna
*Cc:* PG mailing List
*Subject:* Re: Load data from a csv file without using COPY

> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
> 
> In order to test a real life scenario (and use it for benchmarking) I want to load large number of data from csv files. 
> The requirement is that the load should happen like an application writing to the database ( that is, no COPY command).

> Is there a tool which can do the job.  Basically parse the csv file and 
insert it to the database row by row.
> 
> thanks


I think an easy approach would be to COPY the CSV files into a
separate database using psql's \copy command and then pg_dump that
as separate insert statements with pg_dump --inserts.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Load data from a csv file without using COPY

2018-06-19 Thread Asif Ali
just tell me the site , i dont have time to waste on shitty things , i will 
program a spammer to send email to this list

Bye


From: James Keener 
Sent: Wednesday, June 20, 2018 3:16 AM
To: Asif Ali; pgsql-general@lists.postgresql.org; Alban Hertroys; Ravi Krishna
Subject: Re: Load data from a csv file without using COPY

It's people like you who make spam filters worse for the rest of us to the 
point they need to be checked daily for false positives. I'm sure you could 
have found it in less time than it took to spam the list with obscenities.

On June 19, 2018 6:13:49 PM EDT, Asif Ali  wrote:
please just tell me the site i will do it right away and i have marked it 
junked so many times , i will keep spamming it until my email address is 
removed from the list

Bye


From: James Keener 
Sent: Wednesday, June 20, 2018 3:11 AM
To: pgsql-general@lists.postgresql.org; Asif Ali; Alban Hertroys; Ravi Krishna
Cc: PG mailing List
Subject: Re: Load data from a csv file without using COPY

Seriously, stop spamming the list and stop cursing and acting like a petulant 
child. Go to the site and unsubscribe or use a mail client that understands the 
standard list headers.

On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:
how the fuck i unsubscribe to this mailing list , i get more than 100 emails a 
day

Bye


From: Alban Hertroys 
Sent: Wednesday, June 20, 2018 2:10 AM
To: Ravi Krishna
Cc: PG mailing List
Subject: Re: Load data from a csv file without using COPY


> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
>
> In order to test a real life scenario (and use it for benchmarking) I want to 
> load large number of data from csv files.
> The requirement is that the load should happen like an application writing to 
> the database ( that is, no COPY command).
> Is there a tool which can do the job.  Basically parse the csv file and 
> insert it to the database row by row.
>
> thanks

I think an easy approach would be to COPY the CSV files into a separate 
database using psql's \copy command and then pg_dump that as separate insert 
statements with pg_dump --inserts.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



--
Sent from my Android device with K-9 Mail. Please excuse my brevity.


Re: Load data from a csv file without using COPY

2018-06-19 Thread Adrian Klaver

On 06/19/2018 03:18 PM, Asif Ali wrote:
just tell me the site , i dont have time to waste on shitty things , i 
will program a spammer to send email to this list


So why subscribe in the first place?



Bye


*From:* James Keener 
*Sent:* Wednesday, June 20, 2018 3:16 AM
*To:* Asif Ali; pgsql-general@lists.postgresql.org; Alban Hertroys; Ravi 
Krishna

*Subject:* Re: Load data from a csv file without using COPY
It's people like you who make spam filters worse for the rest of us to 
the point they need to be checked daily for false positives. I'm sure 
you could have found it in less time than it took to spam the list with 
obscenities.


On June 19, 2018 6:13:49 PM EDT, Asif Ali  wrote:

please just tell me the site i will do it right away and i have
marked it junked so many times , i will keep spamming it until my
email address is removed from the list

Bye


*From:* James Keener 
*Sent:* Wednesday, June 20, 2018 3:11 AM
*To:* pgsql-general@lists.postgresql.org; Asif Ali; Alban Hertroys;
Ravi Krishna
*Cc:* PG mailing List
*Subject:* Re: Load data from a csv file without using COPY
Seriously, stop spamming the list and stop cursing and acting like a
petulant child. Go to the site and unsubscribe or use a mail client
that understands the standard list headers.

On June 19, 2018 6:06:59 PM EDT, Asif Ali  wrote:

how the fuck i unsubscribe to this mailing list , i get more
than 100 emails a day

Bye


*From:* Alban Hertroys 
*Sent:* Wednesday, June 20, 2018 2:10 AM
*To:* Ravi Krishna
*Cc:* PG mailing List
*Subject:* Re: Load data from a csv file without using COPY

> On 19 Jun 2018, at 22:16, Ravi Krishna  wrote:
> 
> In order to test a real life scenario (and use it for benchmarking) I want to load large number of data from csv files. 
> The requirement is that the load should happen like an application writing to the database ( that is, no COPY command).

> Is there a tool which can do the job.  Basically parse the csv file 
and insert it to the database row by row.
> 
> thanks


I think an easy approach would be to COPY the CSV files into a
separate database using psql's \copy command and then pg_dump
that as separate insert statements with pg_dump --inserts.

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



--
Sent from my Android device with K-9 Mail. Please excuse my brevity.



--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Load data from a csv file without using COPY

2018-06-19 Thread Stephen Frost
Greetings,

* Adrian Klaver (adrian.kla...@aklaver.com) wrote:
> On 06/19/2018 03:18 PM, Asif Ali wrote:
> >just tell me the site , i dont have time to waste on shitty things , i
> >will program a spammer to send email to this list
> 
> So why subscribe in the first place?

Thanks for the attempts at helping folks, but it's been addressed.
Please don't reply further on this sub-thread.

Thanks!

Stephen


signature.asc
Description: PGP signature


Re: Load data from a csv file without using COPY

2018-06-19 Thread Adrian Klaver

On 06/19/2018 01:16 PM, Ravi Krishna wrote:

In order to test a real life scenario (and use it for benchmarking) I want to 
load large number of data from csv files.
The requirement is that the load should happen like an application writing to 
the database ( that is, no COPY command).
Is there a tool which can do the job.  Basically parse the csv file and insert 
it to the database row by row.


http://csvkit.readthedocs.io/en/1.0.3/scripts/csvsql.html


thanks







--
Adrian Klaver
adrian.kla...@aklaver.com



Re: Load data from a csv file without using COPY

2018-06-19 Thread Ravi Krishna
Thanks all for replying.  I see that I did not explain my requirement in
detail, so let me explain it in detail.

1. Currently we have a legacy app running on DB2/LUW. Applications write
to it either via a Java program or via custom ETL scripts using a vendor
product.
2. We want to migrate it to PostgreSQL and eliminate the vendor ETL tool.
3. We now have a catch-22 situation.  Should we spend time porting the
app to PG without first verifying that PG can perform as well as DB2?
In other words, if some sort of testing rules out PG as a good
replacement for DB2, why even bother to port?  Of course, passing the
test does not prove conclusively that the app will work just as well,
but at least a basic test will tell us that we are not on the wrong path.
4. What I am planning is:
    4.a Get a set of large tables exported as pipe-delimited text files.
    4.b Load them into both DB2 and PG on similar hardware.
    4.c Run OLAP queries.

4.b is to test I/O. Our app is sensitive to load times and some of the
tables are really wide.
4.c is to test the maturity of PG in handling complex OLAP SQL. From what
I have read, while the PG optimizer is very good at handling OLTP, it is
not, as yet, as good at OLAP queries.

I just want to keep the testing tool the same in 4.b for both DB2 and PG.
If COPY is the only way, we will use it with something comparable on the
DB2 side.
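
For what it's worth, if COPY does end up being the fallback on the PG side,
loading a pipe-delimited export is a one-liner (the table name and path are
made up here):

COPY big_table
FROM '/data/exports/big_table.txt'
WITH (FORMAT csv, DELIMITER '|', HEADER);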


Re: Load data from a csv file without using COPY

2018-06-19 Thread Michael Paquier
On Tue, Jun 19, 2018 at 02:32:10PM -0700, David G. Johnston wrote:
> ​You really need to describe what you consider to be a "real life​
> scenario"; and probably give a better idea of creation and number of these
> csv files.  In addition to describing the relevant behavior of the
> application you are testing.
> 
> If you want maximum realism you should probably write integration tests for
> your application and then execute those at high volume.
> 
> Or at minimum give an example of the output you would want from this
> unknown program...

Hard to say what exactly you are looking for that psql's \copy cannot
do, but perhaps pg_bulkload would be of interest?  Here is a link to
the project:
https://github.com/ossc-db/pg_bulkload/

It has a couple of fancy features as well, such as handling row failures
gracefully when loading a large file, etc.
--
Michael


signature.asc
Description: PGP signature


Re: using pg_basebackup for point in time recovery

2018-06-19 Thread Michael Paquier
Hi Pierre,

On Tue, Jun 19, 2018 at 12:03:58PM +, Pierre Timmermans wrote:
> Here is the doc, the sentence that I find misleading is "There are
> backups that cannot be used for point-in-time recovery", also
> mentioning that they are faster than pg_dumps add to confusion (since
> pg_dumps cannot be used for PITR):
> https://www.postgresql.org/docs/current/static/continuous-archiving.html

Yes, it is indeed perfectly possible to use such backups to do a PITR
as long as you have a WAL archive able to replay up to the point where
you want the replay to happen, so I agree that this is a bit confusing.
This part of the documentation has been there since the beginning of time
(commit 6559c4a2, to be exact).  Perhaps we would want to reword this
sentence as follows:
"These are backups that can be used for point-in-time recovery if
combined with a WAL archive that reaches up to the desired recovery
point.  Such backups are typically much faster to take and restore
than pg_dump for large deployments, but they can also be larger, so
the trade-off between the two methods should be evaluated carefully."

I am open to better suggestions of course.
--
Michael


signature.asc
Description: PGP signature


Re: Run Stored procedure - function from VBA

2018-06-19 Thread Łukasz Jarych
Thank you Rob,

Exactly. Do you know this ODBC construction?

Best,
Jacek

2018-06-20 0:08 GMT+02:00 Asif Ali :

> how the fuck i unsubscribe to this mailing list , i get more than 100
> emails a day
>
> Bye
>
> --
> *From:* Rob Sargent 
> *Sent:* Wednesday, June 20, 2018 12:54 AM
> *To:* Łukasz Jarych
> *Cc:* pgsql-gene...@postgresql.org
> *Subject:* Re: Run Stored procedure - function from VBA
>
>
>
>
> On 06/18/2018 09:51 PM, Łukasz Jarych wrote:
>
> Thank you Rob,
>
> question is it is the optimal way to run SP from VBA?
> Or not?
>
> Best,
> Jacek
>
> 2018-06-19 1:34 GMT+02:00 Rob Sargent :
>
>
>
> On Jun 18, 2018, at 9:47 AM, Łukasz Jarych  wrote:
>
> Hi Guys,
>
> i have example function :
>
> CREATE OR REPLACE FUNCTION totalRecords ()
> RETURNS integer AS $total$
> declare
> total integer;
> BEGIN
>SELECT count(*) into total FROM COMPANY;
>RETURN total;
> END;
> $total$ LANGUAGE plpgsql;
>
> and i want to run it from VBA using odbc connection.
>
> What is the best way to use it ?
>
> something like this:
>
> Dim dbCon as new ADODB.Connection
> Dim rst as new ADODB.Recordset
>
> Dbcon.connectionstring=”Your connection string goes here!”
> Dbcon.open
>
> Rst.open strsql
>
> where strsql is "Select * from totalRecords" or this is not a good
> solution?
>
> Best,
> Jacek
>
>
> You need the parentheses after the function name: “select * from
> totalrecords();"
>
>
>
> Depends on the usage pattern.  I'm sure there is an ODBC construct for
> stored procedures/function, which you could build once and re-use with new
> parameter values if you're going to call this repeatedly.
>


Re: Run Stored procedure - function from VBA

2018-06-19 Thread Rob Sargent
Sorry. I don’t use ODBC directly. If it’s not obvious in the manual, google 
‘ODBC functions’

> On Jun 19, 2018, at 10:39 PM, Łukasz Jarych  wrote:
> 
> Thank you Rob,
> 
> Exactly. Do you know this ODBC construction?
> 
> Best,
> Jacek 
> 
> 2018-06-20 0:08 GMT+02:00 Asif Ali :
>> how the fuck i unsubscribe to this mailing list , i get more than 100 emails 
>> a day
>> 
>> Bye
>> 
>>  
>> From: Rob Sargent 
>> Sent: Wednesday, June 20, 2018 12:54 AM
>> To: Łukasz Jarych
>> Cc: pgsql-gene...@postgresql.org
>> Subject: Re: Run Stored procedure - function from VBA
>>  
>> 
>> 
>>> On 06/18/2018 09:51 PM, Łukasz Jarych wrote:
>>> Thank you Rob,
>>> 
>>> question is it is the optimal way to run SP from VBA? 
>>> Or not?
>>> 
>>> Best,
>>> Jacek
>>> 
>>> 2018-06-19 1:34 GMT+02:00 Rob Sargent :
>>> 
>>> 
 On Jun 18, 2018, at 9:47 AM, Łukasz Jarych  wrote:
 
 Hi Guys,
 
 i have example function :
 
 CREATE OR REPLACE FUNCTION totalRecords ()
 RETURNS integer AS $total$
 declare
 total integer;
 BEGIN
SELECT count(*) into total FROM COMPANY;
RETURN total;
 END;
 $total$ LANGUAGE plpgsql;
 
 and i want to run it from VBA using odbc connection. 
 
 What is the best way to use it ?
 
 something like this: 
 
 Dim dbCon as new ADODB.Connection
 Dim rst as new ADODB.Recordset
 
 Dbcon.connectionstring=”Your connection string goes here!”
 Dbcon.open
 
 Rst.open strsql
 where strsql is "Select * from totalRecords" or this is not a good 
 solution?
 
 Best,
 Jacek
>>> 
>>> You need the parentheses after the function name: “select * from 
>>> totalrecords();"
>>> 
>>> 
>> 
>> Depends on the usage pattern.  I'm sure there is an ODBC construct for 
>> stored procedures/function, which you could build once and re-use with new 
>> parameter values if you're going to call this repeatedly.
> 


Re: Run Stored procedure - function from VBA

2018-06-19 Thread Łukasz Jarych
Thank you Rob,

I googled it and there is a lot about ODBC, but not a lot about ODBC and
PostgreSQL.

Best,
Jacek

2018-06-20 7:41 GMT+02:00 Rob Sargent :

> Sorry. I don’t use ODBC directly. If it’s not obvious in the manual,
> google ‘ODBC functions’
>
> On Jun 19, 2018, at 10:39 PM, Łukasz Jarych  wrote:
>
> Thank you Rob,
>
> Exactly. Do you know this ODBC construction?
>
> Best,
> Jacek
>
> 2018-06-20 0:08 GMT+02:00 Asif Ali :
>
>> how the fuck i unsubscribe to this mailing list , i get more than 100
>> emails a day
>>
>> Bye
>>
>> --
>> *From:* Rob Sargent 
>> *Sent:* Wednesday, June 20, 2018 12:54 AM
>> *To:* Łukasz Jarych
>> *Cc:* pgsql-gene...@postgresql.org
>> *Subject:* Re: Run Stored procedure - function from VBA
>>
>>
>>
>>
>> On 06/18/2018 09:51 PM, Łukasz Jarych wrote:
>>
>> Thank you Rob,
>>
>> question is it is the optimal way to run SP from VBA?
>> Or not?
>>
>> Best,
>> Jacek
>>
>> 2018-06-19 1:34 GMT+02:00 Rob Sargent :
>>
>>
>>
>> On Jun 18, 2018, at 9:47 AM, Łukasz Jarych  wrote:
>>
>> Hi Guys,
>>
>> i have example function :
>>
>> CREATE OR REPLACE FUNCTION totalRecords ()
>> RETURNS integer AS $total$
>> declare
>> total integer;
>> BEGIN
>>SELECT count(*) into total FROM COMPANY;
>>RETURN total;
>> END;
>> $total$ LANGUAGE plpgsql;
>>
>> and i want to run it from VBA using odbc connection.
>>
>> What is the best way to use it ?
>>
>> something like this:
>>
>> Dim dbCon as new ADODB.Connection
>> Dim rst as new ADODB.Recordset
>>
>> Dbcon.connectionstring=”Your connection string goes here!”
>> Dbcon.open
>>
>> Rst.open strsql
>>
>> where strsql is "Select * from totalRecords" or this is not a good
>> solution?
>>
>> Best,
>> Jacek
>>
>>
>> You need the parentheses after the function name: “select * from
>> totalrecords();"
>>
>>
>>
>> Depends on the usage pattern.  I'm sure there is an ODBC construct for
>> stored procedures/function, which you could build once and re-use with new
>> parameter values if you're going to call this repeatedly.
>>
>
>