Re: Issue with psqlrc with command line.

2018-08-30 Thread saurabh shelar
Hi David,

Got your point.

Thank you for the help.

On Wed, Aug 29, 2018 at 9:07 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Wed, Aug 29, 2018 at 8:31 AM, saurabh shelar 
> wrote:
>
>> Hi David,
>>
>> Thank you for the help.
>>
>> I just got confused by the line below, mentioned in the documentation.
>> However, it seems the behaviour is still the same *(i.e. --no-psqlrc)*.
>>
>> *"Before PostgreSQL 9.6, the -c option implied -X (--no-psqlrc); this is
>> no longer the case."*
>>
>
> Whether psqlrc is being processed or not is immaterial to the behavior you
> are seeing because -c will not resolve the variable reference prior to
> sending the command to the server.
>
> If you have other non-variable definitions in psqlrc that could affect the
> -c command (or its output - like \pset instructions) those will now be
> applied before sending the command whereas before they would not be.
>
> (admittedly I have not read the source code nor experimented to confirm
> this conclusion)
>
> David J.
>
>
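A minimal illustration of the point above (a sketch, assuming a psqlrc that
defines a psql variable named "who"): with -c, the :who reference is sent to
the server verbatim instead of being interpolated.

    $ cat ~/.psqlrc
    \set who current_user

    $ psql -c 'SELECT :who;'
    ERROR:  syntax error at or near ":"

Run interactively instead, the same SELECT :who; succeeds, because psql
expands the variable before sending the statement to the server.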


Ways to deal with large amount of columns;

2018-08-30 Thread a
Hi all:


I need to make a table containing projected monthly cashflow for multiple
agents (around 10,000).


Therefore, the column count would be 1000+.


I would need to perform simple aggregate functions such as count, sum or
average on each projected cashflow.


Is there any way of doing this? Is there anything like defining a macro in C,
so that I can manipulate multiple columns by a simple word representing them?


Thanks so much!


Shore

Re: pg_upgrade fails saying function unaccent(text) doesn't exist

2018-08-30 Thread Gunnlaugur Thor Briem
Thank you! That got the pg_upgrade to completion. But then during
./analyze_new_cluster.sh vacuum fails thus:

vacuumdb: processing database "dm_test": Generating minimal optimizer
statistics (1 target)
vacuumdb: vacuuming of database "dm_test" failed: ERROR:  text search
dictionary "unaccent" does not exist
CONTEXT:  SQL function "semantic_normalize" statement 1

And yet a text search dictionary with that name does exist:

$ psql -d dm_test -c '\dFd+ unaccent'
          List of text search dictionaries
 Schema |   Name   |    Template     |    Init options    | Description
--------+----------+-----------------+--------------------+-------------
 public | unaccent | public.unaccent | rules = 'unaccent' |
(1 row)

Running VACUUM ANALYZE semantic_mapping in psql works:

$ psql -d dm_test -c 'VACUUM ANALYZE semantic_mapping'
VACUUM
Time: 1231,767 ms (00:01,232)

But running it with the vacuumdb command doesn't:

vacuumdb -z -t semantic_mapping dm_test
vacuumdb: vacuuming database "dm_test"
vacuumdb: vacuuming of table "semantic_mapping" in database "dm_test"
failed: ERROR:  text search dictionary "unaccent" does not exist
CONTEXT:  SQL function "semantic_normalize" statement 1

This is presumably a similar search path problem, because I can reproduce
this in psql by setting the search path to exclude public:

set search_path to "$user";
vacuum analyze public.semantic_mapping;
ERROR:  text search dictionary "unaccent" does not exist
CONTEXT:  SQL function "semantic_normalize" statement 1
Time: 851,562 ms

Can't find a place to poke the "public." prefix in to work around this ...
I can't even see where it's getting the link to the text search dictionary
from. Is that in native code in the unaccent extension?
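
(Perhaps pinning the function's own search_path would sidestep this; a
sketch, on the assumption that semantic_normalize is a user-defined function
taking a single text argument, the signature being a guess:

    ALTER FUNCTION semantic_normalize(text)
        SET search_path = public, pg_temp;

That way the function would resolve the unaccent dictionary against public no
matter how restricted the caller's search_path is.)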

The unaccent definition looks like this:

\df+ public.unaccent

                                                                       List of functions
 Schema |   Name   | Result data type | Argument data types |  Type  | Volatility | Parallel | Owner | Security | Access privileges | Language |  Source code  | Description
--------+----------+------------------+---------------------+--------+------------+----------+-------+----------+-------------------+----------+---------------+-------------
 public | unaccent | text             | regdictionary, text | normal | stable     | safe     | gthb  | invoker  |                   | c        | unaccent_dict |
 public | unaccent | text             | text                | normal | stable     | safe     | gthb  | invoker  |                   | c        | unaccent_dict |
(2 rows)

Any tips?

Cheers,
Gulli

On Wed, Aug 29, 2018 at 7:06 PM Bruce Momjian  wrote:

> On Wed, Aug 29, 2018 at 06:09:53PM +, Gunnlaugur Thor Briem wrote:
> > Hi,
> >
> > I'm trying to upgrade an old PG 9.4 cluster to PG 10, and it's failing on
> > creating an index that uses the unaccent(text) function.
> >
> > That function is part of the unaccent extension, which is installed in
> the old
> > DB cluster. I expect pg_upgrade to create that extension as part of the
> > upgrade. It does create other extensions that are installed in the old DB
> > cluster. I don't get why this one isn't included.
>
> This is caused by security changes made in PG 10.3 and other minor
> releases.  Please see this thread for an outline of the issue:
>
>
> https://www.postgresql.org/message-id/flat/152106914669.1223.5104148605998271987%40wrigleys.postgresql.org
>
> I think you have to change your index function to specify the schema
> name before the unaccent function call, e.g.
>
> SELECT lower(public.unaccent(btrim(regexp_replace(
>
> --
>   Bruce Momjian  http://momjian.us
>   EnterpriseDB http://enterprisedb.com
>
> + As you are, so once was I.  As I am, so you will be. +
> +  Ancient Roman grave inscription +
>


Re: Ways to deal with large amount of columns;

2018-08-30 Thread David G. Johnston
On Thursday, August 30, 2018, a <372660...@qq.com> wrote:

> Hi all:
>
> I need to make a table containing projected monthly cashflow for multiple
> agents (around 10,000).
>
> Therefore, the column count would be 1000+.
>
> I would need to perform simple aggregate functions such as count, sum or
> average on each projected cashflow.
>
> Is there any way of doing this? Is there anything like defining a macro in
> C, so that I can manipulate multiple columns by a simple word representing
> them?
>

Better to design a data model that doesn't have so many columns. Otherwise
generating dynamic SQL via the format function and loops and such is your
best bet.  It can be done in pl/pgsql or in your preferred programming
language.  Psql variables can maybe be useful too.
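
For example, a minimal sketch in pl/pgsql; the table and column names here
are made up:

    DO $$
    DECLARE
        col   text;
        total numeric;
    BEGIN
        -- sum each projected-cashflow column of a hypothetical wide table
        FOREACH col IN ARRAY ARRAY['cf_2018_01', 'cf_2018_02', 'cf_2018_03']
        LOOP
            EXECUTE format('SELECT sum(%I) FROM cashflow_wide', col)
                INTO total;
            RAISE NOTICE '% total: %', col, total;
        END LOOP;
    END
    $$;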

David J.


Re: Ways to deal with large amount of columns;

2018-08-30 Thread Andreas Kretschmer




On 30.08.2018 11:13, a wrote:

Therefore, the column number would be 1000+.


just as an additional note: there is a limit, a table cannot contain more
than 250-1600 columns, depending on column types.

https://wiki.postgresql.org/wiki/FAQ

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: Ways to deal with large amount of columns;

2018-08-30 Thread Robert Zenz
As David said, you'd be better off having a table that looks like this (in terms
of columns):

 * MONTH
 * AGENT
 * CASHFLOW

So your query to get the sum for a single agent would look like:

select
    sum(CASHFLOW)
from
    your_table
where
    AGENT = 'Agent'
    and MONTH between first_month and last_month;

It might be a little more work to create a GUI for that (or map this model to
the existing GUI) but it is much simpler to maintain and work with.
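
A minimal sketch of that layout (the table name and column types are
assumptions):

    create table cashflows (
        month    date    not null,
        agent    text    not null,
        cashflow numeric not null,
        primary key (agent, month)
    );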


On 30.08.2018 11:13, a wrote:
> Hi all:
> 
> 
> I need to make a table containing projected monthly cashflow for multiple
> agents (around 10,000).
> 
> 
> Therefore, the column count would be 1000+.
> 
> 
> I would need to perform simple aggregate functions such as count, sum or
> average on each projected cashflow.
> 
> 
> Is there any way of doing this? Is there anything like defining a macro in
> C, so that I can manipulate multiple columns by a simple word representing
> them?
> 
> 
> Thanks so much!
> 
> 
> Shore
> 

Re: Ways to deal with large amount of columns;

2018-08-30 Thread Andreas Kretschmer




On 30.08.2018 15:15, Robert Zenz wrote:

As David said, you'd be better off having a table that looks like this (in terms
of columns):

  * MONTH
  * AGENT
  * CASHFLOW

So your query to get the sum for a single agent would look like:

     select
         sum(CASHFLOW)
     from
         your_table
     where
         AGENT = 'Agent'
         and MONTH between first_month and last_month;

It might be a little more work to create a GUI for that (or map this model to
the existing GUI) but it is much simpler to maintain and work with.


Agree. It is also possible to use partitioning.
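
A sketch of that idea applied to the three-column layout above, using
declarative partitioning (available from PG 10 on); all names are
illustrative:

    create table cashflows (
        month    date    not null,
        agent    text    not null,
        cashflow numeric not null
    ) partition by range (month);

    create table cashflows_2018
        partition of cashflows
        for values from ('2018-01-01') to ('2019-01-01');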

Regards, Andreas

--
2ndQuadrant - The PostgreSQL Support Company.
www.2ndQuadrant.com




Re: pg_upgrade fails saying function unaccent(text) doesn't exist

2018-08-30 Thread Adrian Klaver

On 08/30/2018 02:24 AM, Gunnlaugur Thor Briem wrote:
Thank you! That got the pg_upgrade to completion. But then during 
./analyze_new_cluster.sh vacuum fails thus:


vacuumdb: processing database "dm_test": Generating minimal optimizer 
statistics (1 target)
vacuumdb: vacuuming of database "dm_test" failed: ERROR:  text search 
dictionary "unaccent" does not exist

CONTEXT:  SQL function "semantic_normalize" statement 1

And yet a text search dictionary with that name does exist:

$ psql -d dm_test -c '\dFd+ unaccent'
          List of text search dictionaries
 Schema |   Name   |    Template     |    Init options    | Description
--------+----------+-----------------+--------------------+-------------
 public | unaccent | public.unaccent | rules = 'unaccent' |
(1 row)

Running VACUUM ANALYZE semantic_mapping in psql works:

$ psql -d dm_test -c 'VACUUM ANALYZE semantic_mapping'
VACUUM
Time: 1231,767 ms (00:01,232)

But running it with the vacuumdb command doesn't:

vacuumdb -z -t semantic_mapping dm_test
vacuumdb: vacuuming database "dm_test"
vacuumdb: vacuuming of table "semantic_mapping" in database "dm_test" 
failed: ERROR:  text search dictionary "unaccent" does not exist

CONTEXT:  SQL function "semantic_normalize" statement 1

This is presumably a similar search path problem, because I can 
reproduce this in psql by setting the search path to exclude public:


set search_path to "$user";
vacuum analyze public.semantic_mapping;
ERROR:  text search dictionary "unaccent" does not exist
CONTEXT:  SQL function "semantic_normalize" statement 1
Time: 851,562 ms

Can't find a place to poke the "public." prefix in to work around this 
... I can't even see where it's getting the link to the text search 
dictionary from. Is that in native code in the unaccent extension?


Since the semantic_normalize function is the one tripping it, and it uses
unaccent, I would say it is native to the extension.


What does:

\dFd unaccent

show?



The unaccent definition looks like this:

\df+ public.unaccent

                                                                       List of functions
 Schema |   Name   | Result data type | Argument data types |  Type  | Volatility | Parallel | Owner | Security | Access privileges | Language |  Source code  | Description
--------+----------+------------------+---------------------+--------+------------+----------+-------+----------+-------------------+----------+---------------+-------------
 public | unaccent | text             | regdictionary, text | normal | stable     | safe     | gthb  | invoker  |                   | c        | unaccent_dict |
 public | unaccent | text             | text                | normal | stable     | safe     | gthb  | invoker  |                   | c        | unaccent_dict |
(2 rows)

Any tips?

Cheers,
Gulli

On Wed, Aug 29, 2018 at 7:06 PM Bruce Momjian <br...@momjian.us> wrote:


On Wed, Aug 29, 2018 at 06:09:53PM +, Gunnlaugur Thor Briem wrote:
 > Hi,
 >
 > I'm trying to upgrade an old PG 9.4 cluster to PG 10, and it's
failing on
 > creating an index that uses the unaccent(text) function.
 >
 > That function is part of the unaccent extension, which is
installed in the old
 > DB cluster. I expect pg_upgrade to create that extension as part
of the
 > upgrade. It does create other extensions that are installed in
the old DB
 > cluster. I don't get why this one isn't included.

This is caused by security changes made in PG 10.3 and other minor
releases.  Please see this thread for an outline of the issue:


https://www.postgresql.org/message-id/flat/152106914669.1223.5104148605998271987%40wrigleys.postgresql.org

I think you have to change your index function to specify the schema
name before the unaccent function call, e.g.

         SELECT lower(public.unaccent(btrim(regexp_replace(

-- 
   Bruce Momjian  <br...@momjian.us>  http://momjian.us
   EnterpriseDB  http://enterprisedb.com

+ As you are, so once was I.  As I am, so you will be. +
+                      Ancient Roman grave inscription +




--
Adrian Klaver
adrian.kla...@aklaver.com



Re: pg_upgrade fails saying function unaccent(text) doesn't exist

2018-08-30 Thread Gunnlaugur Thor Briem
\dFd unaccent
  List of text search dictionaries
 Schema |   Name   | Description
--------+----------+-------------
 public | unaccent |
(1 row)

\dFd+ unaccent
          List of text search dictionaries
 Schema |   Name   |    Template     |    Init options    | Description
--------+----------+-----------------+--------------------+-------------
 public | unaccent | public.unaccent | rules = 'unaccent' |
(1 row)

Cheers,
Gulli

On Thu, Aug 30, 2018 at 1:48 PM Adrian Klaver 
wrote:

> On 08/30/2018 02:24 AM, Gunnlaugur Thor Briem wrote:
> > Thank you! That got the pg_upgrade to completion. But then during
> > ./analyze_new_cluster.sh vacuum fails thus:
> >
> > vacuumdb: processing database "dm_test": Generating minimal optimizer
> > statistics (1 target)
> > vacuumdb: vacuuming of database "dm_test" failed: ERROR:  text search
> > dictionary "unaccent" does not exist
> > CONTEXT:  SQL function "semantic_normalize" statement 1
> >
> > And yet a text search dictionary with that name does exist:
> >
> > $ psql -d dm_test -c '\dFd+ unaccent'
> >          List of text search dictionaries
> >  Schema |   Name   |    Template     |    Init options    | Description
> > --------+----------+-----------------+--------------------+-------------
> >  public | unaccent | public.unaccent | rules = 'unaccent' |
> > (1 row)
> >
> > Running VACUUM ANALYZE semantic_mapping in psql works:
> >
> > $ psql -d dm_test -c 'VACUUM ANALYZE semantic_mapping'
> > VACUUM
> > Time: 1231,767 ms (00:01,232)
> >
> > But running it with the vacuumdb command doesn't:
> >
> > vacuumdb -z -t semantic_mapping dm_test
> > vacuumdb: vacuuming database "dm_test"
> > vacuumdb: vacuuming of table "semantic_mapping" in database "dm_test"
> > failed: ERROR:  text search dictionary "unaccent" does not exist
> > CONTEXT:  SQL function "semantic_normalize" statement 1
> >
> > This is presumably a similar search path problem, because I can
> > reproduce this in psql by setting the search path to exclude public:
> >
> > set search_path to "$user";
> > vacuum analyze public.semantic_mapping;
> > ERROR:  text search dictionary "unaccent" does not exist
> > CONTEXT:  SQL function "semantic_normalize" statement 1
> > Time: 851,562 ms
> >
> > Can't find a place to poke the "public." prefix in to work around this
> > ... I can't even see where it's getting the link to the text search
> > dictionary from. Is that in native code in the unaccent extension?
>
> Since the semantic_normalize function is tripping it and it uses
> unaccent I would say it is native to the extension.
>
> What does:
>
> \dFd unaccent
>
> show?
>
> >
> > The unaccent definition looks like this:
> >
> > \df+ public.unaccent
> >
>
> >                                                                        List of functions
> >  Schema |   Name   | Result data type | Argument data types |  Type  | Volatility | Parallel | Owner | Security | Access privileges | Language |  Source code  | Description
> > --------+----------+------------------+---------------------+--------+------------+----------+-------+----------+-------------------+----------+---------------+-------------
> >  public | unaccent | text             | regdictionary, text | normal | stable     | safe     | gthb  | invoker  |                   | c        | unaccent_dict |
> >  public | unaccent | text             | text                | normal | stable     | safe     | gthb  | invoker  |                   | c        | unaccent_dict |
> > (2 rows)
> >
> > Any tips?
> >
> > Cheers,
> > Gulli
> >
> > On Wed, Aug 29, 2018 at 7:06 PM Bruce Momjian <br...@momjian.us>
> > wrote:
> >
> > On Wed, Aug 29, 2018 at 06:09:53PM +, Gunnlaugur Thor Briem
> wrote:
> >  > Hi,
> >  >
> >  > I'm trying to upgrade an old PG 9.4 cluster to PG 10, and it's
> > failing on
> >  > creating an index that uses the unaccent(text) function.
> >  >
> >  > That function is part of the unaccent extension, which is
> > installed in the old
> >  > DB cluster. I expect pg_upgrade to create that extension as part
> > of the
> >  > upgrade. It does create other extensions that are installed in
> > the old DB
> >  > cluster. I don't get why this one isn't included.
> >
> > This is caused by security changes made in PG 10.3 and other minor
> > releases.  Please see this thread for an outline of the issue:
> >
> >
> https://www.postgresql.org/message-id/flat/152106914669.1223.5104148605998271987%40wrigleys.postgresql.org
> >
> > I think you have to change your index function to specify the schema
> > name before the unaccent function call, e.g.
> >
> >  SELECT lower(public.unaccent(btrim(regexp_replace(
> >
> > --
> >    Bruce Momjian  <br...@momjian.us>  http://momjian.us
> >    EnterpriseDB  http://enterprisedb.com
> >
> > + As you are, so once was I.  As I am, so you will be. +
> > +  Ancient Roman grave inscription +
> >
>
>
> --
> Adrian Klaver
> adrian.kla..

searching a value in a variable/field in all tables in a schema

2018-08-30 Thread Ruiqiang Chen
Does anyone have experience searching for a value in a variable/field across
all tables in a schema?

If you can provide some code that does not require creating a stored
procedure, that would be great.

Thanks,
Ray


Re: searching a value in a variable/field in all tables in a schema

2018-08-30 Thread David G. Johnston
On Thu, Aug 30, 2018 at 1:44 PM, Ruiqiang Chen  wrote:

> Does anyone have experience searching for a value in a variable/field
> across all tables in a schema?
>

SELECT 'Tbl1'::text AS loc, fld
FROM schema.tbl1
WHERE fld = 'value'
[UNION ALL
SELECT 'Tbl2'::text AS loc, fld
FROM schema.tbl2
WHERE fld = 'value'
...]
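
A hedged sketch of generating that UNION ALL query from the catalog instead
of writing it by hand; the schema name and searched value are placeholders,
and it only considers text-typed columns:

    SELECT string_agg(
               format('SELECT %L AS loc, %I AS fld FROM %I.%I WHERE %I = %L',
                      table_name, column_name, table_schema, table_name,
                      column_name, 'value'),
               E'\nUNION ALL\n')
    FROM information_schema.columns
    WHERE table_schema = 'myschema'
      AND data_type IN ('text', 'character varying');

Running the text this returns then gives the combined result.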

David J.


Re: Ways to deal with large amount of columns;

2018-08-30 Thread Tim Cross


a <372660...@qq.com> writes:

> Hi all:
>
>
> I need to make a table containing projected monthly cashflow for multiple
> agents (around 10,000).
>
>
> Therefore, the column count would be 1000+.
>
>

Not sure your data model is correct. Typically, with something like
this, increasing the number of agents would result in tables with more
rows rather than more columns. Tables with large numbers of columns are
often a 'code smell' and indicate that the underlying data model needs
to be reviewed. Designs which require new columns because you are adding
new data sources are almost certainly an indication of the need to
review the data model.

Postgres (and many other databases) has lots of functionality to help
deal with tables that have large numbers of rows, but offers far fewer
options for working with tables that have large numbers of columns.

While it is very tempting to create a table and then start coding, you
will almost always get a much better result and simpler code if you
spend some initial time to really analyse your domain, understand the
data elements and how they relate to each other, map them out into a
data model and then start development. Have a look at
https://en.wikipedia.org/wiki/Database_normalization for some background
on the normal forms and why they are useful.

HTH

Tim



-- 
Tim Cross



Re: Ways to deal with large amount of columns;

2018-08-30 Thread a
Hi thanks for your mail.


My data is arranged like this due to the nature of the business. I need the
projected cash flow for each agent to calculate figures such as present value
and other related items to report to the regulator. Furthermore, some basic
functions such as sum, average and other aggregates are needed. I also
considered storing the cashflows as an array, but it would not be easy to
write SQL for summing the selected rows.
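
(A per-row sum is at least possible with unnest; a sketch, assuming a
hypothetical numeric[] column named cashflows on a hypothetical table
agent_cashflow_arrays:

    SELECT agent_id,
           (SELECT sum(x) FROM unnest(cashflows) AS x) AS total
    FROM agent_cashflow_arrays;

)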


I may need to discuss the design of the data model further.


Thank you again.


Shore




-- Original message --
From: "Tim Cross"; 
Sendtime: Friday, Aug 31, 2018 6:24 AM
To: "a"<372660...@qq.com>; 
Cc: "pgsql-general"; 
Subject: Re: Ways to deal with large amount of columns;




a <372660...@qq.com> writes:

> Hi all:
>
>
> I need to make a table containing projected monthly cashflow for multiple
> agents (around 10,000).
>
>
> Therefore, the column count would be 1000+.
>
>

Not sure your data model is correct. Typically, with something like
this, increasing the number of agents would result in tables with more
rows rather than more columns. Tables with large numbers of columns are
often a 'code smell' and indicate that the underlying data model needs
to be reviewed. Designs which require new columns because you are adding
new data sources are almost certainly an indication of the need to
review the data model.

Postgres (and many other databases) has lots of functionality to help
deal with tables that have large numbers of rows, but offers far fewer
options for working with tables that have large numbers of columns.

While it is very tempting to create a table and then start coding, you
will almost always get a much better result and simpler code if you
spend some initial time to really analyse your domain, understand the
data elements and how they relate to each other, map them out into a
data model and then start development. Have a look at
https://en.wikipedia.org/wiki/Database_normalization for some background
on the normal forms and why they are useful.

HTH

Tim



-- 
Tim Cross

Re: Ways to deal with large amount of columns;

2018-08-30 Thread a
Thank you very much. Creating a function seems to be a good idea :)




-- Original message --
From: "David G. Johnston"; 
Sendtime: Thursday, Aug 30, 2018 8:31 PM
To: "a"<372660...@qq.com>; 
Cc: "pgsql-general"; 
Subject: Re: Ways to deal with large amount of columns;



On Thursday, August 30, 2018, a <372660...@qq.com> wrote:
Hi all:


I need to make a table containing projected monthly cashflow for multiple
agents (around 10,000).


Therefore, the column count would be 1000+.


I would need to perform simple aggregate functions such as count, sum or
average on each projected cashflow.


Is there any way of doing this? Is there anything like defining a macro in C,
so that I can manipulate multiple columns by a simple word representing them?



Better to design a data model that doesn't have so many columns. Otherwise 
generating dynamic SQL via the format function and loops and such is your best
bet.  It can be done in pl/pgsql or in your preferred programming language.  Psql
variables can maybe be useful too.


David J.

Re: searching a value in a variable/field in all tables in a schema

2018-08-30 Thread Ruiqiang Chen
Thanks!

On Thu, Aug 30, 2018 at 4:51 PM David G. Johnston <
david.g.johns...@gmail.com> wrote:

> On Thu, Aug 30, 2018 at 1:44 PM, Ruiqiang Chen 
> wrote:
>
>> Does anyone have experience searching for a value in a variable/field
>> across all tables in a schema?
>>
>
> SELECT 'Tbl1'::text AS loc, fld
> FROM schema.tbl1
> WHERE fld = 'value'
> [UNION ALL
> SELECT 'Tbl2'::text AS loc, fld
> FROM schema.tbl2
> WHERE fld = 'value'
> ...]
>
> David J.
>
>


Re: Ways to deal with large amount of columns;

2018-08-30 Thread Tim Cross
On Fri, 31 Aug 2018 at 10:47, a <372660...@qq.com> wrote:

> Thank you very much. Creating a function seems to be a good idea :)
>
>
> -- Original message --
> *From:* "David G. Johnston";
> *Sendtime:* Thursday, Aug 30, 2018 8:31 PM
> *To:* "a"<372660...@qq.com>;
> *Cc:* "pgsql-general";
> *Subject:* Re: Ways to deal with large amount of columns;
>
> On Thursday, August 30, 2018, a <372660...@qq.com> wrote:
>
>> Hi all:
>>
>> I need to make a table containing projected monthly cashflow for multiple
>> agents (around 10,000).
>>
>> Therefore, the column count would be 1000+.
>>
>> I would need to perform simple aggregate functions such as count, sum or
>> average on each projected cashflow.
>>
>> Is there any way of doing this? Is there anything like defining a macro
>> in C, so that I can manipulate multiple columns by a simple word
>> representing them?
>>
>
> Better to design a data model that doesn't have so many columns. Otherwise
> generating dynamic SQL via the format function and loops and such is your
> best bet.  It can be done in pl/pgsql or in your preferred programming
> language.  Psql variables can maybe be useful too.
>
> David J.
>
>

Perhaps post your proposed table design/definition. There is nothing in
what you have described so far which would indicate a necessity to have
more columns as you increase the number of agents.  It would be normal to
have something like

| agent_id | year | cash_on_hand | bank | creditors | debtors | 

and queries like

select sum(cash_on_hand)
from your_table
where agent_id = 'agent1'
and year = 2018;

to get the sum of cash on hand for agent1 in 2018.

instead of something like

| agent1_cash2018 | agent2_cash2017 | 

which will not work well.
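
And a sketch of how the aggregates then come straight out of one query (the
table name follows the placeholder above):

    select agent_id, year, sum(cash_on_hand) as total_cash
    from your_table
    group by agent_id, year;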

Tim


-- 
regards,

Tim

--
Tim Cross