Re: [GENERAL] Inconsistency between PgAdmin III GUI and SQL window ?

2014-10-22 Thread Daniel Begin
David, Adrian,
I am new to databases and since PgAdmin displays Catalogs, Event Triggers,
Extensions and Schema as "Child" of the database, I assumed that Extensions
were linked to a specific database, not to a specific schema. 

After reading your answers, I had another look at PostGIS extension
properties and it is pretty clear it belongs to the public schema, which
explains the error message I got.

Thanks
Daniel

-Original Message-
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Adrian Klaver
Sent: October-21-14 18:58
To: Daniel Begin; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Inconsistency between PgAdmin III GUI and SQL window
?

On 10/21/2014 03:33 PM, Daniel Begin wrote:
> I have first to admit the inconsistency is probably on my side!-)
>
> The task - I want to clone a table I created in public schema in 
> another schema (xxx) of the same database.
> The problem - I get an error message when creating the table using the 
> original SQL script: ERROR: type "geography" does not exist.
>
> I understand from
> "http://stackoverflow.com/questions/9067335/how-to-create-table-inside
> -speci fic-schema-by-default-in-postgres" that the original script 
> should work if I set the search path to the destination schema (xxx) 
> prior to execute the script (set search_path to xxx ;) but the PgAdmin 
> III SQL window does not seem aware of the geography type.
>
> What I do not understand is that using the GUI (contextual menu, new
> table, new column) within schema xxx, I can create the table as expected,
> and the geography type is recognized.  Any idea about what is going on
> when using the SQL window?

A quick test here showed that in the GUI New Object(table) wizard the types
are schema qualified if needed. For instance I did:

CREATE TYPE public.test_type AS (my_int  int);

When I went to another schema and created a table and then a column, the
test_type was shown as public.test_type and I could use it as the column
type. The table was created with no problem. So pgAdmin searches the schemas
for you to retrieve types when you use the creation wizard.

When you are using the SQL window it has no context other than what is
provided by the search_path. If the search_path does not include the schema
that holds the type you want, then you will get an error.
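A minimal sketch of both workarounds from the SQL window (schema, table and column names are placeholders; this assumes PostGIS was installed into the public schema, as in Daniel's case):

```sql
-- Make xxx the default schema for new objects, while keeping public
-- (where the PostGIS geography type lives) visible for type lookup:
SET search_path TO xxx, public;
CREATE TABLE mytable (id bigint, geom geography);

-- Or skip search_path changes entirely and schema-qualify both the
-- table and the type:
CREATE TABLE xxx.mytable2 (id bigint, geom public.geography);
```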

Probably the best way to see this is to tail the Postgres log file, with
log_statement = 'mod' or 'all' set in postgresql.conf.

>
> Daniel
>
>
>


--
Adrian Klaver
adrian.kla...@aklaver.com


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general





[GENERAL] DBlink, postgres to DB2

2014-10-22 Thread Postgres India
Hi,
I am trying to connect to DB2 from Postgres using dblink. Is there any
configuration required on the DB2 and Postgres servers?

If there is a command string for this, please provide it.



Thanks
Manmohan. K


Re: [GENERAL] DBlink, postgres to DB2

2014-10-22 Thread Rémi Cura
Hey
http://www.postgresql.org/docs/9.3/static/contrib-dblink-connect.html
It seems dblink is only for Postgres-to-Postgres connections.
Cheers,
Rémi-C

2014-10-22 12:45 GMT+02:00 Postgres India :

> Hi,
> I am trying to connect DB2 from postgres using dblink, is there any
> configuration required at DB2 and postgres server.
>
> If any command string please provide it.
>
>
>
> Thanks
> Manmohan. K
>
>


Re: [GENERAL] DBlink, postgres to DB2

2014-10-22 Thread Rémi Cura
You may want to use foreign data wrapper (fdw)

https://wiki.postgresql.org/wiki/Foreign_data_wrappers

Cheers,
Rémi-C

2014-10-22 12:50 GMT+02:00 Rémi Cura :

> Hey
> http://www.postgresql.org/docs/9.3/static/contrib-dblink-connect.html
> Seems dblink is for postgres to postgres connections.
> Cheers,
> Rémi-C
>
> 2014-10-22 12:45 GMT+02:00 Postgres India :
>
>> Hi,
>> I am trying to connect DB2 from postgres using dblink, is there any
>> configuration required at DB2 and postgres server.
>>
>> If any command string please provide it.
>>
>>
>>
>> Thanks
>> Manmohan. K
>>
>>
>


Re: [GENERAL] DBlink, postgres to DB2

2014-10-22 Thread Pavel Stehule
Hi

Try the Perl implementation https://github.com/davidfetter/DBI-Link

There is a DBD driver for DB2: http://search.cpan.org/dist/DBD-DB2/DB2.pod

Or you can use the previously mentioned foreign data wrapper approach. There
is an ODBC wrapper, https://wiki.postgresql.org/wiki/Foreign_data_wrappers#odbc_fdw,
and you can access DB2 via the generic DB2 ODBC driver.
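For orientation only, here is a rough sketch of what an odbc_fdw setup might look like. The server, DSN, user, and table names are hypothetical, and the option names (dsn, odbc_UID, odbc_PWD, schema, table) are assumptions taken from one version of odbc_fdw; check the README of the version you install, as the options have changed over time.

```sql
-- Assumes the odbc_fdw extension is built/installed and a system ODBC
-- DSN named "DB2DSN" already points at the DB2 database.
CREATE EXTENSION odbc_fdw;

CREATE SERVER db2_server
  FOREIGN DATA WRAPPER odbc_fdw
  OPTIONS (dsn 'DB2DSN');

CREATE USER MAPPING FOR CURRENT_USER
  SERVER db2_server
  OPTIONS (odbc_UID 'db2user', odbc_PWD 'secret');

-- Map a DB2 table into Postgres; column types must be compatible
-- with the DB2 side.
CREATE FOREIGN TABLE db2_employees (
  id   integer,
  name varchar(100)
) SERVER db2_server
  OPTIONS (schema 'MYSCHEMA', table 'EMPLOYEES');

SELECT * FROM db2_employees LIMIT 10;
```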

Regards

Pavel



2014-10-22 12:45 GMT+02:00 Postgres India :

> Hi,
> I am trying to connect DB2 from postgres using dblink, is there any
> configuration required at DB2 and postgres server.
>
> If any command string please provide it.
>
>
>
> Thanks
> Manmohan. K
>
>


Re: [GENERAL] DBlink, postgres to DB2

2014-10-22 Thread Merlin Moncure
On Wed, Oct 22, 2014 at 5:56 AM, Pavel Stehule  wrote:
> Hi
>
> try to use a Perl implementation https://github.com/davidfetter/DBI-Link
>
> There is a DBD driver for DB2 http://search.cpan.org/dist/DBD-DB2/DB2.pod
>
> Or you can use a mentioned fdw wrapper - there is ODBC wrapper
> https://wiki.postgresql.org/wiki/Foreign_data_wrappers#odbc_fdw and you can
> access DB2 via generic ODBC DB2 driver

jdbc-fdw should work too.

https://github.com/atris/JDBC_FDW

merlin




Re: [GENERAL] Inconsistency between PgAdmin III GUI and SQL window ?

2014-10-22 Thread Adrian Klaver

On 10/22/2014 03:25 AM, Daniel Begin wrote:

David, Adrian,
I am new to databases and since PgAdmin displays Catalogs, Event Triggers,
Extensions and Schema as "Child" of the database, I assumed that Extensions
were linked to a specific database, not to a specific schema.


Yes and no. If they are installed in the pg_catalog schema then they are 
available database wide:


http://www.postgresql.org/docs/9.3/static/ddl-schemas.html

In addition to public and user-created schemas, each database contains a 
pg_catalog schema, which contains the system tables and all the built-in 
data types, functions, and operators. pg_catalog is always effectively 
part of the search path. If it is not named explicitly in the path then 
it is implicitly searched before searching the path's schemas. This 
ensures that built-in names will always be findable. However, you can 
explicitly place pg_catalog at the end of your search path if you prefer 
to have user-defined names override built-in names.


This is where you will find the procedural languages. So in psql you can 
do the below to see what is installed and where:


test=# \dx
                        List of installed extensions
   Name    | Version |   Schema   |                    Description
-----------+---------+------------+---------------------------------------------------
 hstore    | 1.2     | public     | data type for storing sets of (key, value) pairs
 plpgsql   | 1.0     | pg_catalog | PL/pgSQL procedural language
 tablefunc | 1.0     | public     | functions that manipulate whole tables, including crosstab
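As a small illustration of the search-path point quoted from the docs above (schema names here are placeholders):

```sql
-- Default behaviour: pg_catalog is implicitly searched first, so
-- built-in names always resolve.
SET search_path TO myschema, public;

-- Explicitly placing pg_catalog last lets a user-defined name in
-- myschema or public shadow a built-in name of the same spelling.
SET search_path TO myschema, public, pg_catalog;
```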






After reading your answers, I had another look at PostGIS extension
properties and it is pretty clear it belongs to the public schema, which
explains the error message I got.

Thanks
Daniel




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] To increase RAM or not

2014-10-22 Thread Emanuel Calvo



On 10/22/14 02:37, Nikhil Daddikar wrote:
> Folks,
> 
> I have set about 12GB RAM (shared buffers) for our Postgresql 
> instance. How do I know if this is actually being used? And is 
> there a way to know by how much I should increase it, if it is not 
> enough?
> 
> Thanks.
> 

That's a lot if you are using the latest version.

There are several links and blog posts on this; here are some
I found quickly:
http://rhaas.blogspot.com.es/2012/03/tuning-sharedbuffers-and-walbuffers.html

http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/

Try to filter out the older ones if you don't want to get tangled up
with obsolete configuration variables like max_fsm_pages, etc.



--
Emanuel Calvo  http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Bs. As., Argentina (GMT-3)




Re: [GENERAL] To increase RAM or not

2014-10-22 Thread Igor Neyman


-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Emanuel Calvo
Sent: Wednesday, October 22, 2014 10:39 AM
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] To increase RAM or not




On 10/22/14 02:37, Nikhil Daddikar wrote:
> Folks,
> 
> I have set about 12GB RAM (shared buffers) for our Postgresql 
> instance. How do I know if this is actually being used? And is there a 
> way to know by how much should I increase it, if it is not enough?
> 
> Thanks.
> 

That's a lot if you are using the latest version.

There are several links and blogs related on this, here you have some I found 
quickly:
http://rhaas.blogspot.com.es/2012/03/tuning-sharedbuffers-and-walbuffers.html

http://www.depesz.com/2012/06/09/how-much-ram-is-postgresql-using/

Try to filter those which are quite old if you don't want to mess up with 
configuration variables like max_fsm_pages, etc.





Here are a couple of great recent posts by Keith Fiske that help answer your 
question:

http://www.keithf4.com/a-small-database-does-not-mean-small-shared_buffers/

http://www.keithf4.com/a-large-database-does-not-mean-large-shared_buffers/#comment-945


Regards,
Igor Neyman




[GENERAL] Index scan vs indexonly scan method

2014-10-22 Thread Enrico Pirozzi

Hi,
I was working on this simple query

select field1 from table A
where A.field2 <= some_value
order by 1 desc limit some_value

and I saw that:

1) the planner on this query uses an index only scan method:

select field1 from table A
where A.field2 <= '2014-08-13 10:20:59.99648+02'
order by 1 desc limit 100

2) the planner on this query uses a classic index scan method:

select field1 from table A
where A.field2 <= '2014-08-13 10:20:59.99648+02'
order by 1 desc limit 1

the only difference between the two queries is the limit clause,
for the first query the limit is 100 and for the second the limit is 1

It seems a little strange... can someone help me understand why?

My develop PostgreSQL version is a 9.4 beta

regards,
Enrico

--
That's one small step for man; one giant leap for mankind

Enrico Pirozzi
Tel.  +39 0861 1855771 - Mob.+39 328 4164437 - Fax  +39 0861 1850310
http://www.pgtraining.com  -
i...@pgtraining.com
www.enricopirozzi.info - i...@enricopirozzi.info
PEC: enrico.piro...@pec.it
Skype sscotty71 - Gtalk sscott...@gmail.com





Re: [GENERAL] Index scan vs indexonly scan method

2014-10-22 Thread Adrian Klaver

On 10/22/2014 08:18 AM, Enrico Pirozzi wrote:

Hi,
I was working on this simple query

select field1 from table A
where A.field2 <= some_value
order by 1 desc limit some_value

and I saw that:

1) the planner on this query uses an index only scan method:

select field1 from table A
where A.field2 <= '2014-08-13 10:20:59.99648+02'
order by 1 desc limit 100

2) the planner on this query uses a classic index scan method:

select field1 from table A
where A.field2 <= '2014-08-13 10:20:59.99648+02'
order by 1 desc limit 1

the only difference between the two queries is the limit clause,
for the first query the limit is 100 and for the second the limit is 1

it seems a little bit strange...someone can help me to understand why?


The background on index-only scans:

https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.2#Index-only_scans

In either case (index, index-only) the index has to be scanned. The 
difference is where the data is pulled from. In the index-only scan the 
query still needs to consult the visibility map to determine whether the 
tuple pointed to by the index entry is visible. I would say that in the 
limit 1 case the planner determines it is just as easy to check and pull 
the data from the actual tuple as to check the visibility map. In the 
limit 100 case it becomes more cost effective to use the visibility map 
and pull data directly from the index data.
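One way to see which scan method the planner picked for each limit is to compare the plans directly. The table, column, and index names below are placeholders mirroring the thread's example; the composite index covers both columns so an index-only scan is at least possible:

```sql
-- Hypothetical schema for the comparison.
CREATE TABLE a (field1 timestamptz, field2 timestamptz);
CREATE INDEX a_f1_f2_idx ON a (field1, field2);

-- Compare the chosen plan for each limit:
EXPLAIN (ANALYZE, BUFFERS)
SELECT field1 FROM a
WHERE field2 <= '2014-08-13 10:20:59.99648+02'
ORDER BY 1 DESC LIMIT 100;

EXPLAIN (ANALYZE, BUFFERS)
SELECT field1 FROM a
WHERE field2 <= '2014-08-13 10:20:59.99648+02'
ORDER BY 1 DESC LIMIT 1;
```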





My develop PostgreSQL version is a 9.4 beta

regards,
Enrico




--
Adrian Klaver
adrian.kla...@aklaver.com




Re: [GENERAL] Index scan vs indexonly scan method

2014-10-22 Thread Enrico Pirozzi



The background on index-only scans:

https://wiki.postgresql.org/wiki/What%27s_new_in_PostgreSQL_9.2#Index-only_scans 



In either case(index, index-only) the index has to be scanned. The 
difference is where the data is pulled from. In the index-only scan 
the query still needs to consult the visibility map to determine 
whether the tuple pointed to by the index entry is visible. I would 
say that in the limit 1 case the planner determines it is just as easy 
to check and pull the data from the actual tuple as to check the 
visibility map. In the limit 100 case it becomes more cost effective 
to use the visibility map and pull data directly from the index data.




Thank you for your answer

Enrico


--
That's one small step for man; one giant leap for mankind

Enrico Pirozzi
Tel.  +39 0861 1855771 - Mob.+39 328 4164437 - Fax  +39 0861 1850310
http://www.pgtraining.com  -
i...@pgtraining.com
www.enricopirozzi.info - i...@enricopirozzi.info
PEC: enrico.piro...@pec.it
Skype sscotty71 - Gtalk sscott...@gmail.com





Re: [GENERAL] Function runtime increases after 5 calls in the same session.

2014-10-22 Thread Ilya Ashchepkov
Tom Lane wrote:
> Marti Raudsepp  writes:
>> On Tue, Oct 21, 2014 at 12:53 PM, Ilya I. Ashchepkov  
>> wrote:
>>> I wrote a function and during testing it I came across the strange
>>> behaviour.
>>> Function runtime is about 200ms first 5 times, 6th and futher calls takes
>>> ~22000 ms.
>>> I simplified my schema, you can see it in attached file.
>>> I've tested on 9.4beta3 and 9.3.5.
> 
>> I didn't look at the test case. But most likely the problem is that
>> after 5 executions, plancache decides that a generic plan is no more
>> expensive than a custom plan, and reverts to executing the generic
>> one. But in reality there is a big difference in execution time.
> 
> Yeah.  Ilya didn't show the exact case he was testing, but I suppose
> it was one where the timestamptz range covered the whole table (since
> the test script created a table with a mighty tiny range of date values,
> it would have taken some effort to do otherwise).  Anyway I tried it with
> this case:
>   select test.fun (1, 'yesterday', 'tomorrow');
> and indeed reproduced a big slowdown in later executions.
> 
> You can examine what the planner is doing with prepared statements,
> such as ones in plpgsql functions, via PREPARE/EXECUTE:
> 
> regression=# prepare foo(int, timestamptz, timestamptz) as
> select sum(V1.data + V2.data)
>   from test.view V1
>   inner join test.view V2 using(id)
>   where
>   V1.object_id=$1
>   and V1.time between $2 and $3
>   and V2.time between $2 and $3;
> PREPARE
> 
> The first time you try it you get:
> 
> regression=# explain execute foo (1, 'yesterday', 'tomorrow');
>   
>  QUERY PLAN   
> 
> 
>  Aggregate  (cost=7037.25..7037.26 rows=1 width=8)
>->  Merge Join  (cost=5909.71..6759.53 rows=55544 width=8)
>  Merge Cond: (v1.id = v2.id)
>  ->  Sort  (cost=2967.35..2975.69 rows= width=8)
>Sort Key: v1.id
>->  Subquery Scan on v1  (cost=0.00..2772.33 rows= width=8)
>  ->  Seq Scan on data  (cost=0.00..2739.00 rows= 
> width=8)
>Filter: (("time" >= '2014-10-20 
> 00:00:00-04'::timestamp with time zone) AND ("time" <= '2014-10-22 
> 00:00:00-04'::timestamp with time zone) AND (object_id = 1) AND 
> test.can_read(object_id))
>  ->  Sort  (cost=2942.35..2950.69 rows= width=8)
>Sort Key: v2.id
>->  Subquery Scan on v2  (cost=0.00..2747.33 rows= width=8)
>  ->  Seq Scan on data data_1  (cost=0.00..2714.00 
> rows= width=8)
>Filter: (("time" >= '2014-10-20 
> 00:00:00-04'::timestamp with time zone) AND ("time" <= '2014-10-22 
> 00:00:00-04'::timestamp with time zone) AND test.can_read(object_id))
> (13 rows)
> 
> After repeating that a few times it switches to:
> 
> regression=# explain execute foo (1, 'yesterday', 'tomorrow');
>   QUERY PLAN  
> 
> --
>  Aggregate  (cost=428.63..428.64 rows=1 width=8)
>->  Nested Loop  (cost=5.20..428.54 rows=17 width=8)
>  Join Filter: (data.id = v2.id)
>  ->  Bitmap Heap Scan on data  (cost=4.91..81.29 rows=17 width=8)
>Recheck Cond: ((object_id = $1) AND ("time" >= $2) AND ("time" 
> <= $3))
>Filter: test.can_read(object_id)
>->  Bitmap Index Scan on data_object_id_time_idx  
> (cost=0.00..4.91 rows=50 width=0)
>  Index Cond: ((object_id = $1) AND ("time" >= $2) AND 
> ("time" <= $3))
>  ->  Materialize  (cost=0.29..342.79 rows=17 width=8)
>->  Subquery Scan on v2  (cost=0.29..342.71 rows=17 width=8)
>  ->  Index Scan using data_object_id_time_idx on data 
> data_1  (cost=0.29..342.54 rows=17 width=8)
>Index Cond: (("time" >= $2) AND ("time" <= $3))
>Filter: test.can_read(object_id)
> (13 rows)
> 
> Notice the plan now contains parameter markers $n instead of literal
> constants; this is because this is a generic plan.  The problem is
> that the planner has estimated this case a lot cheaper than it did
> when it saw the actual parameter values (and could determine that the
> query was going to have to scan the entire table).  Without the actual
> parameter values, it has to fall back on rules of thumb to estimate
> how much of the table will be

Re: [GENERAL] Index scan vs indexonly scan method

2014-10-22 Thread Jeff Janes
On Wed, Oct 22, 2014 at 8:18 AM, Enrico Pirozzi  wrote:

> Hi,
> I was working on this simple query
>
> select field1 from table A
> where A.field2 <= some_value
> order by 1 desc limit some_value
>
> and I saw that:
>
> 1) the planner on this query uses an index only scan method:
>
> select field1 from table A
> where A.field2 <= '2014-08-13 10:20:59.99648+02'
> order by 1 desc limit 100
>
> 2) the planner on this query uses a classic index scan method:
>
> select field1 from table A
> where A.field2 <= '2014-08-13 10:20:59.99648+02'
> order by 1 desc limit 1
>
> the only difference between the two queries is the limit clause,
> for the first query the limit is 100 and for the second the limit is 1
>
> it seems a little bit strange...someone can help me to understand why?
>

Yes, that is strange.  Are they using scans over the same index?

PostgreSQL never demotes an index-only scan to a regular scan just because it
might not be worthwhile to do it in index-only mode.  If it uses a scan on
an index which it recognizes as being eligible for index-only, it will use
it as index-only.

Without seeing the actual EXPLAIN output, it is hard to say more.

Cheers,

Jeff


Re: [GENERAL] DBlink, postgres to DB2

2014-10-22 Thread Postgres India
Thanks a lot Remi, Merlin and Pavel...I will give it a try.

On Wed, Oct 22, 2014 at 6:36 AM, Merlin Moncure  wrote:

> On Wed, Oct 22, 2014 at 5:56 AM, Pavel Stehule 
> wrote:
> > Hi
> >
> > try to use a Perl implementation https://github.com/davidfetter/DBI-Link
> >
> > There is a DBD driver for DB2
> http://search.cpan.org/dist/DBD-DB2/DB2.pod
> >
> > Or you can use a mentioned fdw wrapper - there is ODBC wrapper
> > https://wiki.postgresql.org/wiki/Foreign_data_wrappers#odbc_fdw and you
> can
> > access DB2 via generic ODBC DB2 driver
>
> jdbc-fdw should work too.
>
> https://github.com/atris/JDBC_FDW
>
> merlin
>