Re: [GENERAL] OLAP

2013-08-27 Thread Jayadevan
Alban Hertroys wrote
> How is Pentaho an OLAP tool? Aren't you mixing up a few things?
> We already use Pentaho for ETL, so I'm a bit familiar with it. Why do you
> consider it suitable for managing an OLAP database?
> 
> How would Pentaho manage cube rollup triggers, business models, dimensions
> and stuff like that?
> We don't want to hand code those, that's far too error-prone and far too
> much work to keep track of. That stuff needs to be automated, preferably
> similar to what we're used to from Gentia (well, not me - I can't make
> heads or tails of Gentia, but the person who asked me about PG's
> suitability has been developing with it for years). That's what we're
> comparing to.

Pentaho, Jasper and Actuate are the common OLAP tools. Pentaho's ETL tool
(Kettle) is more popular than its reporting solution; conversely, Jasper's
reporting tool is more popular than its ETL (they integrate Talend). The
OLAP functionality in both comes from Mondrian, as far as I know.
The cubes, dimensions etc. can be defined using a UI tool and uploaded to the
reporting "server" - either Jasper or Pentaho. These typically support MDX,
in addition to standard SQL.
http://en.wikipedia.org/wiki/MultiDimensional_eXpressions





Re: [GENERAL] Query - CPU issue

2013-09-18 Thread Jayadevan
Kevin Grittner wrote
> What is the longest you have let it run, in hours or minutes?

I let it run for about 10 minutes and killed it.

Kevin Grittner wrote
> By the way, IMMUTABLE has to be wrong here, since the results
> depend on the state of the database.  STABLE is likely the right
> designation.

The data will not change unless I do a reload of the data set manually. In
that case, is IMMUTABLE wrong?
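For reference, a minimal sketch of the change Kevin suggests - the bigint
signature is an assumption based on the plan below:

-- STABLE: the result may change between statements (e.g. after a manual
-- data reload) but not within one statement, which matches this case
ALTER FUNCTION geoip_city(bigint) STABLE;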
Here is the EXPLAIN (without ANALYZE) output:
explain 
select  distinct geoip_city(src_ip) , src_ip
from alert where timestamp>=1378512000 and timestamp < 1378598400;
                                 QUERY PLAN
--------------------------------------------------------------------------
 Unique  (cost=3815.75..3894.61 rows=39 width=8)
   ->  Sort  (cost=3815.75..3842.04 rows=10515 width=8)
 Sort Key: (geoip_city(src_ip)), src_ip
 ->  Index Scan using idx_alert_ts on alert  (cost=0.29..3113.34
rows=10515 width=8)
   Index Cond: (("timestamp" >= 1378512000) AND ("timestamp" <
1378598400))
(5 rows)







Re: [GENERAL] Query - CPU issue

2013-09-18 Thread Jayadevan
Kevin Grittner wrote
> We already had this.  I was asking for you to get EXPLAIN ANALYZE
> output for a run of the SELECT statement inside the geoip_city()
> function.

"Merge Join  (cost=9268.34..26193.41 rows=6282 width=24) (actual
time=892.188..892.190 rows=1 loops=1)"
"  Merge Cond: (l.id = b.location_id)"
"  ->  Index Scan using locations_pkey on locations l  (cost=0.42..15739.22
rows=438386 width=24) (actual time=0.022..865.025 rows=336605 loops=1)"
"  ->  Sort  (cost=9267.84..9283.54 rows=6282 width=8) (actual
time=1.329..1.330 rows=1 loops=1)"
"Sort Key: b.location_id"
"Sort Method: quicksort  Memory: 25kB"
"->  Index Scan using ix_end_start_ip on blocks b 
(cost=0.43..8871.54 rows=6282 width=8) (actual time=0.573..1.268 rows=1
loops=1)"
"  Index Cond: ((3721196957::bigint <= end_ip) AND
(3721196957::bigint >= start_ip))"
"Total runtime: 892.439 ms"






Re: [GENERAL] Query - CPU issue

2013-09-18 Thread Jayadevan
Rewriting the query in the function like this solved the issue (this is how
it was in the SQL at
https://github.com/tvondra/geoip/blob/master/sql/geoip--0.1.0.sql)

explain analyze
SELECT l.id, l.country, l.region, l.city  FROM blocks b JOIN locations l ON
(b.location_id = l.id)
 WHERE 3721196957 >= start_ip   ORDER BY start_ip DESC LIMIT 1

"Limit  (cost=0.85..2.09 rows=1 width=32) (actual time=0.015..0.015 rows=1
loops=1)"
"  ->  Nested Loop  (cost=0.85..2216242.97 rows=1784157 width=32) (actual
time=0.014..0.014 rows=1 loops=1)"
"->  Index Scan Backward using ix_start_end_ip on blocks b 
(cost=0.43..934027.92 rows=1784157 width=16) (actual time=0.008..0.008
rows=1 loops=1)"
"  Index Cond: (3721196957::bigint >= start_ip)"
"->  Index Scan using locations_pkey on locations l 
(cost=0.42..0.71 rows=1 width=24) (actual time=0.003..0.003 rows=1 loops=1)"
"  Index Cond: (id = b.location_id)"
"Total runtime: 0.039 ms"






Re: [GENERAL] Query - CPU issue

2013-09-18 Thread Jayadevan
Thank you for the pointers. I will try those suggestions. As I mentioned in
my other mail, rewriting the query solved the problem for now.
Regards,
Jayadevan


On Thu, Sep 19, 2013 at 11:40 AM, Kevin Grittner wrote:

> Jayadevan wrote:
>
> > "Merge Join  (cost=9268.34..26193.41 rows=6282 width=24) (actual
> time=892.188..892.190 rows=1 loops=1)"
> > "  Merge Cond: (l.id = b.location_id)"
> > "  ->  Index Scan using locations_pkey on locations l
> (cost=0.42..15739.22 rows=438386 width=24) (actual time=0.022..865.025
> rows=336605 loops=1)"
> > "  ->  Sort  (cost=9267.84..9283.54 rows=6282 width=8) (actual
> time=1.329..1.330 rows=1 loops=1)"
> > "Sort Key: b.location_id"
> > "Sort Method: quicksort  Memory: 25kB"
> > "->  Index Scan using ix_end_start_ip on blocks b
> (cost=0.43..8871.54 rows=6282 width=8) (actual time=0.573..1.268 rows=1
> loops=1)"
> > "  Index Cond: ((3721196957::bigint <= end_ip) AND
> (3721196957::bigint >= start_ip))"
> > "Total runtime: 892.439 ms"
>
> This is the query which needs to be optimized.  When I multiply the
> runtime of this function's query by the estimated number of
> function calls, I get 2.6 hours.
>
> Copying the query from the first email on the thread:
>
> SELECT l.id || l.country || l.region || l.city
>   FROM blocks b
>   JOIN locations l ON (b.location_id = l.id)
>   WHERE $1 >= start_ip
> and $1 <= end_ip
>   limit 1;
>
> Can you provide the table definitions for blocks and locations,
> including indexes?  Also, could you tell us what the OS is, how
> much RAM is on the system, what the storage system looks like, and
> provide the output from running this?:
>
> SELECT version();
> SELECT name, current_setting(name), source
>   FROM pg_settings
>   WHERE source NOT IN ('default', 'override');
>
> You might also try running EXPLAIN ANALYZE for this query after
> running these statements on the connection, and see if you get a
> different plan:
>
> VACUUM ANALYZE blocks;
> VACUUM ANALYZE locations;
> SET cpu_tuple_cost = 0.03;
> SET random_page_cost = 1;
> SET effective_cache_size = <75% of machine RAM>
> SET work_mem = <25% of machine RAM / max_connections>
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>
>





Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-24 Thread Jayadevan
Jeff Janes wrote
> I restore from my base backup plus WAL quite often.  It is how I get a
> fresh dev or test instance when I want one.  (It is also how I have
> confidence that everything is working well and that I know what I'm doing
> should the time come to do a real restore).  When that starts to take an
> annoyingly long time, I run a new base backup.  How often that is, can be
> anywhere from days to months, depending on what's going on in the
> database.
> 
> Cheers,
> 
> Jeff

That makes sense. So we take a new base backup once we feel recovery from
the current one may take too long. Thanks.
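For reference, a sketch of taking a fresh base backup with the low-level
SQL API (pg_basebackup is the usual shortcut; the label is arbitrary):

-- exclusive base backup, pre-9.6 style
SELECT pg_start_backup('fresh_base');
-- ... copy $PGDATA with tar/rsync, excluding pg_xlog ...
SELECT pg_stop_backup();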






Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-24 Thread Jayadevan
Alan Hodgson wrote
> That's basically what warm standby's do, isn't it? As long as they keep 
> recovery open it should work.

A warm standby will be almost in sync with the primary, right? So recovery
to a point in time (like 10 AM this morning) won't be possible. We need a
base backup, but it shouldn't be so old that it takes hours to catch up -
that was my thought. As John mentioned, the WAL/transaction numbers, time to
recover etc. need to be looked at.





Re: [GENERAL] PostgreSQL Point In Time Recovery

2013-10-25 Thread Jayadevan
Alan Hodgson wrote
> Well, yeah. The point was that you possibly could run it for a while to
> "catch 
> up" without taking a new base backup if you desired. You should also keep 
> copies of it for PITR.

Something like delayed replication might help - I could say lag by 12
hours, or by some number of transactions...
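For what it's worth, PostgreSQL 9.4 later added exactly this knob; a sketch
of the recovery.conf entries on a delayed standby (connection details are
placeholders):

standby_mode = 'on'
primary_conninfo = 'host=primary port=5432 user=replicator'
recovery_min_apply_delay = '12h'   # hold off applying received WAL for 12 hours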





Re: [GENERAL] Connection pooling

2013-10-31 Thread Jayadevan
I have never used pgbouncer myself, but my guess is that you have to look at
the timeout parameters in the configuration file.





Re: [GENERAL] Explanantion on pgbouncer please

2013-11-04 Thread Jayadevan
Why don't you try adding layers one by one?
1) Ensure you can connect to PostgreSQL from a psql client (on another
machine)
2) Configure pgbouncer (a minimal config sketch follows below)
3) See if you can connect from psql > pgbouncer > PostgreSQL
Check the data in pgbouncer and PostgreSQL (number of sessions, idle
connections etc.). You could also try the effect of the timeout parameters in
pgbouncer.
If the above works fine, try connecting from tomcat/map server/whatever.
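A minimal pgbouncer.ini sketch for step 2 (database name, port and paths
are assumptions):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = session
server_idle_timeout = 600   ; seconds before an idle server connection is closed

Step 3 is then just: psql -h 127.0.0.1 -p 6432 -U someuser mydb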






Re: [GENERAL] Theory question

2013-11-12 Thread Jayadevan
Kevin Grittner wrote
> The checkpointer process is responsible for creating safe points
> from which a recovery can begin; the background writer tries to
> keep some pages available for re-use so that processes running
> queries don't need to wait for page writes  in order to have free
> spots to use in shared buffers.

Thank you. Do both of them write to the same files? Is it that the
checkpointer writes only committed data, whereas the background writer may
also write uncommitted data if there is no space available in the buffers?






Re: [GENERAL] Theory question

2013-11-13 Thread Jayadevan
Jeff Janes wrote
> No.  The checkpointer writes all data that was dirty as of a certain time
> (the start of the checkpoint) regardless of how often it was used since
> dirtied, and the background writer writes data that hasn't been used
> recently, regardless of when it was first dirtied.  Neither knows or cares
> whether the data being written was committed, rolled back, or still in
> progress.

Thank you. So the checkpointer writes "all dirty data" while the background
writer writes "some dirty data", chosen by some (clock-sweep?) algorithm.
Correct?
From this discussion -
http://postgresql.1045698.n5.nabble.com/Separating-bgwriter-and-checkpointer-td4808791.html
- the bgwriter has some 'other duties'. Probably those involve marking the
buffers - when they were last used, how frequently etc.?
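Incidentally, the relative contributions of the two are visible in a
standard statistics view; a quick sketch:

-- pg_stat_bgwriter separates pages written at checkpoints, pages written
-- by the background writer ("clean"), and pages backends wrote themselves
SELECT checkpoints_timed, checkpoints_req,
       buffers_checkpoint, buffers_clean, buffers_backend
FROM pg_stat_bgwriter;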





Re: [GENERAL] log_line_prefix

2013-11-16 Thread Jayadevan
hubert depesz lubaczewski wrote
> It looks like you're using csvlog. log_line_prefix is used only for
> stderr/syslog logging.

Yes, that is right. Thank you.






Re: [GENERAL] vacuuming - doubt

2013-12-11 Thread Jayadevan
Scott Marlowe wrote
> 30 second vacuum lesson:

Thank you.






Re: [GENERAL] authentication failure

2014-01-05 Thread Jayadevan
Sameer Kumar wrote
> This only tells that there is one instance running!
> 
> There could be multiple PostgreSQL installations. And I guess that is what
> Tom meant here.

I doubt that was what Tom meant. Anyway, we can see from the error that the
request did reach the server.


Sameer Kumar wrote
> Can you try
> $ which psql
> $ which pg_ctl
> 
> Can you set your $PGUSER variable to postgres and then try?
> 
> $ export PGUSER=postgres

It is already set. How would the particular psql binary being used have an impact?






Re: [GENERAL] How to convert ByteA to Large Objects

2011-09-15 Thread Jayadevan
Thank you. We are working on an Oracle to PostgreSQL migration project.
BLOB columns got converted to BYTEA in PostgreSQL and we ran into problems,
so we used this approach to convert the data type to OID.



[GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Jayadevan M
Hello ,

I have a parent table and 6 child tables (partitions). The child tables
have check constraints defined in the form
CHECK (myuid in (123,456,..)).
myuid is bigint, the constraints for the 6 child tables are definitely
mutually exclusive. The number of values in the list ranges from 2-10 for 5
of the child tables. For the 6th child table, the list is 2500+ elements.
When I try explain/explain analyze for even a simple query like

select * from parent where myuid in (123,456,789)

the child table with 2500+ elements always gets scanned. I have an index on
the column and that does get used. But why doesn't the planner just use
constraint exclusion and skip that partition instead of doing the index
scan? Has anyone faced a similar issue?

Thanks,
Jayadevan


Re: [GENERAL] Partitioning and constraint exclusion

2015-09-07 Thread Jayadevan M
On Mon, Sep 7, 2015 at 7:12 PM, Melvin Davidson 
wrote:

> First, what is the PostgresSQL version ??
>

9.3.6

>
> Next, in postgresql.conf, what is the value of constraint_exclusion ?
>

partition

In response to the other possible issues pointed out - the planner is
indeed *skipping the rest of the child tables* (the ones with just a few
values in the check constraint). That is why I feel the number of elements
in the check constraint on this particular child table is causing it to be
scanned. The query ends up scanning the table where the data will be found
and the table with 2500+ values in the check constraint. I may be missing
something?
I tried changing the filter from myuid in (1,2,3) to myuid = 1 or myuid = 2
or ... It did not improve the plan; one Index Cond just became three.


Thanks,
Jayadevan


Re: [GENERAL] Partitioning and constraint exclusion

2015-09-08 Thread Jayadevan M
>
> I am not sure, but I doubt it is intelligent enough to recognize the
> functional expression even if all of the values are present.  "Simple
> equality"
> (http://www.postgresql.org/docs/9.4/interactive/ddl-partitioning.html)
> this is not.
>

Looks like tables with about 100+ values in the check list get pulled in,
even with constraint exclusion on. I created a simple test case: one parent
table with just one column, and 3 child tables with one column.
test=# \d+ parent
Table "public.parent"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 id | integer |   | plain   |  |
Child tables: child1,
  child2,
  child3

test=# \d+ child1
Table "public.child1"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 id | integer |   | plain   |  |
Check constraints:
"c" CHECK (id = ANY (ARRAY[1, 2]))
Inherits: parent

test=# \d+ child2
Table "public.child2"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 id | integer |   | plain   |  |
Check constraints:
"c" CHECK (id = ANY (ARRAY[3, 4]))
Inherits: parent

test=# \d+ child3
Table "public.child3"
 Column |  Type   | Modifiers | Storage | Stats target | Description
--------+---------+-----------+---------+--------------+-------------
 id | integer |   | plain   |  |
Check constraints:
"c3" CHECK (id = ANY (ARRAY[5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16,
17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54,
55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73,
74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92,
93, 94, 95, 96, 97, 98, 99, 100]))
Inherits: parent

test=# explain analyze select * from parent where id = 1;
                              QUERY PLAN
--------------------------------------------------------------------------
 Append  (cost=0.00..40.00 rows=13 width=4) (actual time=0.002..0.002
rows=0 loops=1)
   ->  Seq Scan on parent  (cost=0.00..0.00 rows=1 width=4) (actual
time=0.001..0.001 rows=0 loops=1)
 Filter: (id = 1)
   ->  Seq Scan on child1  (cost=0.00..40.00 rows=12 width=4) (actual
time=0.000..0.000 rows=0 loops=1)
 Filter: (id = 1)
 Total runtime: 0.029 ms

If I increase the number of values a bit more:
with t as (select generate_series(5,110) x) select 'alter table child3
add constraint c3 check (id in (' || string_agg(x::text,',') || '));'
from t;

test=# explain analyze select * from parent where id = 1;
                              QUERY PLAN
--------------------------------------------------------------------------
 Append  (cost=0.00..80.00 rows=25 width=4) (actual time=0.003..0.003
rows=0 loops=1)
   ->  Seq Scan on parent  (cost=0.00..0.00 rows=1 width=4) (actual
time=0.001..0.001 rows=0 loops=1)
 Filter: (id = 1)
   ->  Seq Scan on child1  (cost=0.00..40.00 rows=12 width=4) (actual
time=0.000..0.000 rows=0 loops=1)
 Filter: (id = 1)
   ->  Seq Scan on child3  (cost=0.00..40.00 rows=12 width=4) (actual
time=0.000..0.000 rows=0 loops=1)
 Filter: (id = 1)


Thanks,
Jayadevan

David J.
>
>


[GENERAL] truncate table getting blocked

2016-04-26 Thread Jayadevan M
Hello,

I have a Python script. It opens a cursor and sets the search_path (using
psycopg2). If something goes wrong in the script, a record is inserted into
a table. The script does nothing else with the database - it just reads a
file and publishes the lines to a queue; the database is used only to track
errors. But my SET search_path is blocking a TRUNCATE TABLE I am executing
from a psql session. Is this expected?

When the truncate table hung, I used this query
 SELECT blocked_locks.pid AS blocked_pid,
 blocked_activity.usename  AS blocked_user,
 blocking_locks.pid AS blocking_pid,
 blocking_activity.usename AS blocking_user,
 blocked_activity.queryAS blocked_statement,
 blocking_activity.query   AS current_statement_in_blocking_process
   FROM  pg_catalog.pg_locks blocked_locks
JOIN pg_catalog.pg_stat_activity blocked_activity  ON
blocked_activity.pid = blocked_locks.pid
JOIN pg_catalog.pg_locks blocking_locks
ON blocking_locks.locktype = blocked_locks.locktype
AND blocking_locks.DATABASE IS NOT DISTINCT FROM
blocked_locks.DATABASE
AND blocking_locks.relation IS NOT DISTINCT FROM
blocked_locks.relation
AND blocking_locks.page IS NOT DISTINCT FROM blocked_locks.page
AND blocking_locks.tuple IS NOT DISTINCT FROM blocked_locks.tuple
AND blocking_locks.virtualxid IS NOT DISTINCT FROM
blocked_locks.virtualxid
AND blocking_locks.transactionid IS NOT DISTINCT FROM
blocked_locks.transactionid
AND blocking_locks.classid IS NOT DISTINCT FROM
blocked_locks.classid
AND blocking_locks.objid IS NOT DISTINCT FROM blocked_locks.objid
AND blocking_locks.objsubid IS NOT DISTINCT FROM
blocked_locks.objsubid
AND blocking_locks.pid != blocked_locks.pid
   JOIN pg_catalog.pg_stat_activity blocking_activity ON
blocking_activity.pid = blocking_locks.pid
   WHERE NOT blocked_locks.GRANTED;

and got this (schema name/user/table name modified) -

blocked_pid | blocked_user | blocking_pid | blocking_user |
 blocked_statement | current_statement_in_blocking_process
-+--+--+---+--+---
9223 | myuser   |12861 | myuser  | truncate table
myschema.table1; | SET search_path TO  myschema,public


PG version :
PostgreSQL 9.4.5 on x86_64-suse-linux-gnu, compiled by gcc (SUSE Linux)
4.8.5, 64-bit


Thanks,
Jayadevan


Re: [GENERAL] truncate table getting blocked

2016-04-26 Thread Jayadevan M
On Tue, Apr 26, 2016 at 7:25 PM, Albe Laurenz 
wrote:

>
>
> It is not the "SET search_path" statement that is blocking the truncate,
> but probably some earlier statement issued in the same transaction.
>

You are right - I had a SELECT against that table earlier in the same
(implicitly opened) transaction. Adding this line fixed it:
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)

Reference -
http://initd.org/psycopg/docs/faq.html

Thanks,
Jayadevan


Re: [GENERAL] How to search ignoring spaces and minus signs

2010-10-14 Thread Jayadevan M
Hello,
> 
> For example searching for code 12344 should return
> 12 3-44  as matching item.
> 
> Andrus.
> 
Will this do?
postgres=# select name from myt;
    name
------------
 13-333-333
 12 3-44
 33 33 333
 12345
(4 rows)

postgres=# select * from myt where translate(translate(name,'-',''),' ','') = '1333';
    name
------------
 13-333-333
(1 row)

postgres=# select * from myt where translate(translate(name,'-',''),' ','') = '12344';
  name
---------
 12 3-44
(1 row)
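The same thing with a single regexp_replace, assuming only spaces and
hyphens need stripping:

postgres=# select * from myt where regexp_replace(name, '[- ]', '', 'g') = '12344';
  name
---------
 12 3-44
(1 row)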

Regards,
Jayadevan







Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-18 Thread Jayadevan M
Hello,

> > PostgreSQL 9.1 is likely to have, as a feature, the ability to create
> > tables which are "unlogged", meaning that they are not added to the
> > transaction log, and will be truncated (emptied) on database restart.
> > Such tables are intended for highly volatile, but not very valuable,
> > data, such as session statuses, application logs, etc.
> > 

One doubt - if the tables are 'unlogged', will DML against them still be
replicated to a slave?
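For context, a minimal sketch of the feature being discussed (table and
columns are made up):

CREATE UNLOGGED TABLE session_status (
    session_id text PRIMARY KEY,
    status     text,
    updated_at timestamp
);
-- changes to this table generate no WAL, and streaming replication
-- ships WAL, so there would be nothing to send for it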

Regards,
Jayadevan







Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-18 Thread Jayadevan M
Hi,

> > One doubt - if the tables are 'unlogged' , will the DMLs against these
> > still be replicated to a slave?
> >
> 
> Yes, because the system tables, which store the layout of all tables, are
> written to WAL.
Thank you for the reply. But my doubt was not about the layout, rather the
DML. If I do an insert into an 'unlogged' table, what happens to it?
Will it be replicated on the slave (using PostgreSQL's built-in
replication)?
Regards,
Jayadevan







Re: [GENERAL] Survey on backing up unlogged tables: help us with PostgreSQL development!

2010-11-19 Thread Jayadevan M
Hello,

> > Thank you for the reply. But my doubt was not about layout, rather the
> > DMLs. If I do an insert into an 'unlogged' table, what happens to that?
> > Will that be replicated in the slave (using PostgreSQL's inbuilt
> > replication)?
> 
> What are the use-cases for replicating unlogged tables?
> 
I do not have a use case for replicating unlogged tables. But I may use 
temp/unlogged tables in my master to populate actual tables. 

Say, I have a db archival/purge process. I populate temp tables with PKs 
of my permanent tables and use that to drive my insertion into history 
tables, deletion from live tables etc. 

Pseudocode:

insert into mytemptable (id)
  select mypk from livetable where lastuseddate < archivedate;

insert into myhist
  select a.* from livetable a join mytemptable on a.mypk = mytemptable.id;

delete from livetable where mypk in (select id from mytemptable);

Reading about what goes into WAL tells me that the permanent table data 
will be replicated all right even if the temp tables are not logged. Is 
that right? 

Regards,
Jayadevan







Re: [GENERAL] gap between error and cure

2010-11-22 Thread Jayadevan M
Hello,


> 
> I saw a reference to run a command "show hba_file" but there hasn't been any
> clue as to WHERE one runs that command.  Please hold my hand.  Which user
> (root or postgres) and which program?
Please try the command from psql.

postgres=# select user;
 current_user
--------------
 postgres
(1 row)

postgres=# show hba_file;
 hba_file
-----------------------------------
 /usr/local/pgsql/data/pg_hba.conf
(1 row)

Regards,
Jayadevan







Re: [GENERAL] Debug advice when postgres connection maxing out

2010-11-22 Thread Jayadevan M
Hello,
> 
> Server: OS X 10.5
> PostgreSQL version: 8.3
> 
> We experience this connection maxing out once in a blue moon.
> The request from the client reaches the server but the client never
> receives a response back.
> The queries are very simple - an update of one record or a select of one
> record using the primary key (checked current_query from pg_stat_activity).
> Once this starts, I normally disconnect all the clients (quit the client
> programs); however, the processes don't die on the postgres server.
Will this command help?
select pg_cancel_backend(pid);
Please see
http://www.postgresql.org/docs/9.0/static/functions-admin.html
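A sketch for finding candidates to cancel - on 8.3 the pg_stat_activity
columns are procpid and current_query (renamed to pid and query in 9.2):

SELECT procpid, usename, current_query
FROM pg_stat_activity
WHERE current_query <> '<IDLE>';
-- then pass the offending procpid to pg_cancel_backend()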

Regards,
Jayadevan







Re: [GENERAL] How to restore from backup to 8.4.3 server using 9.0 dump/restore

2010-12-16 Thread Jayadevan M
Hello,
> I don't know for sure, but I don't see why it should fail - it's only 
> reading it, not writing data to it or making any changes.
It will probably fail; from the documentation:
http://www.postgresql.org/docs/9.0/static/sql-createdatabase.html
Although it is possible to copy a database other than template1 by 
specifying its name as the template, this is not (yet) intended as a 
general-purpose "COPY DATABASE" facility. The principal limitation is that 
no other sessions can be connected to the template database while it is 
being copied. CREATE DATABASE will fail if any other connection exists 
when it starts;  
Regards,
Jayadevan







Re: [GENERAL] PostgreSQL documentation on kindle - best practices?

2011-04-04 Thread Jayadevan M
> 
> So my question: has anyone found a best practice solution to convert
> the PostgreSQL documentaiton into a kindle-friendly format? Or has
> even an .azw file downloadable somewhere?
> 
> Best wishes,
> 
> Harald
You could always send the PDF file and get it converted to Kindle format
free of cost; it is not a good idea to try to read PDF files on a Kindle.
You have to send the PDF file to your @free.kindle.com address.
Regards,
Jayadevan











[GENERAL] Query planning

2014-10-31 Thread Jayadevan M
Hi all,

I am going through PostgreSQL documentation on how row estimation is done
in PostgreSQL.
http://www.postgresql.org/docs/current/static/row-estimation-examples.html
There is a reference to cardinality (rel_cardinality) there. But that is an
estimate of the number of rows in the table, right?

Where is the estimate of the number of unique values in a column (column
cardinality) kept, and is it used while computing selectivity?
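For what it's worth, the per-column distinct-value estimates gathered by
ANALYZE are visible in pg_stats; a sketch (the table name is hypothetical):

SELECT tablename, attname, n_distinct
FROM pg_stats
WHERE tablename = 'mytable';
-- n_distinct > 0 is an absolute count; n_distinct < 0 is minus a fraction
-- of the row count (-1 means all values are distinct)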

Regards,
Jayadevan


[GENERAL] Streaming replication - slave not getting promoted

2015-01-04 Thread Jayadevan M
Hi,
I have streaming replication set up, with PostgreSQL 9.3. The entries in
recovery.conf on the slave are as follows -
standby_mode = 'on'
primary_conninfo = 'host=127.0.0.1 port=2345 user=postgres
password=password'
restore_command = 'cp /pgdata/archive/%f "%p"'
trigger_file = ‘/tmp/down.trg’

In postgresql.conf on the slave, I have set
hot_standby = on

Replication is working fine. The trigger file does get created by the
failover script when I shut down the primary. But I just keep getting these
entries in the log file in the slave and it does not get promoted.
FATAL:  could not connect to the primary server: could not connect to
server: Connection refused

Any suggestions?

One more doubt -

In https://wiki.postgresql.org/wiki/Streaming_Replication, it says
"
# Note that recovery.conf must be in $PGDATA directory.
# It should NOT be located in the same directory as postgresql.conf
"
postgresql.conf is in $PGDATA. So they will be in the same directory?

Regards,
Jayadevan


Re: [GENERAL] Streaming replication - slave not getting promoted

2015-01-04 Thread Jayadevan M
On Sun, Jan 4, 2015 at 8:01 PM, Adrian Klaver 
wrote:

> On 01/04/2015 06:09 AM, Jayadevan M wrote:
>
>> Hi,
>> I have streaming replication set up, with PostgreSQL 9.3. The entries in
>> recovery.conf on the slave are as follows -
>> standby_mode = 'on'
>> primary_conninfo = 'host=127.0.0.1 port=2345 user=postgres
>> password=password'
>> restore_command = 'cp /pgdata/archive/%f "%p"'
>> trigger_file = ‘/tmp/down.trg’
>>
>> In postgresql.conf on the slave, I have set
>> hot_standby = on
>>
>> Replication is working fine. The trigger file does get created by the
>> failover script when I shut down the primary. But I just keep getting
>> these entries in the log file in the slave and it does not get promoted.
>> FATAL:  could not connect to the primary server: could not connect to
>> server: Connection refused
>>
>> Any suggestions?
>>
>
> Try pg_ctl promote on the standby server to see if it works at all:
>
> http://www.postgresql.org/docs/9.3/interactive/app-pg-ctl.html
>
> If it does then maybe your standby cannot 'see' the trigger file.
>

That was indeed the case. I had copied/pasted the code in Microsoft Word
and then copied/pasted from Word to recovery.conf. Word intelligently (?)
changed the 'quote' type around the file name.
I changed log_min_messages to debug2 and saw that the trigger file name was
not coming up right.

Thanks.
Regards,
Jayadevan


[GENERAL] json functions

2013-06-18 Thread Jayadevan M
Hi,

I have PostgreSQL 9.2.1. I can see a few json functions under pg_catalog, 
json_send, for example. But I can't find any documentation. Am I missing 
something?

Regards,
Jayadevan





Re: [GENERAL] json functions

2013-06-19 Thread Jayadevan M
Hi,
>json_send, like all send functions, is internal.  Not all functions
>available in the catalog are exposed through SQL -- for example they may be
>used to serialize data for transmission over the wire.  If you can handle C
>you can hunt down the location of the function to see what it does (which
>isn't much, since json is an already serialized format).
OK. That explains it. I was wondering why we have quite a few functions under 
pg_catalog and just a few in the documentation. Thank you.
Regards,
Jayadevan




[GENERAL] query on query

2013-07-04 Thread Jayadevan M
Hi,
I have a table like this -
postgres=# \d m
   Table "public.m"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer |
 marks  | integer |

postgres=# select * from m;
 id | marks
----+-------
  1 |    27
  2 |    37
  3 |    17
  4 |    27
  5 |    18
(5 rows)

I wanted to get cumulative counts of students in different ranges - >90,
>80, >70 etc.
>10  5
>20  3
>30  1
>40  0
>50  0

So each student may get counted many times, someone with 99 will be counted 10 
times. Possible to do this with a fat query? The table will have many thousands 
of records.


Regards,
Jayadevan





Re: [GENERAL] query on query

2013-07-05 Thread Jayadevan M
>
>>
>> So each student may get counted many times, someone with 99 will be
>> counted
>> 10 times. Possible to do this with a fat query? The table will have
>> many thousands of records.
>>
>
>
>Not sure I got the point, but I guess this is a good candidate for a CTE:
>
>WITH RECURSIVE t(n) AS (
>    VALUES (10)
>  UNION ALL
>    SELECT n+10 FROM t WHERE n < 50
>)
>select count(*), t.n from m, t where marks > t.n group by t.n;

I meant 'fast', not 'fat'. Sorry for the typo.

You got it right. The query gets the data exactly as I wanted it. Thanks a lot.
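For the record, an equivalent without recursion, using generate_series
against the same table m:

select t.n, count(*)
from m, generate_series(10, 50, 10) as t(n)
where m.marks > t.n
group by t.n
order by t.n;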

Regards,
Jayadevan




[GENERAL] pg recovery

2013-07-11 Thread Jayadevan M
Hi,

I have postgresql streaming replication set up. I forgot to add an entry for 
trigger_file in recovery.conf. So I added that entry and did a pg_ctl reload. 
Is there a way to confirm that the entry has been read by the server? Any 
view/function?

Regards,
Jayadevan





Re: [GENERAL] pg recovery

2013-07-11 Thread Jayadevan M
Hi,
> wrote:
>> I have postgresql streaming replication set up. I forgot to add an
>> entry for trigger_file in recovery.conf. So I added that entry and did
>> a pg_ctl reload.
>Recovery parameters are not GUC parameters, so doing a parameter reload
>has no effect. Also, such parameters cannot be changed once recovery has
>begun.
Recovery has not begun. Without the change, the recovery will not start. So, do 
I have to do a reload, or restart, for the parameter to take effect? The 
situation is  - master is up and running, slave is up and running. I made a 
change to recovery.conf. How can I make slave 'accept' that change?
Regards,
Jayadevan





Re: [GENERAL] pg recovery

2013-07-11 Thread Jayadevan M
>> Also, such parameters cannot be changed once recovery has begun.
>> Recovery has not begun. Without the change, the recovery will not start. So,
>do I have to do a reload, or restart, for the parameter to take effect? The
>situation is  - master is up and running, slave is up and running. I made a
>change to recovery.conf. How can I make slave 'accept' that change?
>I think that you are mixing things here. recovery.conf is read once at server
>start up and its presence puts the server in archive recovery.
>Check the output of "SELECT pg_is_in_recovery()" and you will see that its
>output is true, meaning that the node is in recovery. On the other hand,
>trigger_file is used for a node promotion, making the node exit the archive
>recovery mode and turn it into a master node. So to make the new parameter
>have effect restart the server.
>--
As you said, I was mixing things there. When you said recovery has begun, I 
confused it with promotion. Apologies.
I tried restarting the standby node and it worked.
Thanks a lot,
Jayadevan




Re: [GENERAL] Migration from Symfoware to PostgreSQL-Constructor functions

2013-08-04 Thread Jayadevan M
Hi,
System tables, views and most of the metadata can be retrieved from these
views -
http://www.postgresql.org/docs/9.2/static/information-schema.html
Regards,
Jayadevan


On Sun, Aug 4, 2013 at 2:48 PM, sachin kotwal  wrote:

> Hello
>
> While migrating small application from Symfoware to PostgreSQL.
> There are some constructor functions in Symfoware.
> Which I unable to execute in Symfoware database.
>
> If anyone knows how to execute constructor functions in Symfoware.
> how to see list of system tables, functions, view and user defined tables,
> functions, views.
> Please reply.
>
>
>
> -
> Thanks and Regards,
>
> Sachin Kotwal
> NTT-DATA-OSS Center (Pune)


Re: [GENERAL] Replication Postgre > Oracle

2013-08-07 Thread Jayadevan M
Hi,
For Oracle to PostgreSQL, you could have a look at GoldenGate; note that it
does not support PostgreSQL as the source database.
Regards,
Jayadevan


On Wed, Aug 7, 2013 at 1:54 PM, BOUVARD Aurélien <
aurelien.bouv...@supinfo.com> wrote:

>  Hi all,
>
> My company would like to configure replication between PostgreSQL 9.2.4
> and Oracle Database (11g and maybe 12c soon). We saw that Postgres Plus
> Database Replication could be a good solution for us.
>
> We also thought of developing a solution based on triggers and/or WAL, but
> we don't know if it's possible in our case... we have a huge amount of data
> (100 terabits) and it will increase soon (data warehouse context)
>
> So it would be very interesting for us to have some feedback about Postgres
> Plus or other concepts/solutions.
>
> Regards,
>


Re: [GENERAL] Trigger to update table info

2013-08-12 Thread Jayadevan M
Hi,
In case you are not keen on getting the latest and fully accurate
counts/sizes, you could just refer to the readily available views -
http://stackoverflow.com/questions/2596670/how-do-you-find-the-row-count-for-all-your-tables-in-postgres
You won't get the updatetime, though.
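A sketch of pulling the planner's approximate numbers directly (kept in
pg_class and refreshed by VACUUM/ANALYZE; the schema name is an assumption):

SELECT c.relname, c.reltuples::bigint AS approx_rows,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relkind = 'r' AND n.nspname = 'public';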
Regards,
Jayadevan


On Mon, Aug 12, 2013 at 12:46 PM, Arvind Singh  wrote:

> Hello friends,
>
> I have a table with the following structure
>
>
> Create table postablestatus
>
> (
>
> tablename varchar(30) NOT NULL,
>
> updatetime timestamp,
>
> reccount int,
>
> size int,
>
>CONSTRAINT postablestatus_pkey PRIMARY KEY(tablename)
>
> ) WITH (oids = false);
>
>
> where
> a) tablename - is the name of table where a record is inserted,updated or
> deleted
> b) updatetime - the time when table was updated (w/o timezone)
> c) reccount - records in table
> d) size - bytesize of the table
>
> --
> for example, if a table called item_master had 1 record and a
>  new record is inserted and it's size is 2000 bytes
>
> > postablestatus must have a entry like
>
> item_master08/12/2013 12:40:00  2 2000
> --
>
> I request to know how to write a trigger in PL/pgSQL so that
> postablestatus
> is updated whenever a record is inserted, updated or deleted in item_master
>
> PLEASE NOTE : a new record must be created if the item_master is not
> present in postablestatus
>
> thanks in advance
> arvind
>
>
>


Re: [GENERAL] Debugging Postgres?

2013-08-14 Thread Jayadevan M
Hi,
Having a look at the PostgreSQL log may help.
http://stackoverflow.com/questions/71/how-to-log-postgres-sql-queries
You could also try logging in via psql on the database host itself, to
ensure it is not a network issue.
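A sketch of postgresql.conf settings that would make such stalls visible in
the log (the values are just examples):

log_connections = on
log_disconnections = on
log_min_duration_statement = 1000   # log any statement taking over 1 second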
Regards,
Jayadevan


On Wed, Aug 14, 2013 at 2:21 AM, Barth Weishoff  wrote:

> Hello
>
>I'm having an interesting issue with PGSQL.   It seems that I'm
> experiencing timeouts at various times.   The servers are not busy and have
> plenty of resources.  The databases are ~50GB in size, the systems
> typically have 8-12GB physical RAM, and the connections are low (less than
> 15 at any given time).
>
> The issue I'm seeing is that randomly I'm getting these pauses, or stalls,
> while trying to simply connect to the database server(s) from connected
> clients using the psql command line client.  I cannot tell if the server is
> even getting the request for service as they don't seem to show up in the
> logs at the time the event is occurring, so I'm thinking it's maybe a
> client-side issue.
>
> Is there a good general starting place for debugging these types of issues
> ?
>
>
> -B.
>


[GENERAL] PostgreSQL 9.3

2013-08-18 Thread Jayadevan M
Hello all,
Is the production release date for PostgreSQL 9.3 decided? We are going
live in a couple of weeks with a portal and if possible, would like to go
with 9.3, Materialized Views being the key feature that will add value.
Regards,
Jayadevan


Re: [GENERAL] PostgreSQL 9.3

2013-08-18 Thread Jayadevan M
Hi,
Thanks. So we are close to the tentative release date. Good.
Regards,
Jayadevan


On Mon, Aug 19, 2013 at 10:16 AM, Sandro CAZZANIGA <
cazzaniga.san...@gmail.com> wrote:

> Le 19/08/2013 06:38, Jayadevan M a écrit :
> > Hello all,
> > Is the release date for PostgreSQL 9.3 production decided? We are going
> > live in a couple of weeks with a portal and if possible, would like to
> > go with 9.3, Materialized Views being the key feature that will add
> value.
> > Regards,
> > Jayadevan
>
> http://www.postgresql.org/developer/roadmap/
>
> --
> Sandro Cazzaniga
>
> Site web: http://sandrocazzaniga.fr
> Jabber:   kha...@jabber.fr
> Twitter:  @Kharec
> GitHub:   http://github.com/Kharec
> IRC:  Kharec (Freenode, OFTC)
>
>


[GENERAL] Inheritance and foreign keys

2017-05-25 Thread Jayadevan M
Hi,

I designed three tables so that one table inherits another, and a third
table references the parent table. If a record is inserted into the third
table and the value exists in the parent table only indirectly, because it
is present in the inherited (child) table, I still get an error.
Is some option available while creating the foreign key so that it will
consider the data in the child tables also while doing a constraint
validation?

create table myt(id serial primary key);
create table mytc (like myt);
alter table mytc inherit myt;
insert into myt values(1);
insert into mytc values(2);
 select * from myt;
 id
----
  1
  2
(2 rows)

create table a (id integer references myt(id));
insert into a values(2);
ERROR:  insert or update on table "a" violates foreign key constraint
"a_id_fkey"
DETAIL:  Key (id)=(2) is not present in table "myt".


Regards,
Jayadevan


Re: [GENERAL] Inheritance and foreign keys

2017-05-25 Thread Jayadevan M
On Thu, May 25, 2017 at 6:00 PM, Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:

> The way I do it is the following :
> - ensure a common sequence for the ID for all tables in the inheritance
> tree (usually one parent and one or more children)
> - enforce normal FK constraints for all FK relations within the same
> "realm"/"tenant"/"schema" etc, i.e. where it makes sense
> - for enforcing FK constraints between tables in different "realms", you
> should implement this as a pair of CONSTRAINT triggers which implement the
> two sides of the FK dependency. For the referencing tables you'd want to
> check upon INSERT or UPDATE, with smth like :
>
> CREATE OR REPLACE FUNCTION 
> public.accounting_docs_cases_fk_to_public_accounting_docs()
> RETURNS TRIGGER
> LANGUAGE plpgsql
> AS $$
> DECLARE
> tmp INTEGER;
> BEGIN
> IF (TG_OP = 'DELETE') THEN
>   RAISE EXCEPTION 'TRIGGER : % called on unsuported op :
> %',TG_NAME, TG_OP;
> END IF;
> SELECT ad.id INTO tmp FROM public.accounting_docs ad WHERE ad.id
> =NEW.acct_doc_id;
> IF NOT FOUND THEN
>   RAISE EXCEPTION '%''d % (id=%) with NEW.acct_doc_id (%) does not
> match any accounting_docs ',TG_OP, TG_TABLE_NAME, NEW.id, NEW.acct_doc_id
> USING ERRCODE = 'foreign_key_violation';
> END IF;
> RETURN NEW;
> END
> $$
> ;
>
> -- here public.accounting_docs is a top level INHERITANCE table. Has
> bcompanyFOO.accounting_docs and bcompanyBAR.accounting_docs as inherited
> tables
>
> CREATE CONSTRAINT TRIGGER 
> accounting_docs_cases_fk_to_public_accounting_docs_tg
> AFTER INSERT OR UPDATE
> ON public.accounting_docs_cases FROM public.accounting_docs DEFERRABLE FOR
> EACH ROW EXECUTE PROCEDURE public.accounting_docs_cases_f
> k_to_public_accounting_docs();
>
> For the referenced tables you'd want to check upon UPDATE or DELETE with
> smth like :
>
> CREATE OR REPLACE FUNCTION 
> public.accounting_docs_fk_from_accounting_docs_cases()
> RETURNS TRIGGER
> LANGUAGE plpgsql
> AS $$
> DECLARE
> tmp INTEGER;
> BEGIN
> IF (TG_OP = 'INSERT') THEN
>   RAISE EXCEPTION 'TRIGGER : % called on unsuported op :
> %',TG_NAME, TG_OP;
> END IF;
> IF (TG_OP = 'DELETE' OR OLD.id <> NEW.id) THEN
>   SELECT adc.id INTO tmp FROM accounting_docs_cases adc WHERE
> adc.acct_doc_id=OLD.id;
>   IF FOUND THEN
> RAISE EXCEPTION '%''d % (OLD id=%) matches existing
> accounting_docs_cases with id=%',TG_OP, TG_TABLE_NAME, OLD.id,tmp USING
> ERRCODE = 'foreign_key_violation';
>   END IF;
> END IF;
> IF (TG_OP = 'UPDATE') THEN
> RETURN NEW;
> ELSE
> RETURN OLD;
> END IF;
> END
> $$
> ;
>
> CREATE CONSTRAINT TRIGGER accounting_docs_fk_from_accounting_docs_cases
> AFTER DELETE OR UPDATE
> ON public.accounting_docs FROM accounting_docs_cases DEFERRABLE FOR EACH
> ROW EXECUTE PROCEDURE accounting_docs_fk_from_accounting_docs_cases();
>
> CREATE CONSTRAINT TRIGGER accounting_docs_fk_from_accounting_docs_cases
> AFTER DELETE OR UPDATE
> ON bcompanyFOO.accounting_docs FROM accounting_docs_cases DEFERRABLE FOR
> EACH ROW EXECUTE PROCEDURE accounting_docs_fk_from_accoun
> ting_docs_cases();
>
> CREATE CONSTRAINT TRIGGER accounting_docs_fk_from_accounting_docs_cases
> AFTER DELETE OR UPDATE
> ON bcompanyBAR.accounting_docs FROM accounting_docs_cases DEFERRABLE FOR
> EACH ROW EXECUTE PROCEDURE accounting_docs_fk_from_accoun
> ting_docs_cases();
>
>
> Note that still this is not a proper case of a FK constraint, since this
> requires a true common unique index across all tables of the inheritance
> tree, which is not possible as of today.
>
>
Thank you. This should work for me.


Re: [GENERAL] [pgeu-general] Call for design: PostgreSQL mugs

2013-09-08 Thread Jayadevan M
Hello all,
"Elephants Never Forget" sounds like a good idea. It refers to reliability
of the database (from a transaction perspective) and ability to recover
"lost data".
http://www.scientificamerican.com/article.cfm?id=elephants-never-forget
Regards,
Jayadevan


On Sun, Sep 8, 2013 at 9:20 PM, Erik Rijkers  wrote:

> On Sun, September 8, 2013 15:27, Andreas 'ads' Scherbaum wrote:
> >
> > So, we discussed a number ideas here on the lists, mainly a new text for
> > the mugs. But it does not look like we have a winner.
> >
> > There was also this cheap shot at MySQL, but I think we all agree that
> > we don't need this kind of design.
> >
>
> Clearly we did *not* all agree.. :-)
>
> But here is another old one that I always liked, and would like to see
> processed onto a mug.  It would need some work, but
> the idea seems excellent:
> a large, impressive elephant head, frontal, with the text: "Never Forgets"
> or "PostgreSQL Never Forgets"
>
>
> http://www.gsbrown.org/compuserve/all-new-electronic-mail-1982-04/elephant-ad-800.jpg
>
> ( Greg S. sent me an .xcf file (2MB) which I could forward if anyone is
> interested to improving it... )
>
>
>
>
>
>
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


[GENERAL] Moving from 9.2 to 9.3

2013-09-18 Thread Jayadevan M
Hello,
When I upgraded from 9.2 to 9.3, I copied the postgresql.conf and
pg_hba.conf files form 9.2 installation and used those. Is that likely to
cause any issues? I did not see any significant parameter changes from 9.2
to 9.3. So far, it is running fine.
Regards,
Jayadevan


[GENERAL] Query - CPU issue

2013-09-18 Thread Jayadevan M
Hi,
I have this query

explain analyze
select  distinct geoip_city(src_ip) , src_ip
from alert where timestamp>=1378512000 and timestamp < 1378598400

The explain takes forever, and CPU goes upto 100%. So I end up killing the
query/explain.

This one, without the function call, comes back in under a second -

explain analyze
select  distinct
 src_ip
from alert where timestamp>=1378512000 and timestamp < 1378598400
"HashAggregate  (cost=493.94..494.40 rows=46 width=8) (actual
time=38.669..38.684 rows=11 loops=1)"
"  ->  Index Scan using idx_alert_ts on alert  (cost=0.29..468.53
rows=10162 width=8) (actual time=0.033..20.436 rows=10515 loops=1)"
"Index Cond: (("timestamp" >= 1378512000) AND ("timestamp" <
1378598400))"
"Total runtime: 38.740 ms"

The function doesn't do much, code given below -
CREATE OR REPLACE FUNCTION geoip_city(IN p_ip bigint, OUT loc_desc
character varying)
  RETURNS character varying AS
$BODY$
SELECT l.id || l.country ||l.region || l.city  FROM blocks b JOIN locations
l ON (b.location_id = l.id)
 WHERE $1 >= start_ip and $1 <= end_ip limit 1 ;
$BODY$
  LANGUAGE sql IMMUTABLE
  COST 100;

There are indexes on the start_ip and end_ip and an explain tells me the
indexes are being used (if I execute the SELECT in the function using a
valid value for the ip value.

Regards,
Jayadevan


Re: [GENERAL] Using oracle stored procedures for postgresql database

2013-09-20 Thread Jayadevan M
Hi,

Have a look at orafce for the plugins. Try ora2pg for initial migration.
There are differences in how you execute procedures (syntax), packages are
missing in PostgreSQL and so on. Please have a look at
http://wiki.postgresql.org/wiki/Converting_from_other_Databases_to_PostgreSQL#Oracle

Regards,
Jayadevan


On Fri, Sep 20, 2013 at 3:19 PM, nikhilruikar wrote:

> Hi
>   I am planning to migrate oracle database to postgres. I need to know
> if there are any plugins or tool with which I can use stored procedures
> written in oracle schema for postgres with out changing them.
>
> Thanks in advance
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Using-oracle-stored-procedures-for-postgresql-database-tp5771714.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>


Re: [GENERAL] Postgres 9.3 Installation Problems Windows 7 64 Bit Pro

2013-09-20 Thread Jayadevan M
It should be possible to install the SW without initializing the cluster.
That way you may be in a better position to troubleshoot the issue.
Complete the installation, then use initdb to initialize a cluster. Have a
look at this url too -
http://wiki.postgresql.org/wiki/Running_%26_Installing_PostgreSQL_On_Native_Windows#Common_installation_errors


On Fri, Sep 20, 2013 at 5:55 PM, Tom Fraser  wrote:

> Hi,
>
> We are having a problem installing Postgres 9.3 on Windows 7 pro.
>
> The installation freezes and you can leave it running overnight without
> any error messages.
>
> The Bitrock log's last entries are:
>
> [14:45:43] Removing the existing ldconfig setting - set during the
> previous installation.
> [14:45:43] Running the post-installation/upgrade actions:
> [14:45:43] Write the base directory to the ini file...
> [14:45:43] Write the version number to the ini file...
>
> The program (User interface) says "Database cluster initalisation" or
> something like that
>
> We've tried uninstalling and reinstalling, as well as installing to a
> different drive. Checked permissions on the data directory..
>
> Does anyone have any suggestions? Thanks
>
>
>


Re: [GENERAL] Query runs forever after upgrading to 9.3

2013-09-23 Thread Jayadevan M
Could you please post EXPLAIN for that query?
How 'fat' are the temporary tables - just a couple of columns or really
wide?



On Mon, Sep 23, 2013 at 7:08 PM, Andrus  wrote:

> Query
>
> SELECT * FROM toode
> WHERE toode in (SELECT toode FROM tempkaive)
> OR toode in (SELECT toode FROM tempalgsemu)
>
> stops working after upgrading to 9.3 RTM in Windows from earlier version.
>
> Task Manager shows that postgres.exe process has constantly 13% CPU usage
> (this is 8 core computer) and private working set memory is 16 MB
>
> PgAdmin shows that this query is running .
>
> toode field type is char(20) and it is toode table primary key.
>
> tempkaive and tempalgsemu are temporary tables created eralier this
> transaction. They do not have indexes.
> toode is real table which has 509873 records .
> Probably tempkaive temp table size is bigger that toode table and
> templalgemu temp table size is smaller than in toode.
>
> How to fix this or find the reason ?
> How to rewrite the query so that it works ?
>
> analyze command was executed but problem persists.
> I tested it running Postgres 9.3 RTM in 32 bin Windows 7 and  64 bit
> Windows 2008 R2 servers.
> In both cases same problem occurs.
> Only single user is using database and only this query is running.
>
>
> Locks window shows:
>
> 7840toy53749admin7/13375AccessShareLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
> 7840toy53652admin7/13375AccessShareLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
> 7840toy54605admin7/13375AccessShareLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
> 7840toy54608admin7/13375AccessShareLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
> 7840toy49799admin7/13375AccessShareLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
> 7840admin7/133757/13375ExclusiveLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
> 7840toy53750admin7/13375AccessShareLockYes
> 2013-09-23 15:57:08+03SELECT * FROM toode WHERE toode in (SELECT toode
> FROM tempkaive)OR toode in (SELECT toode FROM tempalgsemu)
>
> Andrus.
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/**mailpref/pgsql-general
>


[GENERAL] logging statements in PostgreSQL

2013-09-24 Thread Jayadevan M
Hi all,
I am planning to use pgbadger for analyzing queries. Documentation for
pgbadger says "Do not enable log_statement as their log format will not be
parsed by pgBadger. "
I have
log_min_duration_statement = 0
I do see quite a few SELECTs, INSERTS etc in the log files, function calls
too, for example.
 select sp_pg_refresh_alert_data(current_date);
 But the SELECT and INSERT in the function are not getting logged. Is that
how PostgreSQL logging works, or I have missed something?
Regards,
Jayadevan


Re: [GENERAL] logging statements in PostgreSQL

2013-09-25 Thread Jayadevan M
Thanks for the pointer. I do not really want to log the plans of queries,
just the queries, execution time and a couple of other details
(database,user). If I use the auto-explain module, it will end up printing
the plan for all query execution in the log files?


On Wed, Sep 25, 2013 at 1:43 PM, Amit Langote wrote:

> On Wed, Sep 25, 2013 at 12:18 PM, Jayadevan M
>  wrote:
> > Hi all,
> > I am planning to use pgbadger for analyzing queries. Documentation for
> > pgbadger says "Do not enable log_statement as their log format will not
> be
> > parsed by pgBadger. "
> > I have
> > log_min_duration_statement = 0
> > I do see quite a few SELECTs, INSERTS etc in the log files, function
> calls
> > too, for example.
> >  select sp_pg_refresh_alert_data(current_date);
> >  But the SELECT and INSERT in the function are not getting logged. Is
> that
> > how PostgreSQL logging works, or I have missed something?
>
> Have a look at auto_explain module which has configuration parameter:
>
> "auto_explain.log_nested_statements (boolean)"
> auto_explain.log_nested_statements causes nested statements
> (statements executed inside a function) to be considered for logging.
> When it is off, only top-level query plans are logged.
>
> http://www.postgresql.org/docs/devel/static/auto-explain.html
>
> --
> Amit Langote
>


Re: [GENERAL] logging statements in PostgreSQL

2013-10-01 Thread Jayadevan M
Hi,
I was looking for options to make sure SQLs executed as part of functions
also get logged. Since this is a production system, I wanted to do it
without the EXPLAIN also written to the logs. May be that is not possible?
Regards,
Jayadevan


On Mon, Sep 30, 2013 at 5:08 PM, Albe Laurenz wrote:

> Jayadevan M wrote:
> > Thanks for the pointer. I do not really want to log the plans of
> queries, just the queries, execution
> > time and a couple of other details (database,user). If I use the
> auto-explain module, it will end up
> > printing the plan for all query execution in the log files?
>
> You can configure it so that only statements exceeding a certain
> duration will be logged.
>
> Yours,
> Laurenz Albe
>


[GENERAL] postgresql.conf error

2013-10-18 Thread Jayadevan M
Hi,

Which is the quickest way to troubleshot the message "

LOG:  configuration file "/postgresql.conf" contains errors;
unaffected changes were applied"" ?

I made a couple of changes a few days ago, and did not reload Today I made
some more changes and did a pg_ctl reload.

Is there an option to test the configuration file for errors, after making
changes?

Regards,
Jayadevan


Re: [GENERAL] postgresql.conf error

2013-10-18 Thread Jayadevan M
Thanks. This is what I have. May be it is not really an error?

2013-10-18 12:23:54.996 IST,,,8855,,523c23ea.2297,20,,2013-09-20 16:01:06
IST,,0,LOG,0,"received SIGHUP, reloading configuration files",""
2013-10-18 12:23:54.996 IST,,,8855,,523c23ea.2297,21,,2013-09-20 16:01:06
IST,,0,LOG,55P02,"parameter ""superuser_reserved_connections"" cannot be
changed without restarting the server",""
2013-10-18 12:23:54.997 IST,,,8855,,523c23ea.2297,22,,2013-09-20 16:01:06
IST,,0,LOG,F,"configuration file
""/pgdata/prod/data_93/postgresql.conf"" contains errors; unaffected
changes were applied",""



On Fri, Oct 18, 2013 at 1:12 PM, Tom Lane  wrote:

> Jayadevan M  writes:
> > Which is the quickest way to troubleshot the message "
> > LOG:  configuration file "/postgresql.conf" contains errors;
> > unaffected changes were applied"" ?
>
> There should be log message(s) before that one complaining about the
> specific problems.
>
> regards, tom lane
>


[GENERAL] PostgreSQL Point In Time Recovery

2013-10-23 Thread Jayadevan M
Hi,
I went through
http://www.postgresql.org/docs/9.3/static/continuous-archiving.html
and set up the archiving process. With this approach, if my database
crashes after a couple of weeks after the base backup is taken, recovering
would mean replaying the WAL logs for about 2 weeks, right? To avoid that,
what is the standard process followed - take a base backup every day or
once a week?
Regards,
Jayadevan


[GENERAL] search_path and current_schema

2013-10-27 Thread Jayadevan M
Hi,

I was trying out schema settings and related functions. PostgreSQL version
is 9.3

[postgres@MyCentOS 9.3]$ psql
psql (9.3.0)
Type "help" for help.

postgres=# show search_path;
  search_path

 "$user",public
(1 row)

postgres=# select current_schemas(true);
   current_schemas
-
 {pg_catalog,public}
(1 row)

postgres=# set search_path=mynewschema, "$user", public;
SET
postgres=# select current_schemas(true);
   current_schemas
-
 {pg_catalog,public}
(1 row)

postgres=# show search_path;
 search_path
--
 mynewschema, "$user", public
(1 row)

I thought current_schemas and search_path will return the same set of
schemas (except that current_schema will show pg_catalog also, if we use
true). Shouldn't mynewschema appear in the output of select
current_schemas(true)?

Regards,
Jayadevan


Re: [GENERAL] search_path and current_schema

2013-10-27 Thread Jayadevan M
OK. When I logged in as a user who had access to the schema, the output
from current_schemas and search_path were matching.
Thanks.


On Sun, Oct 27, 2013 at 9:55 PM, Tom Lane  wrote:

> Jayadevan M  writes:
> > I thought current_schemas and search_path will return the same set of
> > schemas (except that current_schema will show pg_catalog also, if we use
> > true). Shouldn't mynewschema appear in the output of select
> > current_schemas(true)?
>
> Only if it actually exists (and you have usage privilege on it).  See
> the description of the search_path variable: nonexistent entries are
> silently ignored.
>
> Possibly this behavior should be documented under current_schemas()
> as well as under the GUC variable.
>
> regards, tom lane
>


[GENERAL] Documents/ppts/pdfs

2013-11-08 Thread Jayadevan M
Hi,

When I search for material on PostgreSQL, once in a while I see reasonably
old, but very good content in file such as
http://www.postgresql.org/files/developer/transactions.pdf
How can I browse to a top level folder (
http://www.postgresql.org/files/developer) and see what else is available ?
I can't find any links in postgresql.org or documentation pointing to files
like this.

Regards,
Jayadevan


Re: [GENERAL] Documents/ppts/pdfs

2013-11-09 Thread Jayadevan M
> These presentations are available in
> http://www.postgresql.org/developer/coding/
>
> Presentations
>
>- Transaction Processing in 
> PostgreSQL<http://www.postgresql.org/files/developer/transactions.pdf> (From
>OSDN Oct 2000)
>- A Tour of PostgreSQL 
> Internals<http://www.postgresql.org/files/developer/tour.pdf> (From
>OSDN Oct 2000)
>- History of PostgreSQL Open-Source 
> Development<http://www.postgresql.org/files/developer/history.pdf> (From
>OSDN Oct 2000)
>- Database Internals 
> Presentation<http://www.postgresql.org/files/developer/internalpics.pdf> (From
>2001)
>- PostgreSQL Concurrency 
> Issues<http://www.postgresql.org/files/developer/concurrency.pdf> (From
>OSCON 2002)
>- Recent PostgreSQL Optimizer Improvements (in 
> 7.4)<http://www.postgresql.org/files/developer/optimizer.pdf> (From
>OSCON 2003)
>
>
> Thanks & Regards
> Raghu Ram
>
> Thank you.
Regards,
Jayadevan


[GENERAL] Theory question

2013-11-11 Thread Jayadevan M
Hi,
What are the real differences between the bgwriter and checkpointer
process? Both of them write data from the buffer to the data files, right?
Is it just a matter of 'when' they write?
Regards,
Jayadevan


Re: [GENERAL] Theory question

2013-11-13 Thread Jayadevan M
On Thu, Nov 14, 2013 at 7:58 AM, Jayadevan wrote:

> Jeff Janes wrote
> > No.  The checkpointer writes all data that was dirty as of a certain time
> > (the start of the checkpoint) regardless of how often it was used since
> > dirtied, and the background writer writes data that hasn't been used
> > recently, regardless of when it was first dirtied.  Neither knows or
> cares
> > whether the data being written was committed, rolled back, or still in
> > progress.
>
> Thank you. So checkpointer writes "all dirty data" while backgrounder
> writes
> "all or some dirty data" depending on some (Clocksweep?) algorithm.
> Correct?
> From this discussion
>
> http://postgresql.1045698.n5.nabble.com/Separating-bgwriter-and-checkpointer-td4808791.html
> <
> http://postgresql.1045698.n5.nabble.com/Separating-bgwriter-and-checkpointer-td4808791.html
> >
> the bgwrites has some 'other dutties'. Probably those involve marking the
> buffers - when they were last used, how frequently etc?
>
>
>
> That should have been "backgrounder writes "all or some dirty or non-dirty
data" "...


Re: [GENERAL] Tuning 9.3 for 32 GB RAM

2013-11-14 Thread Jayadevan M
Hi,

On Thu, Nov 14, 2013 at 7:14 PM, Alexander Farber <
alexander.far...@gmail.com> wrote:

> Hello,
>
> do these changes please look okay for a PostgreSQL 9.3 running on CentOS
> 6.4 server with 32 GB RAM (with Drupal 7 and few custom PHP scripts)
>
> postgresql.conf:
>
>shared_buffers = 4096MB
>work_mem = 32MB
>checkpoint_segments = 32
>log_min_duration_statement = 1
>
> sysctl.conf:
>
>kernel.shmmax=17179869184
>kernel.shmall=4194304
>
> pgbouncer.ini:
>
>listen_port = 6432
>unix_socket_dir = /tmp
>pool_mode = session
>server_reset_query = DISCARD ALL
>server_check_delay = 10
>max_client_conn = 600
>default_pool_size = 50
>
> I understand, that nobody can tell me the optimal settings - unless I
> provide full source code to everything. And if I provide "the full source
> code", nobody will look at it anyway.
>
> So I am just asking, if the settings look okay or if they will waste
> gigabytes of RAM.
>
> Thank you
> Alex
>
> You may want to look at effective_cache_size also. May be quickly go
through
http://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server and see what
http://pgfoundry.org/projects/pgtune/
says.
Jayadevan


[GENERAL] Postgres Server backend process

2013-11-15 Thread Jayadevan M
Hello,

I was going through
http://www.postgresql.org/files/developer/tour.pdf and the source at
http://doxygen.postgresql.org/postmaster_8c_source.html

The pdf shows the daemon process doing authentication and spawning a
process to handle the request from the client.
The code has these comments -
"When a request message is received, we now fork() immediately. The child
process performs authentication of the request,"
Now authentication is done by the Backend process and not by the daemon?

Regards,
Jayadevan


[GENERAL] log_line_prefix

2013-11-15 Thread Jayadevan M
Hi,
I was trying different options of log_line_prefix. I am making chnages,
doing a pg_ctl reload and checking the output in the log files. For some
reason ,the changes seem to have no impact. What am I doing wrong? Here is
the output form the log files where you can see the change being accepted
(reload) and then the output of a select now().

2013-11-16 08:24:50.657 IST,,,3186,,5286c0d6.c72,5,,2013-11-16 06:18:22
IST,,0,LOG,0,"received SIGHUP, reloading configuration files",""
2013-11-16 08:24:50.657 IST,,,3186,,5286c0d6.c72,6,,2013-11-16 06:18:22
IST,,0,LOG,0,"parameter ""log_line_prefix"" changed to ""%t
""",""
2013-11-16 08:24:56.805
IST,"postgres","postgres",3460,"[local]",5286de7e.d84,1,"SELECT",2013-11-16
08:24:54 IST,2/0,0,LOG,0,"duration: 1.623 ms  statement: select
now();","psql"
2013-11-16 08:25:49.376 IST,,,3186,,5286c0d6.c72,7,,2013-11-16 06:18:22
IST,,0,LOG,0,"received SIGHUP, reloading configuration files",""
2013-11-16 08:25:49.378 IST,,,3186,,5286c0d6.c72,8,,2013-11-16 06:18:22
IST,,0,LOG,0,"parameter ""log_line_prefix"" changed to ""%m
""",""
2013-11-16 08:25:53.384
IST,"postgres","postgres",3464,"[local]",5286deb7.d88,1,"SELECT",2013-11-16
08:25:51 IST,2/0,0,LOG,0,"duration: 0.978 ms  statement: select
now();","psql"
2013-11-16 08:27:28.348 IST,,,3186,,5286c0d6.c72,9,,2013-11-16 06:18:22
IST,,0,LOG,0,"received SIGHUP, reloading configuration files",""
2013-11-16 08:27:28.349 IST,,,3186,,5286c0d6.c72,10,,2013-11-16 06:18:22
IST,,0,LOG,0,"parameter ""log_line_prefix"" changed to ""%m %d %u
""",""
2013-11-16 08:27:34.681
IST,"postgres","postgres",3469,"[local]",5286df1d.d8d,1,"SELECT",2013-11-16
08:27:33 IST,2/0,0,LOG,0,"duration: 0.732 ms  statement: select
now();","psql"

I think some other setting is printing all the info anyway. But which
setting?

Regards,
Jayadevan


[GENERAL] WITH and exclusive locks

2013-11-23 Thread Jayadevan M
Hi,

When conducting performance tests of our application, I saw quite a few
exclusive locks in SELECTs with WITH clause. So I created a big table and
executed a query which takes time.
with x as (select * from emp where first_name like 'Sco%')
select * from x;
I got 2 locks - one accesshare lock against the table and one ExclusiveLock
with out  any database/relation.
2417postgres10/9510/95ExclusiveLockYes
2013-11-23 17:38:58+05:30

My guess is that the Exclusive lock is against some in-memory structure
(the temp table created by the WITH clause) and hence is not an issue. Is
that correct?
Regards,
Jayadevan


[GENERAL] vacuuming - doubt

2013-12-11 Thread Jayadevan M
Hi,
Another theory question -
PostgreSQL documentation says that -
"There are two variants of VACUUM: standard VACUUM and VACUUM FULL. VACUUM
FULL can reclaim more disk space "
I created a table, inserted 1000 records and deleted them. The size after a
vacuum and a vacuum full are given -
select pg_total_relation_size('myt');;
 pg_total_relation_size

  65536
(1 row)

accounts=> vacuum   myt;
VACUUM

accounts=> select pg_total_relation_size('myt');;
 pg_total_relation_size

  16384
(1 row)

accounts=> vacuum  full myt;
VACUUM
accounts=> select pg_total_relation_size('myt');;
 pg_total_relation_size

  0
(1 row)

So what was the 65536 bytes left behind after standard vacuum?

Regards,
Jayadevan


Re: [GENERAL] design for multiple time series

2013-12-13 Thread Jayadevan M
On Fri, Dec 13, 2013 at 12:15 AM, Seb  wrote:

> Hi,
>
> I'm working on the design of a database for time series data collected
> by a variety of meteorological sensors.  Many sensors share the same
> sampling scheme, but not all.  I initially thought it would be a good
> idea to have a table identifying each parameter (variable) that the
> sensors report on:
>
> CREATE TABLE parameters (
> parameter_id serial PRIMARY KEY,
> parameter_name character_varying(200) NOT NULL,
> ...
> )
>
> and then store the data in a table referencing it:
>
> CREATE TABLE series (
> record_id serial PRIMARY KEY,
> parameter_id integer REFERENCES parameters,
> reading 
> ...
> )
>
> but of course, the data type for the parameters may vary, so it's
> impossible to assign a data type to the "reading" column.  The number of
> variables measured by the sensors is quite large and may grow or
> decrease over time, and grouping them into subjects (tables) is not
> clear, so it's not simple to just assign them to different columns.
>
> I've been trying to search for solutions in various sources, but am
> having trouble finding relevant material.  I'd appreciate any advice.
>
> Cheers,
>
> --
> Seb
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
If you are not keen on using PostgreSQL, you could have a look at
http://opentsdb.net/
That was one project we found interesting when we were faced with a similar
problem a couple of years ago. In the end, many other factors made us opt
for Cassandra. We started with PostgreSQL. But our requirements included,
among others, ability to add new devices/parameters quickly. So the
persistence layer was mostly a data sink and we planned to move
cleansed/aggregated data to PostgreSQL for analysis. Most of the master
data was also in PostgreSQL - devicies, parameters, units.


[GENERAL] Refresh Materialized View

2014-01-01 Thread Jayadevan M
Hello all,
A few questions about materialized views.
When I refresh a materialized view, is it a DELETE/INSERT behind the
scenes?
Do we need to vacuum to reclaim space?
If a query is executed against the view when the refresh is happening, will
the query see the data before the refresh started?
Does the refresh result in exclusive locking?
Regards,
Jayadevan


Re: [GENERAL] Refresh Materialized View

2014-01-01 Thread Jayadevan M
To answer my own question, I saw this thread -
http://postgresql.1045698.n5.nabble.com/Efficiency-of-materialized-views-refresh-in-9-3-td5762618.html
I think that does answer my questions.  Nothing has changed?
Regards,
Jayadevan


On Wed, Jan 1, 2014 at 6:42 PM, Jayadevan M wrote:

> Hello all,
> A few questions about materialized views.
> When I refresh a materialized view, is it a DELETE/INSERT behind the
> scenes?
> Do we need to vacuum to reclaim space?
> If a query is executed against the view when the refresh is happening,
> will the query see the data before the refresh started?
> Does the refresh result in exclusive locking?
> Regards,
> Jayadevan
>


[GENERAL] authentication failure

2014-01-03 Thread Jayadevan M
Hi,
I am trying to login from psql and consistently getting a
"psql: FATAL:  password authentication failed for user "xyz"" for all
users. I am not being prompted for a password at all. I faced a similar
issue sometime ago because there was a .pgpass file and it had wrong
entries. This time there is no .pgpass file.
Any clues? How do I trouble-shoot?
Other possibly relevant info - this is a chrooted environment. I upgraded
the OS recently in this env and faced some issues with /dev/null /proc etc
not being present and so on. Some issues there?
The database itself is running in the same server in the non-chroot
environment. I am also running a python application which uses psycopg2 and
that is working fine.
Regards,
Jayadevan


Re: [GENERAL] authentication failure

2014-01-03 Thread Jayadevan M
Nope -
psql -W
psql: FATAL:  password authentication failed for user "postgres"



On Fri, Jan 3, 2014 at 2:49 PM, Ashesh Vashi
wrote:

> Try "psql -W" for prompting the password forcefully.
>
>
> On Fri, Jan 3, 2014 at 2:46 PM, Jayadevan M 
> wrote:
>
>> Hi,
>> I am trying to login from psql and consistently getting a
>> "psql: FATAL:  password authentication failed for user "xyz"" for all
>> users. I am not being prompted for a password at all. I faced a similar
>> issue sometime ago because there was a .pgpass file and it had wrong
>> entries. This time there is no .pgpass file.
>> Any clues? How do I trouble-shoot?
>> Other possibly relevant info - this is a chrooted environment. I upgraded
>> the OS recently in this env and faced some issues with /dev/null /proc etc
>> not being present and so on. Some issues there?
>> The database itself is running in the same server in the non-chroot
>> environment. I am also running a python application which uses psycopg2 and
>> that is working fine.
>> Regards,
>> Jayadevan
>>
>>
>
>
> --
> --
>
> Thanks & Regards,
>
> Ashesh Vashi
> EnterpriseDB INDIA: Enterprise PostgreSQL Company<http://www.enterprisedb.com>
>
>
>
> *http://www.linkedin.com/in/asheshvashi*<http://www.linkedin.com/in/asheshvashi>
>


Re: [GENERAL] authentication failure

2014-01-03 Thread Jayadevan M
I am able to login from the non-chroot environment. So it is not an issue
with pg_hba.conf and not an issue of password expiration. Is there a debug
psql option?




On Fri, Jan 3, 2014 at 5:09 PM, Chris Curvey  wrote:

>
>
>
> On Fri, Jan 3, 2014 at 4:16 AM, Jayadevan M 
> wrote:
>
>> Hi,
>> I am trying to login from psql and consistently getting a
>> "psql: FATAL:  password authentication failed for user "xyz"" for all
>> users. I am not being prompted for a password at all. I faced a similar
>> issue sometime ago because there was a .pgpass file and it had wrong
>> entries. This time there is no .pgpass file.
>> Any clues? How do I trouble-shoot?
>> Other possibly relevant info - this is a chrooted environment. I upgraded
>> the OS recently in this env and faced some issues with /dev/null /proc etc
>> not being present and so on. Some issues there?
>> The database itself is running in the same server in the non-chroot
>> environment. I am also running a python application which uses psycopg2 and
>> that is working fine.
>> Regards,
>> Jayadevan
>>
>
> Could it be a problem with your pg_hba.conf?  Perhaps password
> authentication is not enabled there?
>
>
>
> --
> I asked the Internet how to train my cat, and the Internet told me to get
> a dog.
>


Re: [GENERAL] authentication failure

2014-01-03 Thread Jayadevan M
Yes. All the basic checks I have done. I upgraded from CENTOs 6.4 to 6.5.
Another interesting thing - if I su - postgres and then try, it works. So
it has something to do with the chrt user (root) settings.


On Fri, Jan 3, 2014 at 5:36 PM, dinesh kumar wrote:

>
> On Fri, Jan 3, 2014 at 5:13 PM, Jayadevan M 
> wrote:
>
>> I am able to login from the non-chroot environment. So it is not an issue
>> with pg_hba.conf and not an issue of password expiration. Is there a debug
>> psql option?
>>
>> OK.
>
> Have you checked the PGPASSWORD environment variable, from where you are
> trying to login.
>
> Regards,
> Dinesh
> manojadinesh.blogspot.com
>
>
>>
>>
>> On Fri, Jan 3, 2014 at 5:09 PM, Chris Curvey wrote:
>>
>>>
>>>
>>>
>>> On Fri, Jan 3, 2014 at 4:16 AM, Jayadevan M >> > wrote:
>>>
>>>> Hi,
>>>> I am trying to login from psql and consistently getting a
>>>> "psql: FATAL:  password authentication failed for user "xyz"" for all
>>>> users. I am not being prompted for a password at all. I faced a similar
>>>> issue sometime ago because there was a .pgpass file and it had wrong
>>>> entries. This time there is no .pgpass file.
>>>> Any clues? How do I trouble-shoot?
>>>> Other possibly relevant info - this is a chrooted environment. I
>>>> upgraded the OS recently in this env and faced some issues with /dev/null
>>>> /proc etc not being present and so on. Some issues there?
>>>> The database itself is running in the same server in the non-chroot
>>>> environment. I am also running a python application which uses psycopg2 and
>>>> that is working fine.
>>>> Regards,
>>>> Jayadevan
>>>>
>>>
>>> Could it be a problem with your pg_hba.conf?  Perhaps password
>>> authentication is not enabled there?
>>>
>>>
>>>
>>> --
>>> I asked the Internet how to train my cat, and the Internet told me to
>>> get a dog.
>>>
>>
>>
>


Re: [GENERAL] authentication failure

2014-01-03 Thread Jayadevan M
There is only one instance -

 ps -eaf | grep bin/postgres | grep -v grep
postgres  3203 1  0  2013 ?00:02:04 /usr/pgsql-9.3/bin/postgres

The basic checks I did -
Connectivity from other machines work (so server is accessible)
No .pgpass file in the system
Able to login as postgres and application users from the same system (as
long as it is not from the chroot environment). So this is not a
show-stopper- I am able to work and application is also working fine
Even in the chroot, as I mentioned, once I do a 'su - postgres', I am able
to login.

Overall, it is just a minor inconvenience, but I would like to resolve
this. The no password supplied message comes back so fast, it is as if it
did not even attemp to connect.




On Fri, Jan 3, 2014 at 8:37 PM, Tom Lane  wrote:

> Adrian Klaver  writes:
> > On 01/03/2014 04:54 AM, Jayadevan M wrote:
> >> Yes. All the basic checks I have done. I upgraded from CENTOs 6.4 to
> >> 6.5. Another interesting thing - if I su - postgres and then try, it
> >> works. So it has something to do with the chrt user (root) settings.
>
> > It might be helpful to detail what are the 'basic' checks you did? Also
> > whenever I see unexplained behavior of this nature on Linux I suspect
> > SELinux. Is it enabled?
>
> I'm wondering if there are two Postgres instances on the machine,
> and the apparent inconsistency comes from connecting to one or the other.
>
> A different line of thought is that connections from inside and outside
> the chroot are getting different treatment in pg_hba.conf.
>
> In any case, debugging this from the client's perspective is almost
> impossible, because the server intentionally doesn't give a lot of
> detail about why authentication failed.  You need to be looking at
> the postmaster log, possibly with log_min_messages cranked up.
>
> regards, tom lane
>


Re: [GENERAL] authentication failure

2014-01-04 Thread Jayadevan M
Log entries for 3 situations - 2 successful and one failed attempt -

>From non-chroot, shell user  postgres

2014-01-05 10:11:58 IST [17008]: [2-1] user=postgres,db=postgres LOG:
connection authorized: user=postgres database=postgres
2014-01-05 10:12:03 IST [17008]: [3-1] user=postgres,db=postgres LOG:
disconnection: session time: 0:00:04.413 user=postgres database=postgres
host=::1 port=47944


>From chroot, shell user root, db user postgres
2014-01-05 10:12:18 IST [17021]: [1-1] user=[unknown],db=[unknown] LOG:
connection received: host=::1 port=47945
2014-01-05 10:12:18 IST [17022]: [1-1] user=[unknown],db=[unknown] LOG:
connection received: host=::1 port=47946
2014-01-05 10:12:18 IST [17022]: [2-1] user=postgres,db=postgres FATAL:
password authentication failed for user "postgres"
2014-01-05 10:12:18 IST [17022]: [3-1] user=postgres,db=postgres DETAIL:
Connection matched pg_hba.conf line 90: "hostall
all ::1/128 md5"


chroot, shell user postgres

2014-01-05 10:12:48 IST [17051]: [1-1] user=[unknown],db=[unknown] LOG:
connection received: host=::1 port=47948
2014-01-05 10:12:51 IST [17052]: [1-1] user=[unknown],db=[unknown] LOG:
connection received: host=::1 port=47949
2014-01-05 10:12:51 IST [17052]: [2-1] user=postgres,db=postgres LOG:
connection authorized: user=postgres database=postgres



On Sat, Jan 4, 2014 at 8:43 PM, Adrian Klaver wrote:

> On 01/03/2014 09:29 PM, Jayadevan M wrote:
>
>> There is only one instance -
>>
>>   ps -eaf | grep bin/postgres | grep -v grep
>> postgres  3203 1  0  2013 ?00:02:04
>> /usr/pgsql-9.3/bin/postgres
>>
>> The basic checks I did -
>> Connectivity from other machines work (so server is accessible)
>> No .pgpass file in the system
>> Able to login as postgres and application users from the same system (as
>> long as it is not from the chroot environment). So this is not a
>> show-stopper- I am able to work and application is also working fine
>> Even in the chroot, as I mentioned, once I do a 'su - postgres', I am
>> able to login.
>>
>
> So, if when you are in the chroot environment what user(s) fail to log in
> and are they the same user(s) as outside the chroot?
>
>
>
>> Overall, it is just a minor inconvenience, but I would like to resolve
>> this. The no password supplied message comes back so fast, it is as if
>> it did not even attemp to connect.
>>
>
> Well, per Toms suggestion you will need to look at the Postgres log file
> to see from the servers perspective.
>
> Also, in case there is third party program involved it would not hurt to
> tail the system log during your connection attempts.
>
>
>>
>>
>>
>>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] authentication failure

2014-01-05 Thread Jayadevan M
With md5
psql
psql: FATAL:  password authentication failed for user "postgres"

with trust
psql -h localhost
psql (9.3.2)
Type "help" for help.

back to md5
psql -h localhost
psql: FATAL:  password authentication failed for user "postgres"

But...

find / -name .pgpass


$ env | grep PG
PGPORT=1234
PGUSER=postgres
PGHOST=localhost



On Sun, Jan 5, 2014 at 9:03 PM, Adrian Klaver wrote:

> On 01/04/2014 08:46 PM, Jayadevan M wrote:
>
>> Log entries for 3 situations - 2 successful and one failed attempt -
>>
>>  From non-chroot, shell user  postgres
>>
>> 2014-01-05 10:11:58 IST [17008]: [2-1] user=postgres,db=postgres LOG:
>> connection authorized: user=postgres database=postgres
>> 2014-01-05 10:12:03 IST [17008]: [3-1] user=postgres,db=postgres LOG:
>> disconnection: session time: 0:00:04.413 user=postgres database=postgres
>> host=::1 port=47944
>>
>>
>>  From chroot, shell user root, db user postgres
>> 2014-01-05 10:12:18 IST [17021]: [1-1] user=[unknown],db=[unknown] LOG:
>> connection received: host=::1 port=47945
>> 2014-01-05 10:12:18 IST [17022]: [1-1] user=[unknown],db=[unknown] LOG:
>> connection received: host=::1 port=47946
>> 2014-01-05 10:12:18 IST [17022]: [2-1] user=postgres,db=postgres FATAL:
>> password authentication failed for user "postgres"
>> 2014-01-05 10:12:18 IST [17022]: [3-1] user=postgres,db=postgres
>> DETAIL:  Connection matched pg_hba.conf line 90: "host
>> all all ::1/128 md5"
>>
>>
> Alright so it (psql I assume) makes the connection but is failing when the
> password is supplied. That would point to an erroneous password being
> supplied from the root shell. This would seem to mean there is a ~/.pgpass
> file with an incorrect value or a PGPASSWORD environment variable with an
> incorrect value. There is also the possibility that PGPASSFILE was set to
> point to a file other than .pgpass. To confirm that it is a password issue
> you could temporarily change the pg_hba.conf line above to trust and
> attempt the connection to see if it succeeds.
>
>
>> chroot, shell user postgres
>>
>> 2014-01-05 10:12:48 IST [17051]: [1-1] user=[unknown],db=[unknown] LOG:
>> connection received: host=::1 port=47948
>> 2014-01-05 10:12:51 IST [17052]: [1-1] user=[unknown],db=[unknown] LOG:
>> connection received: host=::1 port=47949
>> 2014-01-05 10:12:51 IST [17052]: [2-1] user=postgres,db=postgres LOG:
>> connection authorized: user=postgres database=postgres
>>
>>
>>
>
>
> --
> Adrian Klaver
> adrian.kla...@gmail.com
>


Re: [GENERAL] authentication failure

2014-01-05 Thread Jayadevan M
I am able to login as postgres with password from the same machine. So it
is not an expiry issue (as you too concluded).  Output from strace is about
500 lines. I am pasting what I feel may be relevant. I hope this will be
useful.

execve("/usr/pgsql-9.3/bin/psql", ["psql", "-h", "localhost"], [/* 24 vars
*/]) = 0
brk(0)  = 0x1962000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7fb252e42000
access("/etc/ld.so.preload", R_OK)  = -1 ENOENT (No such file or
directory)
open("/etc/ld.so.cache", O_RDONLY)  = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=40284, ...}) = 0
mmap(NULL, 40284, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7fb252e38000
close(3)= 0
open("/usr/pgsql-9.3/lib/libpq.so.5", O_RDONLY) = 3
read(3,
"\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\240\225\0\0\0\0\0\0"...,
832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=188904, ...}) = 0
mmap(NULL, 2284256, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) =
0x7fb2529f6000
mprotect(0x7fb252a21000, 2097152, PROT_NONE) = 0

munmap(0x7fb252e41000, 4096)= 0
open("/usr/pgsql-9.3/share/locale/en_IN/LC_MESSAGES/psql-9.3.mo", O_RDONLY)
= -1 ENOENT (No such file or directory)
open("/usr/pgsql-9.3/share/locale/en/LC_MESSAGES/psql-9.3.mo", O_RDONLY) =
-1 ENOENT (No such file or directory)
geteuid()   = 0
socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1
ENOENT (No such file or directory)
close(3)= 0
socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1
ENOENT (No such file or directory)
close(3)= 0

stat("/root/.pgpass", 0x7fffc631e030)   = -1 ENOENT (No such file or
directory)
socket(PF_NETLINK, SOCK_RAW, 0) = 3
bind(3, {sa_family=AF_NETLINK, pid=0, groups=}, 12) = 0
.
socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1
ENOENT (No such file or directory)
close(3)= 0
socket(PF_FILE, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, 0) = 3
connect(3, {sa_family=AF_FILE, path="/var/run/nscd/socket"}, 110) = -1
ENOENT (No such file or directory)
close(3)= 0
open("/etc/host.conf", O_RDONLY)= 3
fstat(3, {st_mode=S_IFREG|0644, st_size=9, ...}) = 0

sendto(3, "\0\0\0T\0\3\0\0user\0postgres\0database\0p"..., 84,
MSG_NOSIGNAL, NULL, 0) = 84
poll([{fd=3, events=POLLIN|POLLERR}], 1, -1) = 1 ([{fd=3, revents=POLLIN}])
recvfrom(3, "R\0\0\0\f\0\0\0\5:\314\261\264", 16384, 0, NULL, NULL) = 13
close(3)= 0
open("/dev/tty", O_RDONLY)  = 3
open("/dev/tty", O_WRONLY|O_CREAT|O_TRUNC, 0666) = 4
ioctl(3, SNDCTL_TMR_TIMEBASE or TCGETS, 0x7fffc631e550) = -1 ENOTTY
(Inappropriate ioctl for device)
ioctl(3, SNDCTL_TMR_CONTINUE or TCSETSF, {B0 -opost -isig icanon -echo
...}) = -1 ENOTTY (Inappropriate ioctl for device)
fstat(4, {st_mode=S_IFREG|0644, st_size=0, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7fb252e41000
write(4, "Password: ", 10)  = 10
fstat(3, {st_mode=S_IFREG|0644, st_size=10, ...}) = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
0x7fb252e4
read(3, "Password: ", 4096) = 10
read(3, "", 4096)   = 0
read(3, "", 4096)   = 0
ioctl(3, SNDCTL_TMR_CONTINUE or TCSETSF, {B0 -opost -isig icanon -echo
...}) = -1 ENOTTY (Inappropriate ioctl for device)
write(4, "\n", 1)   = 1
close(3)= 0
munmap(0x7fb252e4, 4096)= 0
close(4)= 0
munmap(0x7fb252e41000, 4096)= 0
socket(PF_NETLINK, SOCK_RAW, 0) = 3

connect(3, {sa_family=AF_INET, sin_port=htons(1234),
sin_addr=inet_addr("212.71.253.117")}, 16) = 0
getsockname(3, {sa_family=AF_INET6, sin6_port=htons(49998),
inet_pton(AF_INET6, ":::212.71.253.117", &sin6_addr), sin6_flowinfo=0,
sin6_scope_id=0}, [28]) = 0
close(3)= 0
socket(PF_NETLINK, SOCK_RAW, 0) = 3
bind(3, {sa_family=AF_NETLINK, pid=0, groups=}, 12) = 0
getsockname(3, {sa_family=AF_NETLINK, pid=3632, groups=}, [12]) = 0
sendto(3, "\24\0\0\0\22\0\1\3m4\312R\0\0\0\0\0\0\0\0", 20, 0,
{sa_family=AF_NETLINK, pid=0, groups=}, 12) = 20
recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=},
msg_iov(1)=[{"0\4\0\0\20\0\2\0m4\312R0\16\0\0\0\0\4\3\1\0\0\0I\0\1\0\0\0\0\0"...,
4096}], msg_controllen=0, msg_flags=0}, 0) = 3244
close(3)= 0
socket(PF_INET6, SOCK_STREAM, IPPROTO_IP) = 3
setsockopt(3, SOL_TCP, TCP_NODELAY, [1], 4) = 0
fcntl(3, F_SETFL, O_R

[GENERAL] table design and data type choice

2014-01-07 Thread Jayadevan M
Hi,

We have a table to record the voteup/votedown by users of questions and
answers (like on stackoverflow). So there will be a large number of inserts
(voteup/down), some updates(user changes mind)and may be a few deletes. The
queries will be mostly aggregates (count(*) where question_id=  and
vote_up =1) . Is it better to have data type of Boolean, varchar or int?

I assume there isanother decision too - have 2 columns - one for up and one
for down, or have just one column which will be 1 or -1.

Regards,
Jayadevan


Re: [GENERAL] High Level Committers Wanted

2014-03-18 Thread Jayadevan M
On Sat, Mar 15, 2014 at 1:51 AM, Andy Colson  wrote:

> On 3/14/2014 6:08 AM, Antman, Jason (CMG-Atlanta) wrote:
>
>> I'm not a "high level committer", nor am I even a regular poster to this
>> list.
>>
>> not saying this post is true, but... If I'm reading between the lines
>> correctly, this could make for quite a striking headline in the tech
>> news - "NSA dumps Oracle proprietary database in favor of PostgreSQL."
>>
>>
> No no.   bobJobS is from Oracle.  They are tying to
> lure all the PG developers to a secret meeting  ...  where they will all
> disappear.  Problem solved. :-)
>
> -Andy
>
>
>
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>
I love that theory. You have been watching too many thrillers. But which
problem gets solved?


[GENERAL] Doubts on startup costs

2014-03-30 Thread Jayadevan M
Hi,

A few academic questions related PostgreSQL query planner and output -

In the output of EXPLAIN for SQL statements I have seen so far, the startup
cost for the innermost node/leaf has been 0. Are there situations where it
may be non-zero?

The startup costs for outer nodes will always be equal to or greater than
the total cost of the inner nodes? (My guess is NO, there may be cases
where the outer node can start processing before the inner node is
completely done).

Regards,
Jayadevan


Re: [GENERAL] Insert zero to auto increment serial column

2014-04-02 Thread Jayadevan M
Will a rule work?
http://www.postgresql.org/docs/9.3/static/sql-createrule.html


On Wed, Apr 2, 2014 at 5:47 PM, loc  wrote:

> I'm currently using an Informix Innovator-C database with Aubit4GL and I
> would like to migrate to PostgreSQL, it looks like the transition will not
> be too difficult, however there is one feature that I would like added to
> PostgreSQL.  Where is the best place to request a feature add?  With
> Informix inserting a 0 into a serial column will auto increment the serial
> value, MySQL also works this way.  With PostgreSQL I will need to do a lot
> of code modification to my Aubit4GL programs, since I will need to either
> insert with the key word default or omit the serial column in the insert
> statement.  A typical insert with Aubit4GL looks like this:
>
> create table table_name (link serial, col2 integer, col3 integer )
> define var_rec record like table_name.*
> let table_name.link = 0
> insert into table_name values(var_rec.*)
>
> As you can see, with Informix the serial column is set to 0 before the
> insert, with PostgreSQL I will need to list all the columns and will not be
> able to use the wildcard syntax, which supplies all the column names to the
> insert.  Setting the serial column to null to auto increment would also
> work for me.
>
>
>


[GENERAL] Getting value of bind variables

2011-08-19 Thread Jayadevan M
Hell l,
I am trying to debug a query that gives an error. The error in the 
application server log is 
" ERROR: operator does not exist: timestamp without time zone = character 
varying
  Hint: No operator matches the given name and argument type(s). You might 
need to add explicit type casts.
  Position: 1274"
How can I get the value of the $1,$2 etc in the query?
WHERE MST.CMPCOD= $1 
  AND   UPPER  (  CUSINDINF.FAMNAM  )=  UPPER  ($2) 
  AND   UPPER  (  CUSINDINF.INITLS  )=  UPPER  ($3) 
  AND   ( CUSINDINF.MEMDOB ) = ($4) 
  AND MST.CUSNUM <> $5

It looks like the error is for the condition ( CUSINDINF.MEMDOB ) = ($4) , 
memdob being a date of birth (timestamp) column. When I try the query at 
psql with some values, the data is retrieved OK. Is there some logging 
available in PostgreSQL that will tell me what values were actually used?
As of now, I am trying with logging level set to debug5 in postgresql.conf 
 for all logging options. But what is the 'correct' approach?

Regards,
Jayadevan





DISCLAIMER: 

"The information in this e-mail and any attachment is intended only for 
the person to whom it is addressed and may contain confidential and/or 
privileged material. If you have received this e-mail in error, kindly 
contact the sender and destroy all copies of the original communication. 
IBS makes no warranty, express or implied, nor guarantees the accuracy, 
adequacy or completeness of the information contained in this email or any 
attachment and is not liable for any errors, defects, omissions, viruses 
or for resultant loss or damage, if any, direct or indirect."






Re: [GENERAL] Getting value of bind variables

2011-08-22 Thread Jayadevan M
Hi,
Thank you for the reply.
> > I am trying to debug a query that gives an error. The error in the 
> > application server log is 
> > " ERROR: operator does not exist: timestamp without time zone = 
character 
> > varying
> 
> > It looks like the error is for the condition ( CUSINDINF.MEMDOB ) = 
($4) , 
> > memdob being a date of birth (timestamp) column. When I try the query 
at 
> > psql with some values, the data is retrieved OK. Is there some logging 

> > available in PostgreSQL that will tell me what values were actually 
used?
> 
> Yes, if you're using a reasonably recent version of Postgres ---
> log_statements should provide that information.  However, it's 100%
> irrelevant to this problem what the specific value is.  The problem is
> that the application is declaring the *type* of $4 as varchar rather
> than something appropriate.
> 
I guess so. But when I tried the same query on psql by replacing ($4) with 
a value like '20110404', the query works OK. The value of $4 is being 
passed from a java application. So does this mean I have to change the 
data type in java code?

By the way ,even log_statemtnts did not give me the bind variable value. 
It gave me values for bind variables for queries which got executed, not 
for this particular query. I am logging to csv file and loading it to the 
postgres_log as mentioned in the document. For this scenario, does the 
output go somewhere else? I am looking at the column query_text (select * 
from postgres_log where error_severity='ERROR' and log_time > '20110818')

and it shows
.AND   (  CUSINDINF.MEMDOB  )=  ($4)  


Regards,
Jayadevan





DISCLAIMER: 

"The information in this e-mail and any attachment is intended only for 
the person to whom it is addressed and may contain confidential and/or 
privileged material. If you have received this e-mail in error, kindly 
contact the sender and destroy all copies of the original communication. 
IBS makes no warranty, express or implied, nor guarantees the accuracy, 
adequacy or completeness of the information contained in this email or any 
attachment and is not liable for any errors, defects, omissions, viruses 
or for resultant loss or damage, if any, direct or indirect."






Re: [GENERAL] COPY FROM how long should take ?

2011-08-26 Thread Jayadevan M
> Date: 08/26/2011 05:40 PM
> Subject: [GENERAL] COPY FROM how long should take ?
> Sent by: pgsql-general-ow...@postgresql.org
> 
> Hello,
> today I try to restore on of my tables with copy from file that I made 
> before.
> The file is 2.4 GB, only integers ... took me 3 hours and 30 min and I 
> hit
> CTRL+C on i7 processor with 8 GB memory, sata 2 hard drive. I modify 
> some
> psql conf file values and increase memory, work, wal, temp, check point 
> segments to 55
> Is that normal ? Whole file is 37 mil lines. When I hit enter it was on 
> line
> as logs says 26 million. I run it twice and second time after 45 min I 
> again hit
> CTRL+C and in logs I see it was again on 26 million line. Well, the 
> line number
> is approximately. Is everything normal ?
> 
> Postgresql 9.0.4
'It was on 26 million line' - does this mean there were 26 million records 
in the table? I was migrating data from Oracle to PostgreSQL, using ora2pg 
with COPY option and in under 3 hours it easily copied a table that was 10 
GB in size. Are you seeing any errors in postgres log file? In my case, 
the number of records in postgresql table stopped increasing after some 
time and I realized data transfer was failing. I checked the log file for 
errors,corrected the data issues in the source table (Oracle) and it 
finished the load without issues.
Regards,
Jayadevan





DISCLAIMER: 

"The information in this e-mail and any attachment is intended only for 
the person to whom it is addressed and may contain confidential and/or 
privileged material. If you have received this e-mail in error, kindly 
contact the sender and destroy all copies of the original communication. 
IBS makes no warranty, express or implied, nor guarantees the accuracy, 
adequacy or completeness of the information contained in this email or any 
attachment and is not liable for any errors, defects, omissions, viruses 
or for resultant loss or damage, if any, direct or indirect."






Re: [GENERAL] Complex query question

2011-09-07 Thread Jayadevan M
Hello,
> I have a complex query question whose answer I think would help me to
> understand subselects and aggregates better. I have a table with four
> columns of interest:
> 
> id (int primary key), loc_title (varchar null), loc_value (float
> null), loc_unit (varchar null)
> 
> I want the output columns to be:
> (1) each distinct value of loc_title, sorted
> (2) an id of a record containing that loc_title
> (3) the loc_value for the record in column 2
> (4) the loc_unit for the record in column 2
> 
> I don't care as much how the records for columns 2-4 are chosen. It
> could be max(loc_value), min(id), or something else. I just need some
> sample records to test my program against.
> 
> Is this something I should be able to do with a single query with a
> subselect, or is it too much for one query? I tried a few ways and
> none of them were syntactically valid.

Will this do?
test=# select * from myt;
 id | loc_title | loc_value | loc_unit
+---+---+--
  1 | AA|80 | 10
  2 | AA|80 | 10
  3 | BB|80 | 10
  4 | AA|80 | 10
  5 | BB|80 | 10
(5 rows)

test=# select a.* from myt a where id in (select min(id) from myt group by 
loc_title) order by loc_title;
 id | loc_title | loc_value | loc_unit
+---+---+--
  1 | AA|    80 | 10
  3 | BB|80 | 10
(2 rows)

Regards,
Jayadevan





Re: [GENERAL] How to convert ByteA to Large Objects

2011-09-16 Thread Jayadevan M
> > Thank you. We are working on an Oracle to PostgreSQL migration project.
> > BLOB columns got converted to BYTEA in PostgreSQL and we ran into
> > problems. We used this to convert the data type to OID. Thank you.
> 
> you probably should detail the problems you ran into. Large objects
> normally aren't used unless you need to store over 1GB objects in the
> database, and at least speaking for myself, I'd rather not have objects
> that large in my database at all; I'd as soon use files for things that
> big.
> 
Well, we are storing scanned images of boarding passes. We are using 
Hibernate, and the insert statement it generated threw this error:
"Caused by: org.hibernate.exception.SQLGrammarException: could not insert:
at 
org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:67)
at 
org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:43)
at 
org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2202)
at 
org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2595)
at 
org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:51)
at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:248)
at 
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:232)
at 
org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:139)
at 
org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:298)
at 
org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1000)
at 
com.ibsplc.xibase.server.framework.persistence.hibernate.HibernateJTATxWrapper.commit(HibernateJTATxWrapper.java:93)
... 51 more
Caused by: org.postgresql.util.PSQLException: ERROR: column "docdta" is of 
type bytea but expression is of type oid
  Hint: You will need to rewrite or cast the expression."

If we try rewriting, that would mean code changes in a few other places. 
We changed the data type and now 'some' data has been inserted. Once we 
fix the retrieval screen, we will know whether it is getting processed 
correctly. select data from pg_largeobject where loid= gave us a couple of 
hundred records, so we assume the data has been inserted.
Switching to file storage would mean quite a few changes at the code 
level, and a lot of testing of a product that runs fine on Oracle now.
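For the record, on later PostgreSQL releases (9.4 onwards) the column 
itself can be converted in place. A sketch only - the table name 
"documents" is invented here, the column name docdta is from the error 
above:

ALTER TABLE documents
  ALTER COLUMN docdta TYPE oid
  USING lo_from_bytea(0, docdta);
-- lo_from_bytea() creates a large object holding the bytea contents and
-- returns its OID; passing 0 lets the server choose the OID.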
Regards,
Jayadevan





Re: [GENERAL] Materialized views in Oracle

2011-09-22 Thread Jayadevan M
Hello,
..
> up front, I'm not a database expert by any means and I'm nowhere close
> to an Oracle expert, or even an Oracle novice.  If Oracle was a
> swimming pool, I would have those little floaty duck things on my
> arms. 
Please go through this link
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::NO::P11_QUESTION_ID:15695764787749
Going through this and other queries/replies from Tom helps much more than 
going through Oracle documentation. Tom is far more helpful than the 
floaty ducks.
By the way, I have used MVs in Oracle for a long time and did not run into 
major issues with them. But I started working with Oracle a long time ago, 
so what comes across as 'strange' SQL to non-Oracle users is 'very normal' 
SQL for me :).
Do be careful with ON COMMIT FAST REFRESH MVs in OLTP systems. 

See:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0P11_QUESTION_ID:4541191739042

"
4) basically, your commits will tend to serialize. Whether this is an 
issue depends on how long your materialized view takes to refresh itself. 
Remember - the commits will serialize - not entire transactions, just the 
very end of them. "

"As mentioned above, materialized views typically add overhead to 
individual transactions and, if created with REFRESH ON COMMIT, will 
introduce contention. The overhead arises from the need to track the 
changes made by a transaction, these changes will either be maintained in 
the session state or in log tables. In a high end OLTP system, this 
overhead is not desirable. "
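
To make that concrete, here is a minimal Oracle sketch (table and column 
names invented) of the kind of MV the quotes are about. FAST refresh needs 
a materialized view log on the base table, and it is the maintenance of 
that log plus the refresh at COMMIT that causes the overhead and 
serialization:

CREATE MATERIALIZED VIEW LOG ON sales
  WITH ROWID, SEQUENCE (amount) INCLUDING NEW VALUES;

CREATE MATERIALIZED VIEW sales_mv
  REFRESH FAST ON COMMIT
  AS SELECT COUNT(*) AS cnt,
            COUNT(amount) AS cnt_amt, -- fast refresh wants COUNT(col) next to SUM(col)
            SUM(amount) AS total
     FROM sales;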


Regards,
Jayadevan





Re: [GENERAL] Help on PostgreSQL

2011-10-12 Thread Jayadevan M
Hello,
>  2. Is there any enterprise version available with all features?
We just completed migrating one of our products to PostgreSQL and load 
testing it. My suggestion: if your product uses stored procedures/packages 
heavily, have a look at EnterpriseDB; otherwise, try plain simple 
PostgreSQL. That is what we did. We used ora2pg for the database migration 
and orafce (http://pgfoundry.org/projects/orafce/) to minimize code 
changes. Since we did not have many procedures/packages, it worked very 
well. 
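As an illustration of what orafce buys you (on 9.1 and later it installs 
as an extension), Oracle-isms like these work unchanged:

CREATE EXTENSION orafce;
SELECT nvl(NULL::text, 'fallback');       -- Oracle-style NVL
SELECT add_months(date '2011-01-31', 1);  -- Oracle-style month arithmetic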
Regards,
Jayadevan






Re: [GENERAL] Conceptual Architecture

2011-10-12 Thread Jayadevan M
Hello,
> > Could you please point us to a simple white paper/doc which describes
> > the Conceptual Architecture of PostgreSQL?
I found these very useful.
http://www.postgresql.org/files/developer/tour.pdf
http://www.postgresql.org/files/developer/internalpics.pdf
http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf

Regards,
Jayadevan





Re: [GENERAL] PostgreSQL data loads - turn off WAL

2012-09-25 Thread Jayadevan M
Hi, 
> Basically my question is:
> Is there currently any way to avoid wal generation during data load for given
> tables and then have point in time recovery after that?
Please have a look at the UNLOGGED and TEMPORARY options here:
http://www.postgresql.org/docs/9.1/static/sql-createtable.html
They are not crash safe (unlogged tables are truncated after a crash), and 
point-in-time recovery does not cover them.
If this is something similar to a daily load in a data warehouse, you could 
consider using temporary tables for all the processing/aggregation and then 
move the data to the target tables (real, logged tables). This URL might 
also help:
http://www.postgresql.org/docs/9.1/static/populate.html
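A minimal sketch of that staging pattern (table and file names are made 
up); only the final INSERT generates WAL for the real table, so 
point-in-time recovery still covers it:

CREATE UNLOGGED TABLE staging_orders (LIKE orders INCLUDING DEFAULTS);
COPY staging_orders FROM '/data/orders.csv' WITH CSV;
-- cleanse/aggregate in the unlogged table as needed, then:
INSERT INTO orders SELECT * FROM staging_orders;
DROP TABLE staging_orders;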
Regards,
Jayadevan






[GENERAL] Replication - doubts

2012-10-16 Thread Jayadevan M
Hello all,
I went through the documentation on replication and am trying to understand 
the various options available.
Does the replication available in PostgreSQL always use WAL, irrespective of 
whether it is synchronous or asynchronous, and whether it uses archived 
logs, 'live' logs or streaming?

I understand that we can use WALs in 3 modes:

1) Use the archived WALs - this results in the slave being significantly 
behind the master. It is set up using archive_command.

2) Ship entire WAL segments as they are completed.

3) Use 'streaming', i.e. use WAL records before a segment is complete - in 
this case the slave is almost up to date.

If I set up a slave using the archived WALs and set hot_standby on in the 
slave, will I be able to read from the slave?
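
For context, the setup I am experimenting with looks roughly like this 
(9.x; the archive path is made up):

# master, postgresql.conf
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /archive/%f'

# standby, postgresql.conf
hot_standby = on

# standby, recovery.conf
standby_mode = 'on'
restore_command = 'cp /archive/%f %p'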


Regards,
Jayadevan





[GENERAL] Data sets for download

2012-10-24 Thread Jayadevan M
Hello all,
Does anyone know of reasonably sized data dumps (CSV, Excel, XML, ...) that 
can be used for learning/teaching about performance tuning? Say, a set of 
6-7 tables, maybe two of them with a few million records each? Total data 
volume would be in the few-GB range. There are tools which generate data, 
but most of them seem to generate junk data. I came across this one (pretty 
good):
http://www.ourairports.com/data/
If there were schedule and bookings tables to go with it, it would have 
been great.
There is http://www.imdb.com/interfaces also, but the data extraction 
process is not simple.
Anything similar - the typical warehouse/customer/order tables or 
emp/dept/project?
Regards,
Jayadevan





Re: [GENERAL] Data sets for download

2012-10-25 Thread Jayadevan M
>Have a look a Benerator. It can create quite reasonable test data (e.g. valid
>addresses, "real" looking names and so on).
>
>It has a bit steep learning curve, but I'm quite happy with the results
>http://databene.org/databene-benerator
>
>
>Another option might be the Dell DVD Store Loadtest:
>http://linux.dell.com/dvdstore/
>
>It can generate testdata with a specific scale and it works well with Postgres.
>
Thank you. Will try these.
Regards,
Jayadevan





[GENERAL] Oracle to PostgreSQL replication with Goldengate

2012-11-11 Thread Jayadevan M
Hello all,
GoldenGate added PostgreSQL as a target database for replication. I tried
setting it up and, not finding any tutorial, put together a how-to here:
http://jayadevanm.wordpress.com/2012/11/07/goldengate-replication-from-oracle-to-postgresql/
I think near real-time replication has quite a few use cases - such as
trying out the application on PostgreSQL before really cutting over from an
Oracle database, or running reports off PostgreSQL, and so on.
Regards,
Jayadevan


Re: [GENERAL] hstore problem with UNION?

2010-05-10 Thread Jayadevan M
> How do I get around this? I really don't care how hstores get sorted any 
> more; I would like to avoid sorting them at all as they could get big.
UNION ALL seems to work. Would that serve the purpose?
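That is, something like this (table and column names invented) - UNION ALL
just appends the result sets, so the hstore values never have to be
compared or sorted:

select h from t1
union all
select h from t2;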
Regards,
Jayadevan 





Re: [GENERAL] hstore problem with UNION?

2010-05-10 Thread Jayadevan M
When we do a UNION, the database has to get rid of duplicates and return 
distinct values. To achieve this it probably does a sort, and sorting 
requires a comparison operator for the column's data type. Just guesswork...
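A quick way to check the guess (hypothetical tables) is to compare the 
plans:

explain select h from t1 union select h from t2;
-- should show a Unique over a Sort (or a HashAggregate), which is where
-- the hstore comparison is needed
explain select h from t1 union all select h from t2;
-- should plan as a plain Append, with no sort step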
Regards,
Jayadevan




