[GENERAL] Re: [GENERAL] Puzzling table scan in a CTE

2013-11-25 Thread slapo

Sorry for the delay, but I didn't have access to the database during the 
weekend.
 
Here's the output of "explain (analyze, buffers)":
http://explain.depesz.com/s/scC
 
I'm also curious why it actually seems to touch the table, given that the plan
shows output columns which I haven't defined anywhere (see the previously posted plan).
 
Cheers. :-)
 
Peter Slapansky
 
__

From: Elliot 
To: , 
Date: 22.11.2013 19:58
Subject: Re: [GENERAL] Puzzling table scan in a CTE

CC: "Elliot"

On 2013-11-22 12:49, sl...@centrum.sk wrote:
> Thanks for the suggestion.
> I've tried it with seqscan set to off, but there's still a bitmap heap scan
> going on:
> http://explain.depesz.com/s/zIJl
>
> I have random_page_cost set to 1.5 at the moment, as the database is on a
> solid state disk.
>
> Every user has a parent, but not every parent has a child.
> The number of rows returned by the query is 17 at the moment.
> It would be even fewer for the child trees of other users, usually 0 to 3,
> and the plan remains the same in those cases.
> The table itself has only slightly below 5000 rows right now. That's not a
> lot, but it still seems too many to scan for just 17 rows.
> Could it be that the planner cannot estimate the possible match count
> because of the CTE?
Can you do "explain (analyze, buffers)"? I'm wondering if your entire table is 
in a very small number of pages, possibly all on just one page, in which case a table 
scan makes sense. The plan with seqscan off has just a tiny bit higher estimated cost, 
and it ran 1.5ms slower - that difference could be noise, but I'm thinking that all it's 
doing is an extra index page read without eliminating any table data page reads (under 
the assumption that the table is all in a single page).
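
For what it's worth, the page count is visible straight from the catalogs - a
quick check (the table name is a placeholder for yours):

    SELECT relname, relpages, reltuples, pg_relation_size(oid) AS bytes
    FROM pg_class
    WHERE relname = 'your_table';

If relpages comes back as 1 or 2, a sequential scan is essentially free and
the planner's choice makes sense.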



[GENERAL] Re: [GENERAL] Puzzling table scan in a CTE

2013-11-25 Thread slapo

I apologise for the late response.
 
I've increased "effective_cache_size" to 50% of RAM and tried again - no change.
Afterwards, I increased "cpu_tuple_cost" from 0.02 to 0.05 and tried again -
no change.
 
What is most curious to me is that I think the initial result set is very
small, so any JOINs on it should be pretty selective and the number of results
should be pretty small with each scan: 1 to 10 out of about 4000 rows. I might
be missing something, but the row estimates are accurate.
The actual number of loops should be between 1 and 3 right now, as the tree
doesn't go any deeper - a user has at most two parents right now, although that
might change later.
 
Thank you. :-)
 
Peter Slapansky
 
__

From: Kevin Grittner 
To: "sl...@centrum.sk" , "pgsql-general@postgresql.org" 
Date: 22.11.2013 20:25
Subject: Re: [GENERAL] Puzzling table scan in a CTE


"sl...@centrum.sk"  wrote:


I have a recursive CTE where a table scan occurs, even though
there doesn't seem to be a good reason for it.


Do you have effective_cache_size set to 50% to 75% of machine RAM? 
Do you have cpu_tuple_cost set to between 0.03 and 0.05?  If not, do
changes to these settings help?
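
For example, both settings can be tried per session before touching
postgresql.conf (the effective_cache_size value below assumes roughly 8GB of
machine RAM - adjust to yours):

    SET effective_cache_size = '6GB';
    SET cpu_tuple_cost = 0.05;
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT ... ;   -- your recursive CTE query here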

--
Kevin Grittner
EDB: http://www.enterprisedb.com 
The Enterprise PostgreSQL Company



[GENERAL] Weird "function does not exist" error.

2013-11-25 Thread 邓尧
Hi,

I ran into a "function does not exist" error when invoking a function
I created myself. What's weird is that the function works perfectly if I
invoke it through the command line tool "psql" but fails if I invoke it
through the C API.

The SQL to create the function is:
CREATE OR REPLACE FUNCTION "lo_select_page"(uid_ BIGINT, lottery_type_ INT,
type_ INT, time_start_ BIGINT, time_end_ BIGINT,
page_number_ BIGINT, prize_status_ INT)
RETURNS SETOF REFCURSOR AS $$
...
END $$ LANGUAGE plpgsql;

The SQL to invoke the function:
BEGIN;
SELECT lo_select_page(4163481095, 0, 0, 0, 189347760, 0, 0);
FETCH ALL IN order_list_cursor;
FETCH ALL IN page_count_cursor;
COMMIT;

The server version is 9.1; psql from both 9.1 and 9.2 works, and the C API
version is 9.2.

Any ideas?

Thanks in advance
Yao


Re: [GENERAL] Weird "function does not exist" error.

2013-11-25 Thread 邓尧
My apologies, I connected to the wrong database in my C code ...

Sorry if this e-mail bothered you.


On Mon, Nov 25, 2013 at 5:33 PM, 邓尧  wrote:

> [snip]


Re: [GENERAL] Solution for Synonyms

2013-11-25 Thread Albe Laurenz
Thomas Kellerer wrote:
> mrprice22 wrote on 22.11.2013 19:25:
>> We are in the process of moving from Oracle to PostgreSQL.  We use a stored
>> procedure to populate some reporting tables once an hour.  There are two
>> sets of these tables, set A and set B.  We use synonyms to point to the
>> “active” set of tables at any given time.
>>
>> The procedure works like this:
>>
>> 1. If set A tables are “active”, truncate set B tables, or if set B tables
>> are “active”, truncate set A tables.
>> 2. Populate set B tables.
>> 3. Set synonyms to point to set B tables.
>>
>> How might I accomplish the same thing in PostgreSQL?
> 
> You can use a view

You can also keep the tables A and B in different schemas
(probably a good idea anyway) and change search_path on the client side.
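
A minimal sketch of the search_path approach (schema and table names are
made up):

    CREATE SCHEMA set_a;
    CREATE SCHEMA set_b;
    -- identical reporting tables live in both schemas

    -- while set_a is "active", rebuild set_b:
    TRUNCATE set_b.report;
    INSERT INTO set_b.report SELECT ... ;

    -- then flip readers over:
    SET search_path TO set_b, public;

To make the flip sticky for every new connection of a role,
ALTER ROLE ... SET search_path works as well.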

Yours,
Laurenz Albe



Re: [GENERAL] Re: corruption issue after server crash - ERROR: unexpected chunk number 0

2013-11-25 Thread Shaun Thomas
> Update - I have two hot replication slaves of this db, both have the problem.
> I took one out of recovery and ran REINDEX table session_session and it
> fixed the errors about this row.  Now I'm going to run vacuum and see if
> there are other tables that complain, but I'm guessing if so I will need to
> see if there is a way to force vacuum to continue on error; worst case I
> might have to script a table-by-table vacuum, I guess.  If anyone has a
> better suggestion for determining the extent of the damage I'd appreciate it.

Oh man. I'm sorry, Mike.

One of the cardinal rules I have is to disconnect any replication following a
database crash. It's just too easy for damaged replicated rows to be propagated
unless you're on 9.3 and have checksums enabled. If you want to perform a
table-by-table check, don't vacuum the database as a whole, but the individual
tables - for example, by generating a script of per-table VACUUMs and logging
the output so you can investigate further:

COPY (
SELECT 'VACUUM ' || oid::regclass::text || ';'
  FROM pg_class
 WHERE relkind = 'r'
) to '/tmp/vac_all.sql';
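
Then run the generated script and keep a log, for instance (the database name
is a placeholder):

    psql -d yourdb -f /tmp/vac_all.sql 2>&1 | tee /tmp/vac_all.log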

Any table that doesn't vacuum successfully will need to be repaired manually.
One way to do that, if there are dupes, is by checking the ctid value after
disabling index scans:

SET enable_indexscan TO False;

SELECT ctid, * FROM [broken_table] WHERE ...;

Just construct the WHERE clause based on the error output, and you should get 
all rows if there are dupes. You'll need to figure out which row to keep, then 
delete the bad row based on the ctid. Do this as many times as it takes, then 
reindex to make sure the proper row versions are indexed.
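
For example, if the bad duplicate turns out to sit at ctid (0,3) - placeholder
values, of course - the cleanup would be:

    DELETE FROM broken_table WHERE ctid = '(0,3)';
    REINDEX TABLE broken_table;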

It's also a good idea to dump any table that came back with an error, just in 
case.

After you've done all of that, you should re-base your replicas once you've 
determined your production system is usable. In the meantime, I highly 
recommend you set up a VIP you can assign to one of your replicas if your 
production system dies again, and remove any autostart code. If your production 
system crashes, switch the VIP immediately to a replica, and invalidate your 
old production system. Data corruption is insidious when streaming replication 
is involved.

Look into tools like repmgr to handle managing your replicas as a cluster to 
make forced invalidation and re-basing easier.

Good luck!

--
Shaun Thomas
OptionsHouse | 141 W. Jackson Blvd | Suite 500 | Chicago IL, 60604
312-676-8870
stho...@optionshouse.com





Re: [GENERAL] Streaming replication slave crash

2013-11-25 Thread Mahlon E. Smith
On Tue, Sep 10, 2013, Mahlon E. Smith wrote:
> On Mon, Sep 09, 2013, Jeff Davis wrote:
> > 
> > You may have seen only partial information about that bug and the fix.
> 
> Yep, I totally glazed over the REINDEX.  Giving it a go -- thank you!

As a followup for anyone else landing on this thread, the REINDEX was
(unsurprisingly) the solution.  Been runnin' like a champ for the last
2.5 months.

--
Mahlon E. Smith  
http://www.martini.nu/contact.html




Re: [GENERAL] pg_xlog is getting bigger

2013-11-25 Thread Pankaj Mishra
Dear Kevin,

If you run a checkpoint in the database then you can delete the pg_xlog files
other than the current timestamp. We do this to our customer on regular basis
when pg_xlog exceed more then 15 gb
Regards
Pankaj


On Sat, Nov 23, 2013 at 1:48 AM, Kevin Grittner  wrote:

> Pankaj  wrote:
>
> > now go to the pg_xlog folder; you can see a pattern in the file names of the
> > logs. Just remove the files other than the current time stamp. Or for safety
> > remove logs other than the current date.
>
> No!!!  NEVER remove ANY files from the pg_xlog sub-directory!  That
> will corrupt your database.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>


[GENERAL] Slony-I installation Help !

2013-11-25 Thread Tobadao
Hi.
I have downloaded "postgresql-9.3.1-1-windows.exe" and
"edb_slony_i_pg93.exe".
I'm using Windows XP SP3.
Installation + set Slony-I path: C:\Program Files\PostgreSQL\9.3\share
But when I try to initiate a new Slony cluster in the pgAdmin interface, the
status bar says "Slony-I creation scripts not available; only joining
possible" and the ‘OK’ button in the dialog is disabled.
Please tell me how to fix this error?







[GENERAL] passing tables name into a cursor

2013-11-25 Thread John Kelly
I'm having a problem changing several cursors in a function to use a passed-in
child table name.  The database is 8.4.3 on Linux.
--Original cursor

c_runway18a cursor is

( select id, geom
, way_num as waydesignator, status_d as status
, t_width as width
, t_len as length
, coalesce(type_d,'X') as type
  from v_features
where f_type = ' || feature_type  and transaction_id = ' || id );


-- Modified cursor

SQL_18a text := 'select id '
    || ' , geom '
    || ' , way_num as waydesignator '
    || ' , status_d as status '
    || ' , t_width as width '
    || ' , t_len as length '
    || ' , coalesce(type_d,''X'') as type '
    || ' from ' || v_features_child
    -- the table is a child table, so it is passed in to the function so
    -- queries against it will use its indexes
    || ' where f_type = ' || feature_type
    || ' and transaction_id = ' || id;


 c_18a cursor is execute SQL_18a ;
 -- this works on 9.1 Windows, but when I compile it on an 8.4.3 system I
get the following error:

psql:val_ways.sql:756: ERROR:  syntax error at or near ":"
LINE 1:  execute $1
 ^
QUERY:   execute $1
CONTEXT:  SQL statement in PL/PgSQL function "val_ways" near line 26

I have tried "c_18a cursor is SQL_18a" without the execute clause, with
single quotes and ':', and get the same general error ("QUERY:  $1").
Any thoughts?


Re: [GENERAL] Slony-I installation Help !

2013-11-25 Thread Glyn Astill


> From: Tobadao 
> To: pgsql-general@postgresql.org
> Cc: 
> Sent: Monday, 25 November 2013, 16:40
> Subject: [GENERAL] Slony-I installation Help !
> 
> [snip]

You need to point pgAdmin at the SQL scripts provided by Slony-I; I think you
may find them in the "share" directory of your PostgreSQL prefix / install
location.

I'm not sure how up to date pgAdmin's Slony functionality is, though; I always
prefer to use slonik...





Re: [GENERAL] Re: corruption issue after server crash - ERROR: unexpected chunk number 0

2013-11-25 Thread Mike Broers
Thanks Shaun,

I'm planning to schedule a time to do the vacuum freeze suggested
previously.  So far the extent of the problem seems limited to the one
session table and the one session row that was being used by a heavy bot
scan at the time of the crash.  Currently I'm testing a recovery of a
production backup from today to rebase one of the replication targets that
I was using to test fixes last week.  Hopefully that validates the current
backups, and I can proceed to ask our managed services provider about the
false notification of the disk write and ways to prevent that going forward.

I'll update the list if I uncover anything interesting in the process
and/or need more advice. Thanks again for your input - it's much appreciated
as always.  Nothing like a little crash corruption to get the blood flowing!

Mike


On Mon, Nov 25, 2013 at 10:29 AM, Shaun Thomas wrote:

> [snip]


[GENERAL] Streaming replication and WAL archives

2013-11-25 Thread Tree


TLDR: We want to be able to use streaming replication, WAL archiving, and 
have the ability to restore from a backup made before a failover using the 
WAL archive.


Setup:
We use postgresql 9.1 with streaming replication between a pair of nodes. 
The nodes are identical, and can fail over and fail back between 
master/slave pairs. We have separate reliable storage where we store 
periodic base backups and WAL archives. On each node, archive_command and 
restore_command write and read from this shared location.


Currently, WAL archives are written from the current master to a shared 
location. On failover, the new master will write WAL files with the same 
name as already exist in the archive. Currently we allow these to 
overwrite the old files. (See more below)


When a failed node comes back, if it is able to resume streaming from the 
new master, it does. Otherwise it does an rsync backup from the new master 
and eventually gets back into sync by replaying archived WAL files.


WAL archiving by new master:
When the new master comes up, it wants to archive some of the WAL files it 
has. These WAL files have the same name as files that have already been 
archived by the old master, but different contents. Newer WAL files in the 
new timeline cannot be archived until the old ones are archived. For this 
reason we allow the old files to be overwritten. This works consistently 
for *failover*, and is effectively the same thing as having each node 
write archives to a separate location, and have each only restore from the 
others' archives.
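
(For comparison, the archive_command example in the documentation deliberately
refuses to overwrite:

    archive_command = 'test ! -f /mnt/server/archivedir/%f && cp %p /mnt/server/archivedir/%f'

We effectively run it without the 'test ! -f' guard so the new master can
replace the old files; the path here is illustrative.)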


Note that some of the contents of the WAL files on the old slave were 
never archived, and do need to be archived. The slave has transactions 
that were part of the open WAL file on the master that the master had not 
archived, yet, but that the slave received via streaming replication. So 
the end of the old timeline can only come from the old slave/new master.


Restore:
We also want to be able to restore from the base backups. The problem we
are encountering is that a server restored from a base backup fails to
continue to restore once it hits a WAL file that was overwritten, i.e. one
that was written by the new master after a failover.


Related discussions:
http://www.postgresql.org/message-id/CADKfymHjBa9=edv1z8qh8g9o44ia2wncrmnuowqj90djl+y...@mail.gmail.com
http://www.postgresql.org/message-id/CAHGQGwHVYqbX=a+zo+avfbvhlgoypo9g_qdkbabexgxbvgd...@mail.gmail.com

Questions:
- Is there a single series of WAL files that forms the full history of the
old timeline? I.e., if we had every WAL file from each node, could we
put them together in a way that would allow us to replay them from a
backup made before the failover?

- Is there a better way to handle any of this?
- Is there something we're missing?

Thanks,
-Alan Bryant




Re: [GENERAL] pg_xlog is getting bigger

2013-11-25 Thread Alvaro Herrera
Pankaj Mishra wrote:
> Dear Kevin,
> 
> If you run a checkpoint in the database then you can delete the pg_xlog files
> other than the current timestamp. We do this to our customer on regular basis
> when pg_xlog exceed more then 15 gb

That is "somewhat" correct (meaning you are less likely to see data
corruption in that case), but it is still misleading advice.  The system
is prepared to delete the files it no longer needs automatically.  If
you have WAL files piling up for no apparent reason, the most likely
explanation is that the server is misconfigured, for example due to a
failing archive_command, or to a high "keep" setting for WAL files
(which is used in some replication scenarios).  In those cases, the real
solution is to fix that problem, not delete files yourself.
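
A quick way to check the usual suspects (a sketch; pg_settings exists on all
releases in question):

    SELECT name, setting
    FROM pg_settings
    WHERE name IN ('archive_mode', 'archive_command', 'wal_keep_segments');

A failing archive_command will also show up as repeated errors in the server
log.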




-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [GENERAL] pg_xlog is getting bigger

2013-11-25 Thread Andrew Sullivan
On Mon, Nov 25, 2013 at 03:00:56PM +0530, Pankaj Mishra wrote:
> We do this to our customer on regular basis when
> pg_xlog exceed more then 15 gb

Wow.  Your poor customers.  But I'm glad you phrased it "do this to
our customer" rather than "do this _for_ our customer", since your
customer certainly is having something done to them.

-- 
Andrew Sullivan
a...@crankycanuck.ca




Re: [GENERAL] Query runs slow

2013-11-25 Thread Kevin Grittner
Hengky Liwandouw  wrote:
> On Nov 24, 2013, at 11:21 PM, Kevin Grittner wrote:
>> Hengky Lie  wrote:
>>
>>> this query takes a long time to process. It takes around 48
>>> seconds to calculate about 690 thousand records.
>>
>>> Is there any way to make calculation faster ?
>>
>> Quite possibly -- that's about 70 microseconds per row, and even
>> fairly complex queries can often do better than that.

> After reading the link you gave to me, changing shared_buffers to
> 25% (512MB) of available RAM and effective_cache_size to 1500MB
> (about 75% of available RAM) makes the query run very fast.
> Postgres only needs 1.8 seconds to display the result.

That's about 2.6 microseconds per row (1.8 s for ~690 thousand rows).  Given
the complexity of the query, it might be hard to improve on that.  A simple
tablescan that returns all rows generally takes 1 to 2 microseconds per row
on the hardware I generally use.

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




[GENERAL] Debugging of C functions

2013-11-25 Thread Janek Sendrowski
Hi,
 
How is it possible to debug user-defined functions written in C?
I can't just include all the files and run it.
 
Does anyone have an idea?
 
Janek Sendrowski




[GENERAL] wiki on monitoring locks has queries that don't seem to work

2013-11-25 Thread Scott Marlowe
So I'm running 8.4 and go to this page:
http://wiki.postgresql.org/wiki/Lock_Monitoring

I have a query that is definitely blocked by another query. I run the
first two queries, the ones that explicitly say that they're for
<9.2, and neither produces any output. The third query gives me a list
of 182 rows, only a few of which are actually locked in any meaningful
ways.

Now if I run this query:

 select relname,pg_locks.* from pg_class,pg_locks where
relfilenode=relation and not granted;

I get the one row for the update / insert / delete that is getting blocked.

I could swear that the original query:

 select bl.pid as blocked_pid, a.usename as blocked_user,
        kl.pid as blocking_pid, ka.usename as blocking_user,
        a.current_query as blocked_statement
   from pg_catalog.pg_locks bl
   join pg_catalog.pg_stat_activity a
     on bl.pid = a.procpid
   join pg_catalog.pg_locks kl
   join pg_catalog.pg_stat_activity ka
     on kl.pid = ka.procpid
     on bl.transactionid = kl.transactionid and bl.pid != kl.pid
  where not bl.granted;

worked once upon a time, but no longer. If anyone has a newer query that
works on <9.2 that they could pass along, that would be great.
-- 
To understand recursion, one must first understand recursion.




Re: [GENERAL] passing tables name into a cursor

2013-11-25 Thread Merlin Moncure
On Mon, Nov 25, 2013 at 11:43 AM, John Kelly  wrote:
> [snip]


The problem is in the EXECUTE statement, not in the query being executed.
My guess is you are relying on some feature that is not present in
8.4.  Can we see the complete execute statement?
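
For what it's worth, the construct that should work on 8.4 is an unbound
refcursor opened with OPEN ... FOR EXECUTE - a rough, untested sketch:

    DECLARE
        c_18a refcursor;
    BEGIN
        OPEN c_18a FOR EXECUTE SQL_18a;
        -- FETCH ... and CLOSE as before
    END;

A bound cursor (c_18a CURSOR IS ...) takes a query, not a string, which is
why the "cursor is execute" form is rejected.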

merlin




Re: [GENERAL] wiki on monitoring locks has queries that don't seem to work

2013-11-25 Thread Jeff Janes
On Mon, Nov 25, 2013 at 11:57 AM, Scott Marlowe wrote:

> So I'm running 8.4 and go to this page:
> http://wiki.postgresql.org/wiki/Lock_Monitoring
>
> I have a query that is definitely blocked by another query. I run the
> first two queries, the ones that explicitly say that they're for
> <9.2, and neither produces any output. The third query gives me a list
> of 182 rows, only a few of which are actually locked in any meaningful
> ways.
>

Those queries only find row-level locks, not object-level locks (as
indicated).  I suspect that you are blocking on object-level locks. Maybe
you will have better luck with
http://wiki.postgresql.org/wiki/Lock_dependency_information.
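
If it helps, here is a rough pre-9.2 sketch that pairs waiters with holders of
the same lock, including relation-level locks (untested - treat it as a
starting point):

    SELECT waiter.pid AS blocked_pid,
           holder.pid AS blocking_pid,
           waiter.locktype,
           waiter.relation::regclass AS relation
      FROM pg_locks waiter
      JOIN pg_locks holder
        ON holder.locktype = waiter.locktype
       AND holder.database IS NOT DISTINCT FROM waiter.database
       AND holder.relation IS NOT DISTINCT FROM waiter.relation
       AND holder.transactionid IS NOT DISTINCT FROM waiter.transactionid
       AND holder.pid <> waiter.pid
     WHERE NOT waiter.granted
       AND holder.granted;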


>
> Now if I run this query:
>
>  select relname,pg_locks.* from pg_class,pg_locks where
> relfilenode=relation and not granted;
>
> I get the one row for the update / insert / delete that is getting blocked.
>

Can you show us that row?


Cheers,

Jeff


Re: [GENERAL] Easiest way to CREATE EXTENSION when upgrading from 9.0 to 9.2?

2013-11-25 Thread Joshua Boyd
Ah - that's what unpackaged meant.  I read that page, but didn’t fully 
understand what it was referring to.  Thank you for your quick reply.  :)

Josh

-Original Message-
From: Kevin Grittner [mailto:kgri...@ymail.com] 
Sent: Friday, November 22, 2013 1:21 PM
To: Joshua Boyd; pgsql-general@postgresql.org
Subject: Re: [GENERAL] Easiest way to CREATE EXTENSION when upgrading from 9.0 
to 9.2?

Joshua Boyd  wrote:

> We have a pre-production environment that was running postgresql
> 9.0 and we created the extension “pgcrypto” in it by feeding psql the 
> contrib/pgcrypto.sql.  We just upgraded to 9.2 (via
> pg_upgrade) – naturally all the functions still exist, but pgcrypto is 
> not a registered extension with the database, nor is it the most 
> recent sql for the extension.  What would be the easiest way to both 
> register and upgrade to the most recent version of pgcrypto in our new 
> version? Or is running a rollback script to remove all of those and 
> then running CREATE EXTENSION our only option?

http://www.postgresql.org/docs/9.2/interactive/sql-createextension.html

CREATE EXTENSION pgcrypto FROM unpackaged;
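
If the registered version still lags the current packaged one after that, an
explicit update should bring it in line:

ALTER EXTENSION pgcrypto UPDATE;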

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [GENERAL] Debugging of C functions

2013-11-25 Thread hamann . w
>> Hi,
>>
>> How is it possible to debug user-defined functions written in C?
>> I can't just include all the files and run it.
>>
>> Does anyone have an idea?
>>
>> Janek Sendrowski
>>

Hi,

I recall compiling with debug as normal, adding a printf followed by a 30 sec
sleep, and using the pause to attach gdb.
If it is at all possible, you should try to run the database in single-user
mode rather than under normal server operation.
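
A sketch of that pause trick inside the C function (MyProcPid is the backend's
PID, declared in miscadmin.h):

    elog(NOTICE, "attach gdb to backend PID %d", MyProcPid);
    pg_usleep(30 * 1000000L);   /* 30 s window: gdb -p <pid>, set breakpoints, continue */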

Regards
Wolfgang Hamann





[GENERAL] Inserting rows containing composite foreign keys

2013-11-25 Thread Nelson Green
Hello,

I have a question regarding inserting a row into a child table when the PK of 
the parent table is a composite key.

In this case, I have funding sources that buy projects, and projects generate
jobs. Project numbers are not universally unique, but are unique to a funding
source. Therefore the PK of the projects table is a composite of the funding
source FK and a sequentially incrementing number per funding source. The jobs
table then has a PK that is an incrementing sequential number per composite
source/project FK:

CREATE TABLE fundsrc
(
   fundsrc_number INTEGER,
   fundsrc_name TEXT,
   fundsrc_desc TEXT,
   PRIMARY KEY (fundsrc_number)
);
INSERT INTO fundsrc
   VALUES (1, 'source01', 'first source'), (2, 'source02', 'second source');

CREATE TABLE projects
(
   fundsrc_number INTEGER,
   project_seq INTEGER,
   project_name TEXT,
   project_desc TEXT,
   CONSTRAINT fk_fundsrc FOREIGN KEY (fundsrc_number)
  REFERENCES fundsrc(fundsrc_number),
   PRIMARY KEY (fundsrc_number, project_seq)
);
CREATE INDEX project_id
   ON projects(fundsrc_number, project_seq);
INSERT INTO projects
   VALUES ((SELECT fundsrc_number FROM fundsrc
            WHERE fundsrc_name = 'source01'),
           1, 'proj1-1', 'first source01 project'),
          ((SELECT fundsrc_number FROM fundsrc
            WHERE fundsrc_name = 'source02'),
           1, 'proj2-1', 'first source02 project');

CREATE TABLE jobs
(
   fundsrc_number INTEGER,
   project_seq INTEGER,
   job_seq INTEGER,
   job_name TEXT,
   job_desc TEXT,
   CONSTRAINT fk_project FOREIGN KEY (fundsrc_number, project_seq)
  REFERENCES projects(fundsrc_number, project_seq),
   PRIMARY KEY (fundsrc_number, project_seq, job_seq)
);
CREATE INDEX job_id
   ON jobs(fundsrc_number, project_seq, job_seq);

When inserting a record into the jobs table that references projects by name, 
do I have to query the projects table twice, once to get the funding source 
number, and once to get the project sequence number, even though both results 
will return the same row? Or put another way, is there a way to insert a row 
into the jobs table without having to perform two sub-queries for the same row, 
thus avoiding this:

INSERT INTO jobs
   VALUES ((SELECT fundsrc_number FROM projects
            WHERE project_name = 'proj1-1'),
           (SELECT project_seq FROM projects
            WHERE project_name = 'proj1-1'),
           1, 'job1-1.1', 'first project 1-1 job');

Note that the name attributes are not candidate keys because they are to remain 
changeable.

This is not a terribly big deal, but I ask to satisfy my curiosity.

Thanks!
  

Re: [GENERAL] wiki on monitoring locks has queries that don't seem to work

2013-11-25 Thread Scott Marlowe
On Mon, Nov 25, 2013 at 1:47 PM, Jeff Janes  wrote:
> [snip]
>
> Can you show us that row?

Well, it's gone now - it was a delete on sl_log_1 in Slony. I'll wait
for it to happen again and repost it then. Meanwhile I'll try the link
you posted. Thanks.

-- 
To understand recursion, one must first understand recursion.




[GENERAL] Wrap around id failure and after effects

2013-11-25 Thread Arun P . L
Hi all,
We had a wraparound failure in the db and most of the tables and data were
missing. So we ran a full vacuum on the db, and after that the tables
reappeared, but now the problem is that all the tables show up twice when
listing them with \dt. Also, after the vacuum we received the following warning:
INFO:  free space map: 48 relations, 29977 pages stored; 134880 total pages needed
DETAIL:  Allocated FSM size: 1000 relations + 2 pages = 215 kB shared memory.
WARNING:  some databases have not been vacuumed in over 2 billion transactions
DETAIL:  You may have already suffered transaction-wraparound data loss.

Is this an error that happened during the vacuum?  If so, what can be done next
to prevent data loss? The vacuum was not done as a superuser; we are doing a
second vacuum as the superuser now. And what further steps should be followed
now, like reindexing, etc.?
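
For reference, we are checking the remaining transaction headroom per database
with this (assuming it is the right check for our release):

SELECT datname, age(datfrozenxid) FROM pg_database ORDER BY 2 DESC;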
Please advise...
Thanks for your help in advance,
Arun

  

Re: [GENERAL] Inserting rows containing composite foreign keys

2013-11-25 Thread Thomas Kellerer
Nelson Green, 25.11.2013 23:01:
> Hello,
> When inserting a record into the jobs table that references projects by name, 
> do I have to query the projects table twice, 
> once to get the funding source number, and once to get the project sequence 
> number, even though both results will 
> return the same row? Or put another way, is there a way to insert a row into 
> the jobs table without having to 
> perform two sub-queries for the same row, thus avoiding this:
> 
> INSERT INTO jobs
>VALUES ((SELECT fundsrc_number FROM projects
> WHERE project_name = 'proj1-1'),
>(SELECT project_seq FROM projects
> WHERE project_name = 'proj1-1'),
> 1, 'job1-1.1', 'first project 1-1 job');
> 

Use an INSERT based on a SELECT, not based on VALUES:

  INSERT INTO projects (fundsrc_number, project_seq, project_name, project_desc)
  SELECT fundsrc_number, 1, 'proj1-1', 'first source01 project'
  FROM fundsrc
  WHERE fundsrc_name = 'source01';

  INSERT INTO jobs (fundsrc_number, project_seq, job_seq, job_name, job_desc)
  SELECT fundsrc_number, project_seq, 1, 'job1-1.1', 'first project 1-1 job'
  FROM projects
  WHERE project_name = 'proj1-1';


Note that it's good coding style to always specify the columns in an INSERT
statement. It makes your statements more robust against changes.




