[GENERAL] noobie join question

2015-05-11 Thread Steve Clark

Hi List,
I am having trouble trying to figure out
how to get the result listed at the bottom.

I have 3 tables: units, a types table that has a description of each unit,
and a table that lists associations between the units. I can't figure out
how to do the proper joins. Any pointers would be appreciated.

create table types (
   id integer,
   descr varchar(30)
);

COPY types (id, descr) FROM stdin;
1	descr 1
2	descr 2
3	descr 3
4	descr 4
\.

create table units (
   uid integer,
   udevice varchar(30),
   utype integer
);

COPY units (uid, udevice, utype) FROM stdin;
1	a	1
2	b	1
3	c	4
4	d	3
\.


create table assoc (
   aid integer,
   src_id integer,
   dest_id integer
);

COPY assoc (aid, src_id, dest_id) FROM stdin;
1	1	2
2	1	3
3	3	4
4	4	2
\.

desired result
a | descr 1 | b | descr 1
a | descr 1 | c | descr 4
c | descr 4 | d | descr 3
d | descr 3 | b | descr 1


Thanks,
Steve









Re: [GENERAL] noobie join question

2015-05-11 Thread Steve Clark

On 05/11/2015 07:16 AM, Oliver Elphick wrote:

On Mon, 2015-05-11 at 06:46 -0400, Steve Clark wrote:

Hi List,
I am having trouble trying to figure out
how to get the result listed at the bottom.

I have 3 tables: units, a types table that has a description of each unit,
and a table that lists associations between the units. I can't figure out
how to do the proper joins. Any pointers would be appreciated.

SELECT us.udevice, ts.descr, ud.udevice, td.descr
  FROM assoc AS a
  LEFT JOIN units AS us ON a.src_id = us.uid
  LEFT JOIN types AS ts ON us.utype = ts.id
  LEFT JOIN units AS ud ON a.dest_id = ud.uid
  LEFT JOIN types AS td ON ud.utype = td.id;
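
Since every assoc row in the sample references existing units and types, plain
inner joins should give the same four rows (untested sketch); the LEFT JOINs
only matter if an association can point at a missing unit or type:

SELECT us.udevice, ts.descr, ud.udevice, td.descr
  FROM assoc AS a
  JOIN units AS us ON a.src_id = us.uid
  JOIN types AS ts ON us.utype = ts.id
  JOIN units AS ud ON a.dest_id = ud.uid
  JOIN types AS td ON ud.utype = td.id;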





Thanks Oliver - that worked perfectly.


--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] dumb question

2016-06-02 Thread Steve Clark

Hi List,

I am a noob trying to do something that seems like it should be easy but I 
can't figure it out.

I have a table like so:

 id | ref_id | sts
----+--------+-----
  1 |        |  0
  2 |      1 |  1
  3 |        |  0
  4 |        |  0
  5 |      4 |  1
  6 |        |  0
  7 |      6 |  1

I want to find the max(id) whose sts is 0 but whose id is not referenced by 
ref_id.

so the answer would be id=3.

Thanks for any pointers,
Steve

--





Re: [GENERAL] dumb question

2016-06-02 Thread Steve Clark

Thanks all the below seem to do the trick.

On 06/02/2016 01:58 PM, David G. Johnston wrote:

select max(id) from yourtable where sts=0 and id not in (select ref_id from 
yourtable);
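
One caveat for the archives: NOT IN matches nothing if the subselect returns
any NULLs, and ref_id is NULL for most rows here, so a null-safe variant
along these lines may be safer (untested sketch):

select max(id) from yourtable t
where sts = 0
  and not exists (select 1 from yourtable r where r.ref_id = t.id);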



--
Stephen Clark





Re: [GENERAL] dumb question

2016-06-02 Thread Steve Clark

On 06/02/2016 04:07 PM, Dann Corbit wrote:

This is your request, translated directly into SQL

select max(id) from sometable where sts=0 and ref_id IS NULL

Looking at your sample, it seems that sts is always 1 when ref_id exists, so it 
may possibly simplify to:

select max(id) from sometable where sts=0

-Original Message-
From: pgsql-general-ow...@postgresql.org 
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Steve Clark
Sent: Thursday, June 2, 2016 9:56 AM
To: pgsql 
Subject: [GENERAL] dumb question

Hi List,

I am a noob trying to do something that seems like it should be easy but I 
can't figure it out.

I have a table like so:

 id | ref_id | sts
----+--------+-----
  1 |        |  0
  2 |      1 |  1
  3 |        |  0
  4 |        |  0
  5 |      4 |  1
  6 |        |  0
  7 |      6 |  1

I want to find the max(id) whose sts is 0 but whose id is not referenced by 
ref_id.

so the answer would be id=3.

Thanks for any pointers,
Steve


Hi Dann,

Thanks for the response - but I think that would give me id=6 and not id=3.

--
Stephen Clark





[GENERAL] deadlock error - version 8.4 on CentOS 6

2016-10-28 Thread Steve Clark

Hello List,

I am occasionally seeing the following error:
ALERT  3 sqlcode=-400 errmsg=deadlock detected on line 3351
from an application written using ecpg when it tries to update a table.
Can autovacuum be causing this, since no one else is updating this table?

Thanks,


--
Stephen Clark





Re: [GENERAL] deadlock error - version 8.4 on CentOS 6

2016-10-28 Thread Steve Clark

On 10/28/2016 09:15 AM, Adrian Klaver wrote:

On 10/28/2016 05:28 AM, Steve Clark wrote:

Hello List,

I am occasionally seeing the following error:
ALERT  3 sqlcode=-400 errmsg=deadlock detected on line 3351

So what exactly is it doing at line 3351?


from an application written using ecpg when trying an update to the table.
Can autovacuum be causing this,
since no one else is updating this database table.

Is there more than one instance of the application running?


Thanks,





No. But I examined the pg_log/log_file and saw an error indicating it was 
autovacuum:


2016-10-27 09:47:02 EDT:srm2api:12968:LOG:  sending cancel to blocking 
autovacuum PID 12874
2016-10-27 09:47:02 EDT:srm2api:12968:DETAIL:  Process 12968 waits for 
ExclusiveLock on relation 955454549 of database 955447411.
2016-10-27 09:47:02 EDT:srm2api:12968:STATEMENT:  lock table t_unit_status_log 
in exclusive mode
2016-10-27 09:47:02 EDT::12874:ERROR:  canceling autovacuum task
2016-10-27 09:47:02 EDT::12874:CONTEXT:  automatic vacuum of table 
"srm2.public.t_unit_status_log"
2016-10-27 09:47:02 EDT:srm2api:9189:ERROR:  deadlock detected at character 8
2016-10-27 09:47:02 EDT:srm2api:9189:DETAIL:  Process 9189 waits for 
RowExclusiveLock on relation 955454549 of database 955447411; blocked by 
process 12968.
Process 12968 waits for ExclusiveLock on relation 955454518 of database 
955447411; blocked by process 9189.
Process 9189: update t_unit_status_log set status_date = now ( ) , 
unit_active = 'y' , last_updated_date = now ( ) , last_updated_by = 
current_user , devices_down = $1  where unit_serial_no = $2
Process 12968: lock table t_unit in exclusive mode

This is at the same time and same table that my application reported the error 
on.

So I feel pretty confident this is the issue. I guess I should retry the update 
in my application.

Thanks,


--
Stephen Clark



Re: [GENERAL] deadlock error - version 8.4 on CentOS 6

2016-10-28 Thread Steve Clark

On 10/28/2016 09:48 AM, Tom Lane wrote:

Steve Clark  writes:

No. But I examined the pg_log/log_file and saw an error indicating it was 
autovacuum:
2016-10-27 09:47:02 EDT:srm2api:12968:LOG:  sending cancel to blocking 
autovacuum PID 12874
2016-10-27 09:47:02 EDT:srm2api:12968:DETAIL:  Process 12968 waits for 
ExclusiveLock on relation 955454549 of database 955447411.
2016-10-27 09:47:02 EDT:srm2api:12968:STATEMENT:  lock table t_unit_status_log 
in exclusive mode
2016-10-27 09:47:02 EDT::12874:ERROR:  canceling autovacuum task
2016-10-27 09:47:02 EDT::12874:CONTEXT:  automatic vacuum of table 
"srm2.public.t_unit_status_log"

That kicked the autovacuum off the table, but it didn't help because you
still had a deadlock condition afterwards:


2016-10-27 09:47:02 EDT:srm2api:9189:ERROR:  deadlock detected at character 8
2016-10-27 09:47:02 EDT:srm2api:9189:DETAIL:  Process 9189 waits for 
RowExclusiveLock on relation 955454549 of database 955447411; blocked by 
process 12968.
  Process 12968 waits for ExclusiveLock on relation 955454518 of 
database 955447411; blocked by process 9189.
  Process 9189: update t_unit_status_log set status_date = now ( ) , 
unit_active = 'y' , last_updated_date = now ( ) , last_updated_by = 
current_user , devices_down = $1  where unit_serial_no = $2
  Process 12968: lock table t_unit in exclusive mode
So I feel pretty confident this is the issue. I guess I should retry the update 
in my application.

Retrying might be a usable band-aid, but really this is an application
logic error.  The code that is trying to do "lock table t_unit in
exclusive mode" must already hold some lower-level lock on t_unit, which
is blocking whatever the "update t_unit_status_log" command wants to do
with t_unit.  Looks like a classic lock-strength-upgrade mistake to me.

regards, tom lane


Oops - I forgot there is another process that runs every minute and takes about 
1 second to run that does an
exclusive lock on t_unit and t_unit_status_log.

I only see this error maybe once or twice a day, so I am thinking of waiting 1 
second and retrying when I see this error.

Thoughts?
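
Something along these lines, perhaps (an ecpg sketch; host variable names are
illustrative and error handling is trimmed):

/* EXEC SQL INCLUDE sqlca; at the top of the file */
int tries;
for (tries = 0; tries < 3; tries++) {
    EXEC SQL UPDATE t_unit_status_log
        SET status_date = now(), devices_down = :devices_down
        WHERE unit_serial_no = :serial_no;
    if (sqlca.sqlcode != -400)  /* -400 is the sqlcode the ALERT reported */
        break;
    sleep(1);                   /* wait out the once-a-minute exclusive locker */
}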

--
Stephen Clark





Re: [GENERAL] deadlock error - version 8.4 on CentOS 6

2016-10-28 Thread Steve Clark

On 10/28/2016 10:25 AM, Tom Lane wrote:

Steve Clark  writes:

On 10/28/2016 09:48 AM, Tom Lane wrote:

Retrying might be a usable band-aid, but really this is an application
logic error.  The code that is trying to do "lock table t_unit in
exclusive mode" must already hold some lower-level lock on t_unit, which
is blocking whatever the "update t_unit_status_log" command wants to do
with t_unit.  Looks like a classic lock-strength-upgrade mistake to me.

Oops - I forgot there is another process that runs every minute and
takes about 1 second to run that does an exclusive lock on t_unit and
t_unit_status_log.

The problem here doesn't seem to be that; it's that whatever transaction
is doing the "lock table" has *already* got a non-exclusive lock on
t_unit.  That's just bad programming.  Take the strongest lock you need
earliest in the transaction.

regards, tom lane


I want to thank all the people that took the time to provide some elucidation 
on my problem.

The original code that was doing the exclusive locks was written in 2003 on
ver 7.x, which according to comments in the code did not support declaring a
cursor FOR UPDATE in ecpg, so the programmer at that time opted to lock the
two tables.

I just changed the code to remove the two exclusive locks and use "for update"
on the cursor, and haven't seen a lock issue in the pg_log file since.
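
In ecpg terms the change amounts to something like this (a sketch; the cursor
name and WHERE clause are illustrative):

EXEC SQL DECLARE unit_cur CURSOR FOR
    SELECT unit_serial_no FROM t_unit_status_log
    WHERE unit_active = 'y' FOR UPDATE;

which locks only the rows the cursor actually visits instead of the whole
table.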

Regards,

--
Stephen Clark



[GENERAL] Postgresql 94 from PostgreSQL RPM Repository (with Yum)

2016-11-10 Thread Steve Clark

Hi

I installed the following packages:
postgresql94-server-9.4.9-1PGDG.rhel6.x86_64
postgresql94-plperl-9.4.9-1PGDG.rhel6.x86_64
postgresql94-plpython-9.4.9-1PGDG.rhel6.x86_64
postgresql94-contrib-9.4.9-1PGDG.rhel6.x86_64
postgresql94-devel-9.4.9-1PGDG.rhel6.x86_64
postgresql94-libs-9.4.9-1PGDG.rhel6.x86_64
postgresql94-9.4.9-1PGDG.rhel6.x86_64

Then I tried to build pmacct and its configure complained it couldn't find the
libpq library. I looked for a pkg-config file for the above packages but
couldn't find one.

What am I missing?

Thanks,
Steve
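
For what it's worth, the PGDG packages install pg_config under
/usr/pgsql-9.4/bin rather than on the default PATH, so something like this
before re-running configure may be all that's needed (assuming pmacct's
configure locates libpq via pg_config):

export PATH=/usr/pgsql-9.4/bin:$PATH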






[GENERAL] Get sum of sums

2016-05-03 Thread Steve Clark

Hi List,

I have the following table that has netflow data. I can get the top ten 
receivers by the query below - but I would also like to get
at the same time a grand total of the RX Bytes. I am not exactly sure how to do 
it. This is with version 8.4.

select ip_dst as "Receiver", sum(bytes) as "RX Bytes" from acct_v9
where stamp_inserted >= '2016-04-26' and stamp_inserted <= '2016-04-30'
and tag=246 group by ip_dst order by "RX Bytes" desc limit 10;

    Receiver    |  RX Bytes
----------------+-------------
 172.24.110.93  | 40363536915
 172.24.110.81  |  6496041533
 172.24.110.123 |  4891514009
 172.24.16.10   |  4540333516
 172.24.110.151 |  4101253631
 192.168.198.71 |  3303066724
 172.24.110.121 |  2529532947
 172.24.110.101 |  2506527294
 172.21.64.99   |  2472842640
 172.24.110.83  |  2232550271


Thanks,
Steve
--
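
For the archives: 8.4 does have window functions, so one approach (untested
sketch) is to compute a windowed grand total alongside the grouped sums. The
window is evaluated after GROUP BY but before LIMIT, so every returned row
carries the total over all receivers:

select ip_dst as "Receiver",
       sum(bytes) as "RX Bytes",
       sum(sum(bytes)) over () as "Total RX Bytes"
  from acct_v9
 where stamp_inserted >= '2016-04-26'
   and stamp_inserted <= '2016-04-30'
   and tag = 246
 group by ip_dst
 order by "RX Bytes" desc
 limit 10;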



[GENERAL] losing db user password going from 7.4 to 8.2

2008-01-17 Thread Steve Clark

Hello List,

I am doing a pg_dumpall -c on 7.4.
I then use psql to load into 8.2 everything seems to be right except my
db user passwords don't work anymore.

What am I missing.

I have already tried starting 8.2 postgres with both
#password_encryption = on
password_encryption = off

then reloading the database dump.

Thanks,
Steve



[GENERAL] pg_dumpall

2008-01-17 Thread Steve Clark

Hello List,

the man page for pg_dump say:
pg_dump is a utility for backing up a  PostgreSQL  database.  It  makes
   consistent  backups  even  if  the database is being used 
concurrently.


does pg_dumpall make consistent backups if the database is being used 
concurrently?

Even though the man page doesn't say it does.

Thanks,
Steve



Re: [GENERAL] losing db user password going from 7.4 to 8.2

2008-01-17 Thread Steve Clark

Scott Marlowe wrote:

On Jan 17, 2008 10:35 AM, Steve Clark <[EMAIL PROTECTED]> wrote:


Hello List,

I am doing a pg_dumpall -c on 7.4.
I then use psql to load into 8.2 everything seems to be right except my
db user passwords don't work anymore.

What am I missing.




What error message are you getting?


Duh - my bad - my browser had the wrong password in it - so I thought
things were broken.


Sorry for the noise - as he hides his head in shame.

Steve



Re: [GENERAL] pg_dumpall

2008-01-18 Thread Steve Clark

Erik Jones wrote:

On Jan 17, 2008, at 1:08 PM, Greg Smith wrote:



On Thu, 17 Jan 2008, Tom Lane wrote:


There isn't any good way to guarantee time coherence of dumps  
across two databases.


Whether there's a good way depends on what you're already doing.   
If you're going to the trouble of making a backup using PITR  
anyway, it's not hard to stop applying new logs to that replica and  
dump from it to get a point in time backup across all the  
databases.  That's kind of painful now because you have to start  
the server to run pg_dumpall, so resuming recovery is difficult,  
but you can play filesystem tricks to make that easier.



Actually, this exact scenario brings up a question I was thinking of  
last night.  If you stop a PITR standby server and bring it up to  
dump from, will all of the database file have something written to  
them at some point during the dump?  Transactional information is  
what I'd assume would be written, if so, but I'm not really sure of  
the low level details there.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Thanks for everyone that replied to my query about pg_dumpall.


Now another question/issue - anytime I use createdb the resulting db ends up
with UTF-8 encoding unless I use the -E switch. Is there a way to make the
default be sql_ascii? postgres version is 8.2.5.
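
(The cluster-wide default comes from initdb, so initdb -E SQL_ASCII would set
it for new clusters; per database, something like

createdb -E SQL_ASCII -T template0 mydb

should also work - mydb is a placeholder.)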

Thanks again
Steve




Re: [GENERAL] 7.4.6 to 8.2.5 - ' changes to $_$

2008-01-18 Thread Steve Clark

Bricklen Anderson wrote:

Steve Clark wrote:


function from 7.4.x postgres

CREATE FUNCTION update_dns(text, text) RETURNS integer
AS 'UPDATE domain_details SET domain = $2 WHERE domain = $1;
DELETE from domains where domain = $1;
SELECT 1 AS ignore;'
LANGUAGE sql;

I load it into 8.2.5 - then dump it out and it is changed to


CREATE FUNCTION update_dns(text, text) RETURNS integer
AS $_$UPDATE domain_details SET domain = $2 WHERE domain = $1;
DELETE from domains where domain = $1;
SELECT 1 AS ignore;$_$
LANGUAGE sql;

notice $_$ where the single ' used to be.

Is there some way to keep this from happening?

The reason is we have systems in the field that have configuration
information stored in 7.4.x. We want to upload that db info, load it into an
8.2.5 db, massage it, then send it back to the unit in the field. I realize
there are things I am going to have to fix up in the 8.2.5 dump to be able to
load it back into the 7.4.x db, but I want to minimize that as much as
possible.

We have some units in the field running 8.1.3 and it does not change the 
' to $_$.



Thanks,
Steve



I think "--disable-dollar-quoting" will work. (pg_dump --help)




Thanks a lot. I missed that option in the man page - but now I see it.



[GENERAL] 7.4.6 to 8.2.5 - ' changes to $_$

2008-01-18 Thread Steve Clark


function from 7.4.x postgres

CREATE FUNCTION update_dns(text, text) RETURNS integer
 AS 'UPDATE domain_details SET domain = $2 WHERE domain = $1;
 DELETE from domains where domain = $1;
 SELECT 1 AS ignore;'
 LANGUAGE sql;

I load it into 8.2.5 - then dump it out and it is changed to


CREATE FUNCTION update_dns(text, text) RETURNS integer
 AS $_$UPDATE domain_details SET domain = $2 WHERE domain = $1;
 DELETE from domains where domain = $1;
 SELECT 1 AS ignore;$_$
 LANGUAGE sql;

notice $_$ where the single ' used to be.

Is there some way to keep this from happening?

The reason is we have systems in the field that have configuration
information stored in 7.4.x. We want to upload that db info, load it into an
8.2.5 db, massage it, then send it back to the unit in the field. I realize
there are things I am going to have to fix up in the 8.2.5 dump to be able to
load it back into the 7.4.x db, but I want to minimize that as much as
possible.

We have some units in the field running 8.1.3 and it does not change 
the ' to $_$.



Thanks,
Steve



Re: [GENERAL] best way to query

2008-01-25 Thread Steve Clark

Tom Lane wrote:

Steve Clark <[EMAIL PROTECTED]> writes:


explain shows:




 Aggregate  (cost=4712921585.30..4712921585.31 rows=1 width=0)
   ->  Seq Scan on t_event_ack_log a  (cost=103170.29..4712920878.60 rows=282677 width=0)
         Filter: (NOT (subplan))
         SubPlan
           ->  Materialize  (cost=103170.29..117301.92 rows=1016163 width=4)
                 ->  Index Scan using pk_tuel_eln on t_unit_event_log  (cost=0.00..98184.12 rows=1016163 width=4)



Yeah, that's going to suck.  A brute force solution is to see if you
can get it to switch to a "hashed subplan" by increasing work_mem.

Also, whatever is the ORDER BY for?

regards, tom lane



without the order by it wants to do a seq scan of t_unit_event_log.
see below:

 explain select count(*) from t_event_ack_log where event_log_no not in
 (select event_log_no from t_unit_event_log);

                                     QUERY PLAN
------------------------------------------------------------------------------------
 Aggregate  (cost=12144872193.82..12144872193.82 rows=1 width=0)
   ->  Seq Scan on t_event_ack_log  (cost=0.00..12144871485.07 rows=283497 width=0)
         Filter: (NOT (subplan))
         SubPlan
           ->  Seq Scan on t_unit_event_log  (cost=0.00..40286.56 rows=1021156 width=4)
(5 rows)


Will try increasing work_mem.
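
(For one session that would be something like: set work_mem = '64MB'; - the
value is illustrative.)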



Re: [GENERAL] best way to query

2008-01-25 Thread Steve Clark

Daniel Verite wrote:

Steve Clark wrote:



any way i have 2 tables - A and B.
each table has a key field and if a row is in B it should have a
corresponding row in A - but there's the problem, it doesn't for all the
rows in B.

So I want to do something like
delete from B where key not in (select key from A order by key);

The problem is there are about 1,000,000 rows in A and 300,000 rows in
B. I let the above run all night and it was still running the next
morning. Does anyone have an idea of a better way.



An outer join is sometimes spectacularly more efficient for this
particular kind of query.

I'd suggest you try:

delete from B where key in
  (select B.key from B left outer join A on A.key=B.key
   where A.key is null)


WOW!

this runs in about 10 seconds - thanks Daniel.

 explain select count(*) from t_event_ack_log where event_log_no in
 (select t_event_ack_log.event_log_no from t_event_ack_log left outer
  join t_unit_event_log a on a.event_log_no=t_event_ack_log.event_log_no
  where a.event_log_no is null);

                                     QUERY PLAN
------------------------------------------------------------------------------------
 Aggregate  (cost=128349.56..128349.57 rows=1 width=0)
   ->  Hash Join  (cost=94512.91..126935.36 rows=565681 width=0)
         Hash Cond: (public.t_event_ack_log.event_log_no = public.t_event_ack_log.event_log_no)
         ->  Seq Scan on t_event_ack_log  (cost=0.00..14759.85 rows=565685 width=4)
         ->  Hash  (cost=92609.85..92609.85 rows=152245 width=4)
               ->  HashAggregate  (cost=91087.40..92609.85 rows=152245 width=4)
                     ->  Hash Left Join  (cost=57337.95..90380.29 rows=282842 width=4)
                           Hash Cond: (public.t_event_ack_log.event_log_no = a.event_log_no)
                           Filter: (a.event_log_no IS NULL)
                           ->  Seq Scan on t_event_ack_log  (cost=0.00..14759.85 rows=565685 width=4)
                           ->  Hash  (cost=40696.09..40696.09 rows=1014309 width=4)
                                 ->  Seq Scan on t_unit_event_log a  (cost=0.00..40696.09 rows=1014309 width=4)






[GENERAL] best way to query

2008-01-25 Thread Steve Clark

Hello List,

this is a noobie question:

I have had to take over an existing system - it was supposed to have some
constraints that prevented dangling references - but...

any way i have 2 tables - A and B.
each table has a key field and if a row is in B it should have a
corresponding row in A - but there's the problem, it doesn't for all the
rows in B.

So I want to do something like
delete from B where key not in (select key from A order by key);

The problem is there are about 1,000,000 rows in A and 300,000 rows in
B. I let the above run all night and it was still running the next
morning. Does anyone have an idea of a better way.


B = t_event_ack_log
A = t_unit_event_log

explain shows:

 Aggregate  (cost=4712921585.30..4712921585.31 rows=1 width=0)
   ->  Seq Scan on t_event_ack_log a  (cost=103170.29..4712920878.60 rows=282677 width=0)
         Filter: (NOT (subplan))
         SubPlan
           ->  Materialize  (cost=103170.29..117301.92 rows=1016163 width=4)
                 ->  Index Scan using pk_tuel_eln on t_unit_event_log  (cost=0.00..98184.12 rows=1016163 width=4)


OBTW: how do I interpret the cost - the manual says it is the "planner's
guess at how long it will take to run the statement (measured in units of
disk page fetches)".

Not sure I understand "(measured in units of disk page fetches)".


Thanks,
Steve



Re: [GENERAL] best way to query

2008-01-25 Thread Steve Clark

Tom Lane wrote:

Steve Clark <[EMAIL PROTECTED]> writes:


Tom Lane wrote:


Also, whatever is the ORDER BY for?




without the order by it wants to do a seq scan of t_unit_event_log.
see below:
 explain select count(*) from t_event_ack_log where event_log_no not in
 (select event_log_no from t_unit_event_log);

                                     QUERY PLAN
------------------------------------------------------------------------------------
 Aggregate  (cost=12144872193.82..12144872193.82 rows=1 width=0)
   ->  Seq Scan on t_event_ack_log  (cost=0.00..12144871485.07 rows=283497 width=0)
         Filter: (NOT (subplan))
         SubPlan
           ->  Seq Scan on t_unit_event_log  (cost=0.00..40286.56 rows=1021156 width=4)
(5 rows)



Hmm, the big problem with that isn't the seqscan but the lack of a
Materialize step to buffer it; which says to me that you're running a
pretty old Postgres version (8.0 or older).  You should think about an
update if you're running into performance issues.

regards, tom lane




Hi Tom,

Actually this is

 show server_version;
 server_version
----------------
 8.2.5

On FreeBSD 6.2.

And Daniel gave me a query that does the job in just a few seconds.


Thanks,
Steve






Re: [GENERAL] 8.4 Data Not Compatible with 9.0.1 Upgrade?

2010-11-02 Thread Steve Clark

On 11/01/2010 02:49 PM, Carlos Mennens wrote:

On Mon, Nov 1, 2010 at 2:27 PM, Steve Crawford
  wrote:

   

I'm guessing you are missing an initdb. Move your old data directory
somewhere else for now and do a new initdb so you can start up version 9.
 

When you say 'old data' can you be more specific as to the path and
possible files I need to move?

I go to '/var/lib/postgres/data/' directory however I am not sure
where from that folder structure I need to start moving files away
without breaking basic server functionality&  connection data.

   

mv /var/lib/postgres/data  /var/lib/postgres/data.old

You will then have to do an initdb to create the basic 9.x databases.
You can then use psql or pg_restore depending on how you dumped
your data to restore your databases.

With fedora you use either:
/etc/init.d/postgresql initdb
or
service postgresql initdb
to initialize the 9.x database system.

HTH
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] Linux

2010-11-04 Thread Steve Clark

On 11/04/2010 11:10 AM, Bill Moran wrote:

In response to Michael Gould:

   

I know that this is probably a "religion" issue but we are looking to move
Postgres to a Linux server.  We currently have a Windows 2008 R2 active
directory and all of the other servers are virtualized via VMWare ESXi.  One
of the reasons is that we want to use a 64 bit Postgres server and the UUID
processing contrib module does not provide a 64 bit version for Windows.  I
would also assume that the database when properly tuned will probably run
faster in a *inx environment.

What and why should I look at certain distributions?  It appears from what I
read, Ubuntu is a good desktop but not a server.
 


I use FreeBSD everywhere, and have over 10 years experience running
PostgreSQL on FreeBSD ... I've been extremely happy with how well
the two work together, including upgrade paths, performance, security,
and customizability.  I currently manage over 20 FreeBSD+PostgreSQL
servers at work.


If you're married to Linux, remember that PostgreSQL has had a pretty
tight relationship with Red Hat for a while now.

Beyond that, I think that any Linux distro that caters to a server
environment will work well for you.

The thing (in my experience) that's going to make you happy or angry
is how well the packaging system works.  Find a distro whos packaging
system keeps up to date with PostgreSQL releases and value adds stuff
to make upgrading, management, and migration easier and you'll probably
have a distro that you'll be happy with.

   
We have used FreeBSD but are moving to CentOS. Main reason is longer 
support window.
FreeBSD usually goes EOL in a year or two. CentOS 5.x is supported thru 
at least 2014.



--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] identifying local connections

2010-11-15 Thread Steve Clark

On 11/15/2010 11:00 AM, Scott Ribe wrote:

On Nov 15, 2010, at 8:50 AM, Tom Lane wrote:

   

netstat will probably work for this, depending on what platform you're on.
 

OS X. I can see the connections, but I don't see an option to display PIDs.

   

netstat -an will do it on linux.

sockstat will do it on FreeBSD.

What's OS X ? ;-)

--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] dotted quad netmask conversion

2010-12-07 Thread Steve Clark

Hello All,

I am working with a postgresql database that has two columns.
One for an ip address and another for the netmask. Both of these
columns are char varying(30). I would like to convert to just
one column as inet. Any one know a slick way to convert the
dotted quad netmask, E.G. 255.255.128.0, to number of bits. I didn't
see any way directly looking at the network functions in the PG 
documentation.


Thanks in advance for your consideration.
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] dotted quad netmask conversion

2010-12-07 Thread Steve Clark

On 12/07/2010 10:30 AM, Jon Nelson wrote:

On Tue, Dec 7, 2010 at 9:17 AM, Steve Clark  wrote:
   

Hello All,

I am working with a postgresql database that has two columns.
One for an ip address and another for the netmask. Both of these
columns are char varying(30). I would like to convert to just
one column as inet. Any one know a slick way to convert the
dotted quad netmask, E.G. 255.255.128.0, to number of bits. I didn't
see any way directly looking at the network functions in the PG
documentation.
 


select '1.2.3.4'::inet&  '255.255.128.0'::inet;
or
select CAST('1.2.3.4' AS INET)&  CAST('255.255.128.0' AS INET);

Be aware that CIDR representation is not as granular as netmask.

http://www.postgresql.org/docs/8.4/interactive/functions-net.html

   
Thanks for the response Jon. I should have stated this is PG 8.1.x, and '&'
doesn't exist for network functions.

select CAST('1.2.3.4' AS INET) & CAST('255.255.128.0' AS INET);
ERROR:  operator does not exist: inet & inet
HINT:  No operator matches the given name and argument type(s). You may 
need to add explicit type casts.




--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] dotted quad netmask conversion

2010-12-07 Thread Steve Clark

On 12/07/2010 11:43 AM, Tom Lane wrote:

Steve Clark  writes:
   

Thanks for the response Jon. I should have stated this PG 8.1.x and '&'
doesn't exist for network functions.
 

I don't think&  does what you want anyway.  It just does a bit AND on
the two addresses, it doesn't change the masklen property.

There's probably only a small number of distinct netmasks you actually
need to handle in this conversion.  What I'd suggest is writing a simple
function with a CASE statement to translate netmask to an integer mask
length, and then you can use set_masklen to merge that result into the
address value.

regards, tom lane

   

Googling on the net I found a couple of functions that with tweaks for 8.1
seem to work.

CREATE OR REPLACE FUNCTION inet_to_longip(v_t INET)
RETURNS BIGINT AS
$inet_to_longip$
DECLARE
t1 TEXT;
t2 TEXT;
t3 TEXT;
t4 TEXT;
i BIGINT;

BEGIN
t1 := SPLIT_PART(HOST(v_t), '.',1);
t2 := SPLIT_PART(HOST(v_t), '.',2);
t3 := SPLIT_PART(HOST(v_t), '.',3);
t4 := SPLIT_PART(HOST(v_t), '.',4);
i := (t1::BIGINT << 24) + (t2::BIGINT << 16) +
(t3::BIGINT << 8) + t4::BIGINT;
RETURN i;
END;
$inet_to_longip$ LANGUAGE plpgsql STRICT IMMUTABLE;

CREATE OR REPLACE FUNCTION netmask_bits(v_i BIGINT)
RETURNS INTEGER AS
$netmask_msb$
DECLARE
n INTEGER;

BEGIN
n := (32-log(2, 4294967296 - v_i ))::integer;
RETURN n;
END;
$netmask_msb$ LANGUAGE plpgsql STRICT IMMUTABLE;

Which seems to do the trick.

select netmask_bits(inet_to_longip('255.255.255.0'));
 netmask_bits
--------------
           24

select netmask_bits(inet_to_longip('255.255.128.0'));
 netmask_bits
--------------
           17
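
Combining them with set_masklen, per Tom's suggestion, yields the single inet
value; roughly (a sketch, with ip and mask standing in for the two existing
varchar columns):

select set_masklen(ip::inet, netmask_bits(inet_to_longip(mask::inet)))
from mytable;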


Thanks all.




--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] how to vacuum from standalone backend

2010-12-14 Thread Steve Clark

Help!

This is postgresql 8.1.3. The database debug can easily be recreated if I
have to drop it, but I can't seem to do that either.

 vacuumdb debug
vacuumdb: could not connect to database debug: FATAL:  database is not 
accepting commands to avoid wraparound data loss in database "debug"
HINT:  Stop the postmaster and use a standalone backend to vacuum 
database "debug".


I am getting the above message. I am not quite sure how to proceed.

I did the following:
postgres -D /usr/local/pgsql/data debug

WARNING:  database "debug" must be vacuumed within 100 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 100 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".


PostgreSQL stand-alone backend 8.1.3
backend> vacuum full debug;

backend> vacuum full debug;
WARNING:  database "debug" must be vacuumed within 99 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

ERROR:  relation "debug" does not exist
backend> vacuum full;
WARNING:  database "debug" must be vacuumed within 98 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 97 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 96 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 95 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 94 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 93 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 92 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 91 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 90 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 89 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 88 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 87 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 86 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 85 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 84 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 83 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

ERROR:  could not access status of transaction 449971277
DETAIL:  could not open file "pg_clog/01AD": No such file or directory


Now what?

Thanks in advance.


--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] how to vacuum from standalone backend

2010-12-14 Thread Steve Clark

On 12/14/2010 01:04 PM, Steve Clark wrote:

Help!

This is postgresql 8.1.3. The database debug can easily be recreated if I
have to drop it, but I can't seem to do that either.

 vacuumdb debug
vacuumdb: could not connect to database debug: FATAL:  database is not 
accepting commands to avoid wraparound data loss in database "debug"
HINT:  Stop the postmaster and use a standalone backend to vacuum 
database "debug".


I am getting the above message. I am not quite sure how to proceed.

I did the following:
postgres -D /usr/local/pgsql/data debug

WARNING:  database "debug" must be vacuumed within 100 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

WARNING:  database "debug" must be vacuumed within 100 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".


PostgreSQL stand-alone backend 8.1.3
backend> vacuum full debug;

backend> vacuum full debug;
WARNING:  database "debug" must be vacuumed within 99 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

ERROR:  relation "debug" does not exist
backend> vacuum full;
WARNING:  database "debug" must be vacuumed within 98 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in 
"debug".

[snip - repeated WARNING/HINT lines counting down from 97 to 83, quoted in
full above]

ERROR:  could not access status of transaction 449971277
DETAIL:  could not open file "pg_clog/01AD": No such file or directory


Now what?

Thanks in advance.



Please disregard  - figured out how to drop the database.
drop database debug - not dropdb debug - duh!




--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] 7.4 quoting

2011-03-31 Thread Steve Clark

Hi List,

I am having a problem trying to do the following:
ssh postgres@192.168.198.93 'psql -Atc "select a.interface, a.source_ip,a.dest_ip 
from kernel_gre a, ospfd_interface b where a.interface = b.interface and config ilike 
'%cost 50%';" config.db'

bash turns it into this:
ssh postgres@192.168.198.93 'psql -Atc "select a.interface, a.source_ip,a.dest_ip 
from kernel_gre a, ospfd_interface b where a.interface = b.interface and config ilike 
%cost' '50%;" config.db'

I tried various things using \' or '' or ''' but none work.
I have even tried exchanging the outer " with ' and the inner "select to 
'select but still no success.

Any ideas?

With 8.x databases I use \$\$%cost 50%\$\$ which work great.
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] 7.4 quoting

2011-03-31 Thread Steve Clark

On 03/31/2011 02:06 PM, Arjen Nienhuis wrote:

On Thu, Mar 31, 2011 at 20:05, Arjen Nienhuis  wrote:

On Thu, Mar 31, 2011 at 18:56, Steve Clark  wrote:

Hi List,

I am having a problem trying to do the following:
ssh postgres@192.168.198.93 'psql -Atc "select a.interface,
a.source_ip,a.dest_ip from kernel_gre a, ospfd_interface b where a.interface
= b.interface and config ilike '%cost 50%';" config.db'

bash turns it into this:
ssh postgres@192.168.198.93 'psql -Atc "select a.interface,
a.source_ip,a.dest_ip from kernel_gre a, ospfd_interface b where a.interface
= b.interface and config ilike %cost' '50%;" config.db'

try:

ssh postgres@192.168.198.93 'psql -Atc "select a.interface,
a.source_ip,a.dest_ip from kernel_gre a, ospfd_interface b where
a.interface = b.interface and config ilike '%cost 50%';" config.db'

which bash turns into:

ssh postgres@192.168.198.93 psql -Atc "select a.interface,
a.source_ip,a.dest_ip from kernel_gre a, ospfd_interface b where
a.interface = b.interface and config ilike '%cost 50%'" config.db


Argh!

I mean:

ssh postgres@192.168.198.93 psql -Atc \"select a.interface,
a.source_ip,a.dest_ip from kernel_gre a, ospfd_interface b where
a.interface = b.interface and config ilike \'%cost 50%\'\" config.db


Great! that did it thanks a bunch!!!

Regards,
Steve

--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] order by question

2014-08-07 Thread Steve Clark

Hello,

I am confused by how postgres 8.4.13 is sorting my data.

\d test
Table "public.test"
 Column | Type | Modifiers
+--+---
 data   | text |

select * from test order by data;
   data
--

 -
 --
 1
 11
 11F
 1F
 a
 b
 C
 F
 -F
  Feneric
  Generic
(14 rows)

The first row is a single space, the next row a single -, the next two -- .
What I don't understand is why the '-F', the ' Feneric' and the ' Generic' sort 
where they do.

I would expect the output to be like this:

   data
--

  Feneric
  Generic
 -
 --
 -F
 1
 11
 11F
 1F
 a
 b
 C
 F
(14 rows)

client_encoding
-
 SQL_ASCII

 lc_collate
-
 en_US.UTF-8

foxboxconfig=# show lc_ctype;
  lc_ctype
-
 en_US.UTF-8


Thanks for any clarification.


--
Stephen Clark









Re: [GENERAL] order by question

2014-08-07 Thread Steve Clark

On 08/07/2014 11:36 AM, Kevin Grittner wrote:

Steve Clark  wrote:


I am confused by how postgres 8.4.13 is sorting my data.
select * from test order by data;
  data
--

-
--
1
11
11F
1F
a
b
C
F
-F
Feneric
Generic
(14 rows)

The first row is a single space, the next row a single -, the next two -- .
What I don't understand is why the '-F', the ' Feneric' and
the ' Generic' sort where they do.

I would expect the output to be like this:

  data
--

Feneric
Generic
-
--
-F
1
11
11F
1F
a
b
C
F
(14 rows)
lc_collate
-
en_US.UTF-8

PostgreSQL uses the OS collations.  What you are getting matches my
Ubuntu 14.04 machine:

kgrittn@Kevin-Desktop:~/pg/master$ echo $LANG
en_US.UTF-8
kgrittn@Kevin-Desktop:~/pg/master$ sort <<XXX
Feneric
Generic
-
--
-F
1
11
11F
1F
a
b
C
F
XXX

   -
   --
   1
   11
   11F
   1F
   a
   b
   C
   F
   -F
   Feneric
   Generic

--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


Hi Kevin,

Thanks for the response. I get the same thing as postgres if I sort from the 
command line too. But I don't understand why.

I would expect '-F' to sort immediately after '-' and '--', not after 'F',
and ' Feneric' (spaceFeneric) and ' Generic' (spaceGeneric) to sort
immediately after ' ' (space).

It is like the space character and the '-' in '-Letter' are ignored.
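
That is essentially what en_US collation does: letters and digits are compared
first, and whitespace/punctuation only break ties. On 8.4 (no per-query
COLLATE clause) one workaround for byte order is sorting on the byte
representation, e.g. (untested sketch):

select * from test order by convert_to(data, 'UTF8');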


--
Stephen Clark





[GENERAL] noobie question

2013-01-24 Thread Steve Clark

Hi list,

This may be really simple - I usually do it using a procedural language such as 
php or a bash script.

Say I have a table that has 2 columns like
create table "foo" (
  id integer not null,
  name text
);
CREATE UNIQUE INDEX "foo_pkey" on "foo" using btree ( "id" "int4_ops" );

with 10 rows of data where id is 1 to 10.

Now I want to insert a new row ahead of id 5 so I have to renumber the rows
from 5 to 10 increasing each by one.

Or the opposite I want to delete a row an renumber to close up the gap.

Is there an easy way to do this in postgresql without resorting to some
external language?

Thanks for your consideration.



--
Stephen Clark





Re: [GENERAL] noobie question

2013-01-24 Thread Steve Clark

On 01/24/2013 12:36 PM, Jeff Janes wrote:

On Thu, Jan 24, 2013 at 8:53 AM, Chris Angelico  wrote:

On Fri, Jan 25, 2013 at 3:47 AM, Steve Clark  wrote:

Say I have a table that has 2 columns like
create table "foo" (
   id integer not null,
   name text
);
CREATE UNIQUE INDEX "foo_pkey" on "foo" using btree ( "id" "int4_ops" );

with 10 rows of data where id is 1 to 10.

Now I want to insert a new row ahead of id 5 so I have to renumber the rows
from 5 to 10 increasing each by one.

Or the opposite I want to delete a row an renumber to close up the gap.

Is there an easy way to do this in postgresql without resorting to some
external language?

This is sounding, not like an ID, but like a "position" marker or
something. It's most certainly possible; all you need is a searched
update:

UPDATE foo SET id=id+1 WHERE id>=5;
INSERT INTO foo VALUES (5,'new item at pos 5');

To do this reliably, you would have to set the unique constraint to
DEFERRABLE INITIALLY DEFERRED, otherwise you will get errors due to
transient duplicates.
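
A sketch of that, assuming the unique index is replaced by an equivalent
constraint (constraint name illustrative):

ALTER TABLE foo ADD CONSTRAINT foo_id_unique UNIQUE (id)
    DEFERRABLE INITIALLY DEFERRED;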

If his design requires that this kind of update be done regularly, he
should probably reconsider that design.

Cheers,

Jeff



Thanks All,

This is for a few very small tables, less than 100 records each, that a user
can delete and insert records into based on the "id", which is displayed in a
php-generated html screen. The tables are rarely updated, and when they are
updated only one person is accessing them at a time.

I have seen several answers on inserting what about deleting?

--
Stephen Clark





Re: [GENERAL] noobie question

2013-01-24 Thread Steve Clark

On 01/24/2013 01:06 PM, Chris Angelico wrote:

On Fri, Jan 25, 2013 at 4:45 AM, Steve Clark  wrote:

Thanks All,

This is for a few very small tables, less 100 records each, that a user can
delete and insert records into based on the "id"
which is displayed in a php generated html screen. The tables are rarely
updated and when they are updated only one person
is accessing them at a time.

I have seen several answers on inserting what about deleting?

Deleting works exactly the same way; you just subtract instead of adding.
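
For instance (sketch, removing the row at position 5):

DELETE FROM foo WHERE id = 5;
UPDATE foo SET id = id - 1 WHERE id > 5;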

And thanks Jeff, I forgot about that requirement. Still, searched
update is the easiest solution.

However, do seriously rethink your design. At very least, the "id"
field is misnamed; it's not the record's identity if it changes. If
your only two operations are "insert" and "delete" (with inserts
permitted at either end of the list as well as in the middle), one way
you could do it is to have a serially-numbered ID, and a 'pos'. Adding
to the end means inserting a row with a pos one higher than the
current highest. Inserting a record before another one means inserting
a row with the same pos - no renumbering needed. Deleting a row is
done by its id, not its position. And when you query the table, just
ask for them "ORDER BY POS, ID DESC" - this will show them in the
right order. This doesn't, however, handle arbitrary reordering of
records. For that, you will ultimately need to renumber the positions.

ChrisA



Hi Chris,

It is really called rule_num and relates to the order in which firewall rules
are applied. It is used to allow the user to place the firewall rules where
they want them in relation to other rules.

This is an old design, of which I had no input, but am now maintaining. Like I 
said initially I have
php, bash or C code to do the reordering and was just wondering if there was a 
slick way to
do it without having to resort to some external mechanism.

Thanks to all who responded.

--
Stephen Clark





[GENERAL] limit based on count(*)

2013-02-22 Thread Steve Clark

Hello List,

I have a query that counts how many time an event occurs in our event_log these
are grouped by the serial number of the device that created the event. I would
like to show only the rows where the number of events exceeds some threshold.

simplified query:
select serial_no, count(*) as "restarts" from event_log where event_mesg ilike 
'system sta%' and event_date > current_date - 7
group by serial_no order by restarts;

So what I would like to see is only the serial_nos that had more than X 
restarts.

Any ideas would be appreciated.

--
Stephen Clark





Re: [GENERAL] limit based on count(*)

2013-02-22 Thread Steve Clark

On 02/22/2013 11:14 AM, Russell Keane wrote:



> select serial_no, count(*) as "restarts" from event_log
> where event_mesg ilike 'system sta%' and event_date > current_date - 7
> group by serial_no
> having count(*) > X
> order by restarts;

I think having is the better option.


Thanks all, didn't know about having - I am a noobie with SQL.

--
Stephen Clark



Re: [GENERAL] Deploying PostgreSQL on CentOS with SSD and Hardware RAID

2013-05-10 Thread Steve Clark

On 05/10/2013 12:46 PM, Merlin Moncure wrote:

On Fri, May 10, 2013 at 11:34 AM, Evan D. Hoffman
 wrote:

I'd expect to use a RAID controller with either BBU or NVRAM cache to handle
that, and that the server itself would be on UPS for a production DB.  That
said, a standby replica DB on conventional disk is definitely a good idea in
any case.

Sadly, NVRAM cache doesn't help (unless the raid controller is
managing drive writes down to the flash level, and no such products
exist that I am aware of).  The problem is that to provide guarantees
the raid controller still needs to be able to tell the device to flush
down to physical storage.  While flash drives can be configured to do
that (basically write-through mode), it's pretty silly to do so as it
will ruin performance and quickly destroy the drive.

Trusting UPS is up to you, but if your UPS dies, someone knocks the
power cable, etc., you have data loss.  With an on-drive capacitor you
only get data loss via physical damage or corruption on the drive.

merlin


Well, we have dual redundant power supplies on separate UPSes, so could
something go wrong? Yes, but a tornado could come along and destroy the
building too.


--
Stephen Clark
*NetWolves*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] what \ command actually run

2013-08-16 Thread Steve Clark

Hello,

I seem to recall seeing somewhere that you can turn on an option that
will let you see what the \ commands actually run, but googling and doing
a quick scan of the docs didn't turn it up. Could someone assist me on
this?
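
(For the archives: this should be psql's -E/--echo-hidden option, or
equivalently \set ECHO_HIDDEN on inside psql, which prints the queries the
backslash commands generate.)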

Thanks,
--
Stephen Clark
*NetWolves*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] psql - looking in wrong place for socket

2017-03-17 Thread Steve Clark
Hi List,

I am running postgresql 8.4.20 on CentOS 6. Things have been running fine for a 
long time
then I rebooted. Postgres came up but when I tried to connect with psql on the 
local machine
I got

psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

The socket is actually being created (and always has been) in /tmp

I worked around the problem temporarily by
ln -s /tmp/.s.PGSQL.5432 /var/run/postgresql/.s.PGSQL.5432

What controls where psql looks for the socket?

Thanks,
Steve






Re: [GENERAL] psql - looking in wrong place for socket

2017-03-17 Thread Steve Clark
On 03/17/2017 09:49 AM, Adrian Klaver wrote:
> On 03/17/2017 06:42 AM, Steve Clark wrote:
>> Hi List,
>>
>> I am running postgresql 8.4.20 on CentOS 6. Things have been running fine 
>> for a long time
>> then I rebooted. Postgres came up but when I tried to connect with psql on 
>> the local machine
>> I got
>>
>> psql: could not connect to server: No such file or directory
>> Is the server running locally and accepting
>> connections on Unix domain socket 
>> "/var/run/postgresql/.s.PGSQL.5432"?
>>
>> The socket is actually being created (and always has been) in /tmp
> So you built Postgres from source?
No - it is installed using yum.

It looks like the problem is someone loaded postgresql-9.5 and psql was using:

psql  14971  postgres  mem  REG  8,4  193296  950807  /usr/pgsql-9.5/lib/libpq.so.5.8

the libpq from 9.5.

By removing the ln and using

# rm /var/run/postgresql/.s.PGSQL.5432
rm: remove symbolic link `/var/run/postgresql/.s.PGSQL.5432'? y

$ psql
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

$ LD_LIBRARY_PATH=/usr/lib64 psql
psql (8.4.20)
Type "help" for help.

postgres=#

It works OK again.

So now I know what caused the problem.
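
(An alternative to the symlink is pointing libpq at the right directory
explicitly, e.g. psql -h /tmp, or export PGHOST=/tmp.)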

Thanks,
Steve
>> I worked around the problem temporarily by
>> ln -s /tmp/.s.PGSQL.5432 /var/run/postgresql/.s.PGSQL.5432
>>
>> What controls where psql looks for the socket?
> https://www.postgresql.org/message-id/23876.1488949292%40sss.pgh.pa.us
>
> "With the default configure options you used, the postmaster would have 
> put its Unix socket file into /tmp, not /var/run.  I wonder whether your
> problem is that you're trying to connect to it with distro-supplied
> psql+libpq that expects to find the Unix socket in /var/run."
>
>
>
>> Thanks,
>> Steve
>>
>>
>>
>>
>


-- 
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] psql - looking in wrong place for socket

2017-03-17 Thread Steve Clark
On 03/17/2017 10:14 AM, Adrian Klaver wrote:
> On 03/17/2017 06:58 AM, Steve Clark wrote:
>> On 03/17/2017 09:49 AM, Adrian Klaver wrote:
>>> On 03/17/2017 06:42 AM, Steve Clark wrote:
>>>> Hi List,
>>>>
>>>> I am running postgresql 8.4.20 on CentOS 6. Things have been running fine 
>>>> for a long time
>>>> then I rebooted. Postgres came up but when I tried to connect with psql on 
>>>> the local machine
>>>> I got
>>>>
>>>> psql: could not connect to server: No such file or directory
>>>> Is the server running locally and accepting
>>>> connections on Unix domain socket 
>>>> "/var/run/postgresql/.s.PGSQL.5432"?
>>>>
>>>> The socket is actually being created (and always has been) in /tmp
>>> So you built Postgres from source?
>> No - it is installed using yum.
>>
>> It looks like the problem is someone loaded postgresql-9.5 and psql was
>> using:
>> psql14971 postgres  memREG8,4   193296   950807
>> /usr/pgsql-9.5/lib/libpq.so.5.8
>>
>> the libpq from 9.5.
>>
>> By removing the ln and using
>>
>> # rm /var/run/postgresql/.s.PGSQL.5432
>> rm: remove symbolic link `/var/run/postgresql/.s.PGSQL.5432'? y
>>
>> $ psql
>> psql: could not connect to server: No such file or directory
>> Is the server running locally and accepting
>> connections on Unix domain socket
>> "/var/run/postgresql/.s.PGSQL.5432"?
>>
>> $ LD_LIBRARY_PATH=/usr/lib64 psql
>> psql (8.4.20)
> You probably already know, but 8.4 is ~2 1/2 past EOL. Also the last 
> release in that series was 8.4.22.
>
>
Hmm... maybe you missed the fact that I am running CentOS 6. It appears 8.4.20
is the latest official release there.

$ rpm -qa|grep postgres
postgresql-server-8.4.20-6.el6.x86_64

$ sudo yum update postgresql-server
Loaded plugins: fastestmirror, refresh-packagekit
Setting up Update Process
Loading mirror speeds from cached hostfile
 * atomic: www6.atomicorp.com
 * base: repo1.dal.innoscale.net
 * elrepo: iad.mirror.rackspace.com
 * epel: mirror.nodesdirect.com
 * extras: mirror.sjc02.svwh.net
 * updates: mirror.millry.co
No Packages marked for Update





[GENERAL] Why so long?

2017-04-19 Thread Steve Clark
Hello,

I am confused. I have a table that has an incrementing primary key id.

When I select max(id) from the table it returns almost instantly but
when I select min(id) from the table it takes longer than I want to wait.

Shouldn't postgresql be able to quickly find the minimum id value in the index?


pmacct=# explain select max(id) from netflow;
QUERY PLAN  
 
--
 Result  (cost=1.13..1.14 rows=1 width=0)
   InitPlan 1 (returns $0)
 ->  Limit  (cost=0.71..1.13 rows=1 width=8)
   ->  Index Only Scan Backward using netflow_pkey on netflow  
(cost=0.71..3799108784.10 rows=9123246080 width=8)
 Index Cond: (id IS NOT NULL)
(5 rows)

pmacct=# explain select min(id) from netflow;
   QUERY PLAN   

-
 Result  (cost=1.13..1.14 rows=1 width=0)
   InitPlan 1 (returns $0)
 ->  Limit  (cost=0.71..1.13 rows=1 width=8)
   ->  Index Only Scan using netflow_pkey on netflow  
(cost=0.71..3799108784.10 rows=9123246080 width=8)
 Index Cond: (id IS NOT NULL)
(5 rows)

pmacct=# \timing
Timing is on.
pmacct=# select max(id) from netflow;
 max
-
 17547256873
(1 row)

Time: 0.626 ms



pmacct=# select min(id) from netflow;
^CCancel request sent
ERROR:  canceling statement due to user request
Time: 339114.334 ms


   Table "public.netflow"
 Column |Type |  
Modifiers 
+-+-
 id | bigint  | not null default 
nextval('netflow_id_seq'::regclass)
 agent_id   | bigint  |
 bytes  | bigint  |
 stamp_inserted | timestamp without time zone | not null default '0001-01-01 
00:00:00'::timestamp without time zone
 stamp_updated  | timestamp without time zone |
 packets| integer | default 0
 port_src   | integer | default 0
 port_dst   | integer | default 0
 ip_proto   | smallint| default 0
 tos| smallint| default 0
 ip_src | inet| not null default '0.0.0.0'::inet
 ip_dst | inet| not null default '0.0.0.0'::inet
Indexes:
"netflow_pkey" PRIMARY KEY, btree (id)
"netflow_ts_key" btree (stamp_inserted)
"netflow_tsu_idx" btree (stamp_updated)
Triggers:
netflow_import AFTER INSERT OR UPDATE ON netflow FOR EACH STATEMENT EXECUTE 
PROCEDURE netflow_update()



-- 



Re: [GENERAL] Why so long?

2017-04-19 Thread Steve Clark
I should add that this is version 9.4.10 of postgresql.

On 04/19/2017 11:24 AM, Steve Clark wrote:
> Hello,
>
> I am confused. I have a table that has an incrementing primary key id.
>
> When I select max(id) from the table it returns almost instantly but
> when I select min(id) from the table it takes longer than I want to wait.
>
> Shouldn't postgresql be able to quickly find the minimum id value in the 
> index?
>
>
> pmacct=# explain select max(id) from netflow;
> QUERY PLAN
>
> --
>  Result  (cost=1.13..1.14 rows=1 width=0)
>InitPlan 1 (returns $0)
>  ->  Limit  (cost=0.71..1.13 rows=1 width=8)
>->  Index Only Scan Backward using netflow_pkey on netflow  
> (cost=0.71..3799108784.10 rows=9123246080 width=8)
>  Index Cond: (id IS NOT NULL)
> (5 rows)
>
> pmacct=# explain select min(id) from netflow;
>QUERY PLAN 
>   
> -
>  Result  (cost=1.13..1.14 rows=1 width=0)
>InitPlan 1 (returns $0)
>  ->  Limit  (cost=0.71..1.13 rows=1 width=8)
>->  Index Only Scan using netflow_pkey on netflow  
> (cost=0.71..3799108784.10 rows=9123246080 width=8)
>  Index Cond: (id IS NOT NULL)
> (5 rows)
>
> pmacct=# \timing
> Timing is on.
> pmacct=# select max(id) from netflow;
>  max
> -
>  17547256873
> (1 row)
>
> Time: 0.626 ms
>
>
>
> pmacct=# select min(id) from netflow;
> ^CCancel request sent
> ERROR:  canceling statement due to user request
> Time: 339114.334 ms
>
>
>                                Table "public.netflow"
>      Column     |            Type             |                      Modifiers
> ----------------+-----------------------------+------------------------------------------------------
>  id             | bigint                      | not null default nextval('netflow_id_seq'::regclass)
>  agent_id       | bigint                      |
>  bytes          | bigint                      |
>  stamp_inserted | timestamp without time zone | not null default '0001-01-01 00:00:00'::timestamp without time zone
>  stamp_updated  | timestamp without time zone |
>  packets        | integer                     | default 0
>  port_src       | integer                     | default 0
>  port_dst       | integer                     | default 0
>  ip_proto       | smallint                    | default 0
>  tos            | smallint                    | default 0
>  ip_src         | inet                        | not null default '0.0.0.0'::inet
>  ip_dst         | inet                        | not null default '0.0.0.0'::inet
> Indexes:
>     "netflow_pkey" PRIMARY KEY, btree (id)
>     "netflow_ts_key" btree (stamp_inserted)
>     "netflow_tsu_idx" btree (stamp_updated)
> Triggers:
>     netflow_import AFTER INSERT OR UPDATE ON netflow FOR EACH STATEMENT EXECUTE PROCEDURE netflow_update()
>
>
>
> -- 
>


-- 
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] Why so long?

2017-04-19 Thread Steve Clark
On 04/19/2017 11:57 AM, Jeff Janes wrote:
> On Wed, Apr 19, 2017 at 8:24 AM, Steve Clark  <mailto:steve.cl...@netwolves.com>> wrote:
>
> Hello,
>
> I am confused. I have a table that has an incrementing primary key id.
>
> When I select max(id) from the table it returns almost instantly but
> when I select min(id) from the table it takes longer than I want to wait.
>
> Shouldn't postgresql be able to quickly find the minimum id value in the 
> index?
>
>
> Not if the low end of the index is stuffed full of obsolete entries, which 
> haven't been cleaned up because it is not being vacuumed often enough.
>
> Do you have autovacuum on?  Have you manually vacuumed the table recently?
>
> Cheers,
>
> Jeff
Hi Jeff,

Autovacuum is turned on.

 schemaname | relname | last_vacuum |        last_autovacuum        | vacuum_count | autovacuum_count
------------+---------+-------------+-------------------------------+--------------+------------------
 public     | netflow |             | 2017-04-11 01:18:53.261221-04 |            0 |                1


It is a large table.
select pg_size_pretty(pg_relation_size('netflow'));
 pg_size_pretty

 1267 GB

select pg_size_pretty(pg_total_relation_size('netflow_pkey'));
 pg_size_pretty

 287 GB

Regards,
Steve
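
If, as Jeff suggests, the low end of netflow_pkey is stuffed with dead
entries, a manual vacuum plus a more aggressive per-table autovacuum setting
is the usual remedy. A minimal sketch - the 0.01 scale factor is an
illustrative starting point for a table this large, not a tuned value:

VACUUM (VERBOSE) netflow;
ALTER TABLE netflow SET (autovacuum_vacuum_scale_factor = 0.01);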


[GENERAL] Schedule

2017-06-20 Thread Steve Clark
Hello,

We have customers whose equipment we monitor. Some of the customers don't run
a 24/7 operation and turn their equipment off when they go home. We need to
create a schedule for them of when we can ignore alerts from their equipment.
We use postgresql in our monitoring environment to maintain alerts and
equipment to be monitored. Each piece of equipment has a unique unit serial
number, so the schedule would be tied to this unit serial no.

I would be very interested in what might be the best way to organize a
scheduling table(s) in postgresql.

Thanks in advance,
Steve
-- 



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schedule

2017-06-20 Thread Steve Clark
On 06/20/2017 09:02 AM, Adrian Klaver wrote:
> On 06/20/2017 05:35 AM, Steve Clark wrote:
>> Hello,
>>
>> We have customers whose equipment we monitor. Some of the customers don't 
>> run a 24/7 operation
>> and turn their equipment off when they go home. We need to create a schedule 
>> for them of when we
>> can ignore alerts from their equipment. We use postgresql in our monitoring 
>> environment to maintain
>> alerts and equipment to be monitored. Each piece of equipment has a unique 
>> unit serial number so
>> the schedule would be tied to this unit serial no.
>>
>> I would be very interested in what might be the best way to organize a 
>> scheduling table(s) in postgresql.
> Some questions:
>
> 1) What happens if someone ignores the schedule and the alert is real?
That is up in the air for now. Probably, if our NOC wasn't informed by the
customer that they were working outside of the schedule, the alert would be
ignored - but then the customer would probably call us because something
wasn't working.
>
> 2) What are the alerts and how many are there?
Device not pingable, as an example. The alerts continue to be sent to our
monitoring system, typically at 2-minute intervals; the monitoring system
would look at the schedule for that unit and decide whether or not to ignore
the alert.
>
> 3) How is planned downtime during scheduled work times handled?
They would get a call from our NOC if the unit was down during scheduled 
uptimes.
>
> 4) Do you want to organize by customers or equipment or both?
We have one piece of equipment at each customer that monitors one to many 
devices at the customer.
>
> 5) What is the equipment and do you or the customer provide it?
We provide the monitoring equipment, we or the customer could provide the 
equipment being monitored.
>> Thanks in advance,
>> Steve
>>
>

Thanks for response.
Steve


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Schedule

2017-06-20 Thread Steve Clark
On 06/20/2017 10:38 AM, Adrian Klaver wrote:
> On 06/20/2017 07:00 AM, Steve Clark wrote:
>> On 06/20/2017 09:02 AM, Adrian Klaver wrote:
>>> On 06/20/2017 05:35 AM, Steve Clark wrote:
>>>> Hello,
>>>>
>>>> We have customers whose equipment we monitor. Some of the customers don't 
>>>> run a 24/7 operation
>>>> and turn their equipment off when they go home. We need to create a 
>>>> schedule for them of when we
>>>> can ignore alerts from their equipment. We use postgresql in our 
>>>> monitoring environment to maintain
>>>> alerts and equipment to be monitored. Each piece of equipment has a unique 
>>>> unit serial number so
>>>> the schedule would be tied to this unit serial no.
>>>>
>>>> I would be very interested in what might be the best way to organize a 
>>>> scheduling table(s) in postgresql.
>>> Some questions:
>>>
>>> 1) What happens if someone ignores the schedule and the alert is real?
>> That is up in the air for now, probably if our NOC wasn't informed by the 
>> customer they
>> were working outside of the schedule the alert would be ignored, but then 
>> the customer
>> would probably call us because something wasn't working.
> It might be just me, but it looks like a finger-pointing exercise in the 
> making. The classic '(Person 1)I thought you had it. (Person 2)No, I 
> thought you had it'. The whole idea of ignoring an alert makes me 
> nervous anyway. It seems that it should be possible to have the 
> equipment produce an manual off state and the monitoring to acknowledge 
> that. That being said, see more below.
>
>>> 2) What are the alerts and how many are there?
>> Device not pingable, as an example. The alerts continue to be sent to our
>> monitoring system, typically at 2 minute intervals, the monitoring system 
>> would look at the schedule for that
>> unit a decide whether or not to ignore the alert.
>>> 3) How is planned downtime during scheduled work times handled?
>> They would get a call from our NOC if the unit was down during scheduled 
>> uptimes.
> Could they not schedule a downtime?
Yes that would certainly be an option.
>
>>> 4) Do you want to organize by customers or equipment or both?
>> We have one piece of equipment at each customer that monitors one to many 
>> devices at the customer.
> So when you were talking about unique serial numbers, were you talking 
> about the monitoring equipment only or does that include the monitored 
> equipment?
>
>>> 5) What is the equipment and do you or the customer provide it?
>> We provide the monitoring equipment, we or the customer could provide the 
>> equipment being monitored.
> My first draft of an idea(I'm guessing some of this exists already):
>
> 1) Location/customer table. Not sure if a customer can have more than 
> one location.
>
> 2) Table of alerts and what they mean.
>
> 3) Schedule table keyed to location.
> To make life a good deal simpler I would use range types for the schedule:
> https://www.postgresql.org/docs/9.6/static/rangetypes.html
>
> Then you could use the range type operators and functions:
>
> https://www.postgresql.org/docs/9.6/static/functions-range.html#RANGE-OPERATORS-TABLE
>
> to verify whether an alert occurs in or out of the schedule.
>
> What I have not taken into account is whether a location has multiple 
> schedules e.g. weekday vs weekend. Then there is the holidays issue. Is 
> this something that needs to be dealt with?
>
> 4) Equipment table keyed to location.
We already have a monitoring system in place that has been in operation since
circa 2003. Just recently we have added a new class of customer whose
operation is not 24/7.

I envision the schedule could be fairly complicated, including weekends and
holidays, plus the end user might shut down for lunch, etc. I am looking for
more on how to organize the schedule, e.g. a standard weekly schedule with
exceptions for holidays, or a separate individual schedule for each week. I
also need to consider how easy the schedule is to maintain.

Thanks,
Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
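
Following Adrian's range-type suggestion, here is a minimal sketch of such a
schema, assuming a weekly pattern per unit plus date-based exceptions (all
table and column names are illustrative, and the hardcoded serial number
'SN123' is a placeholder):

-- a range of times-of-day; not built in, but easy to declare
CREATE TYPE timerange AS RANGE (subtype = time);

CREATE TABLE quiet_schedule (
    unit_serial_no varchar(15) NOT NULL,
    dow            int NOT NULL CHECK (dow BETWEEN 0 AND 6),  -- 0 = Sunday
    off_hours      timerange NOT NULL   -- alerts ignored during this window
);

CREATE TABLE quiet_exception (           -- holidays and one-off closures
    unit_serial_no varchar(15) NOT NULL,
    closed         daterange NOT NULL
);

-- should an alert for unit 'SN123' arriving now be ignored?
SELECT EXISTS (SELECT 1 FROM quiet_schedule
               WHERE unit_serial_no = 'SN123'
                 AND dow = EXTRACT(dow FROM now())::int
                 AND off_hours @> now()::time)
    OR EXISTS (SELECT 1 FROM quiet_exception
               WHERE unit_serial_no = 'SN123'
                 AND closed @> now()::date) AS ignore_alert;

Multiple rows per day of week cover lunchtime shutdowns, and weekday versus
weekend patterns are just different dow rows; only holidays need entries in
the exception table.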


[GENERAL] make postgresql 9.5 default on centos 7

2017-08-18 Thread Steve Clark
Hi List,

I loaded 9.5 on CentOS 7, but by default everything wants to use the 9.2
version that comes with CentOS 7.

Is there a simple way to fix this so the 9.5 versions of the tools and
libraries are used?

Thanks,
Steve



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
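
The usual answer, assuming the standard PGDG layout under /usr/pgsql-9.5: put
its bin directory first in PATH, e.g. in /etc/profile.d/ or a user's
.bash_profile:

export PATH=/usr/pgsql-9.5/bin:$PATH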


[GENERAL] warm standby examples.

2008-01-31 Thread Steve Clark

Hello List,

I am going to be setting up a two-server warm-standby high-availability
system on postgresql 8.2.5. I was wondering if anybody who has done this can
share some scripts, pertinent postgresql.conf entries, etc., so I don't have
to reinvent the wheel. I have read the manual a couple of times and it is a
lot to pull together.

Anything would be greatly appreciated.

Thanks,
Steve

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [GENERAL] warm standby examples.

2008-01-31 Thread Steve Clark

Erik Jones wrote:

On Jan 31, 2008, at 10:04 AM, Steve Clark wrote:



Hello List,

I am going to be setting up a two-server warm-standby high-availability
system on postgresql 8.2.5. I was wondering if anybody who has done this can
share some scripts, pertinent postgresql.conf entries, etc., so I don't have
to reinvent the wheel. I have read the manual a couple of times and it is a
lot to pull together.

Anything would be greatly appreciated.



The complexity in the docs comes from explaining what everything is
and how it all works.  There are a couple of options available to you:
use the walmgr.py portion of Skype's SkyTools package, which will
handle PITR backups from a primary to a single slave, or do it manually;
I'll cover the manual route here.  Actually getting a warm standby up
is a pretty simple process.


Pre-process recommendations:
a.) Use pg_standby for your restore_command in the recovery.conf file  
on the standby
b.) Set up your standby host's environment and directory structure  
exactly the same as your primary.  Otherwise you'll need to spend  
time changing any symlinks you've created on the primary for xlogs,  
tablespaces, or whatnot which is really just opportunity for error.
c.) Pre-configure both the postgresql.conf and recovery.conf files  
for your standby.  I usually keep all of my different config files  
for all of my different servers in a single, version-controlled  
directory that I can then check out and symlink to.  Again,  
consistent environment & directory setups make symlinks your best  
friend.
d.) Use ssh keys for simply, and safely, transferring files between  
hosts.

e.) Follow all of the advice in the manual wrt handling errors.

1.  Set archive_command in your postgresql.conf; rsync is a popular
choice, or you can just use one of the examples from the docs.  I use:
rsync -a %p [EMAIL PROTECTED]:/path/to/wal_archive/%f
2.  Reload your config -- either: SELECT pg_reload_conf(); from psql  
or: pg_ctl reload -D data_dir/

3.  Verify that the WALs are being shipped to their destination.
4.  In psql, SELECT pg_start_backup('some_label');
5.  Run your base backup.  Again, rsync is good for this with  
something as simple as: rsync -a --progress /path/to/data_dir/*  
[EMAIL PROTECTED]:/path/to/data_dir/
 I'd suggest running this in a screen term window, the --progress  
flag will let you watch to see how far along the rsync is. The -a  
flag will preserve symlinks as well as all file permissions & ownership.

6.  In psql, SELECT pg_stop_backup();
	-- this drops a file to be archived that will have the same name as  
the first WAL shipped after the call to pg_start_backup() with  
a .backup suffix.  Inside will be the start & stop WAL records  
defining the range of WAL files needed to be replayed before you can  
consider bringing the standby out of recovery.
7.  Drop in, or symlink, your recovery.conf file in the standby's  
data_dir.
	-- The restore command should use pg_standby (it's help/README are  
simple and to the point).  I'd recommend redirecting all output from  
pg_standby to a log file that you can then watch to verify that  
everything is working correctly once you've started things.

8.  Drop in, or symlink, your standby's postgresql.conf file.
8 a.) If you don't symlink your pg_xlog directory to write WALs to a  
separate drive, you can safely delete everything under data_dir/ 
pg_xlog on the standby host.
9. Start the standby db server with a normal: pg_ctl start -D /path/ 
to/data_dir/
10. run a: tail -f on your standby log and watch to make sure that  
it's replaying logs.  If everything's cool you'll see some info on  
each WAL file, in order, that the standby looks for along with  
'success' messages.  If it can't find the files for some reason,  
you'll see repeated messages like: 'WAL file not present yet.  
Checking for trigger file...' (assuming you set up pg_standby to look  
for a trigger file in your recovery_command).


Execute this entire process at least a couple times, bringing up the  
standby into normal operations mode once it's played through all of  
the necessary WAL files (as noted in the .backup file) so that you  
can connect to it and verify that everything looks good, before doing  
all of this and leaving it running indefinitely.  Once you do it a  
couple times, it becomes dirt simple.  If you have any questions  
about any of this, don't hesitate to ask.


Erik Jones

DBA | Emma®
[EMAIL PROTECTED]
800.595.4401 or 615.292.5888
615.292.0777 (fax)

Emma helps organizations everywhere communicate & market in style.
Visit us online at http://www.myemma.com





Thanks much Erik - this is exactly what I was looking for.


---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/
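
To flesh out step 7: the recovery.conf on the standby then needs little more
than a restore_command. A sketch with illustrative paths - the -t file is the
trigger file you later create to bring the standby out of recovery:

restore_command = 'pg_standby -d -t /tmp/pgsql.trigger.5432 /path/to/wal_archive %f %p 2>> /path/to/standby.log'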


Re: [GENERAL] ecpg problem

2008-02-29 Thread Steve Clark

Steve Clark wrote:

Hello List,

Don't know whether anyone here can help but... We have some code that 
has compiled and run just
fine from postgresql 7.3.x thru 8.2.6. It uses embedded sql. I just 
yesterday upgraded our test system to
8.3.0 and this code will no longer compile. Below is a standalone code 
fragment in which I have removed
everything but what is causing the problem. Any help or ideas would be 
appreciated.


exec sql include sqlca;

EXEC SQL WHENEVER NOT FOUND CONTINUE;

// this array is grown as messages from new units are received
// it is sorted by unit_serial_no ascending to support binary searches

int getUnitData()
{

 exec sql begin declare section;

 VARCHAR h_unit_serial_no  [ 15+1];
 // we do an array fetch on these; 30 is the max number for these
 int h_remote_int_netmask[4096];
 int h_local_int_netmask [4096];
 VARCHAR h_tunnel_active [4096][   1+1];
 VARCHAR h_tunnel_config_type[4096][   1+1];
 VARCHAR h_local_vpn_int_ip  [4096][  20+1];
 VARCHAR h_local_vpn_ext_ip  [4096][  20+1];
 VARCHAR h_remote_vpn_int_ip [4096][  20+1];
 VARCHAR h_remote_vpn_ext_ip [4096][  20+1];

 exec sql end declare section;

 exec sql select tunnel_active,
 tunnel_config_type,
 host(local_int_gw_ip),
 host(local_ext_gw_ip),
 host(remote_int_gw_ip),
 host(remote_ext_gw_ip),
 masklen(remote_int_gw_ip),
 masklen(local_int_gw_ip)
 into    :h_tunnel_active,
 :h_tunnel_config_type,
 :h_local_vpn_int_ip,
 :h_local_vpn_ext_ip,
 :h_remote_vpn_int_ip,
 :h_remote_vpn_ext_ip,
 :h_remote_int_netmask,
 :h_local_int_netmask
 from t_vpn_tunnel_status
 where unit_serial_no = :h_unit_serial_no
 order by oid;
 /*limit 30;*/
 return TRUE;
}
gmake -k ecpg_test.o
/usr/local/bin/ecpg -I/usr/local/include/pgsql -I/usr/local/include 
ecpg_test.pgc
mkdep -O2 -Wall -DDEBUG -I../include -I/usr/local/include/pgsql 
-I/usr/local/include -I../common crypt_file.c mailuser.c 
srm2_monitor_server.c putfiles.c srm2_server_funcs.c escalate.c 
packet_loss.c srm2_cron.c srm2_db_funcs.c srm2_monitor_db.c ecpg_test.c
g++ -O2 -Wall -DDEBUG -I../include -I/usr/local/include/pgsql 
-I/usr/local/include -I../common -c ecpg_test.c

ecpg_test.pgc: In function `int getUnitData()':
ecpg_test.pgc:36: error: invalid application of `sizeof' to incomplete 
type `varchar_h_tunnel_active'
ecpg_test.pgc:38: error: invalid application of `sizeof' to incomplete 
type `varchar_h_tunnel_config_type'
ecpg_test.pgc:40: error: invalid application of `sizeof' to incomplete 
type `varchar_h_local_vpn_int_ip'
ecpg_test.pgc:42: error: invalid application of `sizeof' to incomplete 
type `varchar_h_local_vpn_ext_ip'
ecpg_test.pgc:44: error: invalid application of `sizeof' to incomplete 
type `varchar_h_remote_vpn_int_ip'
ecpg_test.pgc:46: error: invalid application of `sizeof' to incomplete 
type `varchar_h_remote_vpn_ext_ip'

gmake: *** [ecpg_test.o] Error 1

Compilation exited abnormally with code 2 at Fri Feb 29 09:59:10


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq




Actually it appears to work in 8.2.5 but be broken in 8.2.6 and 8.3.0.

Steve

---(end of broadcast)---
TIP 4: Have you searched our list archives?

  http://archives.postgresql.org/


[GENERAL] ecpg problem

2008-02-29 Thread Steve Clark

Hello List,

Don't know whether anyone here can help but... We have some code that 
has compiled and run just
fine from postgresql 7.3.x thru 8.2.6. It uses embedded sql. I just 
yesterday upgraded our test system to
8.3.0 and this code will no longer compile. Below is a standalone code 
fragment in which I have removed
everything but what is causing the problem. Any help or ideas would be 
appreciated.


exec sql include sqlca;

EXEC SQL WHENEVER NOT FOUND CONTINUE;

// this array is grown as messages from new units are received
// it is sorted by unit_serial_no ascending to support binary searches

int getUnitData()
{

exec sql begin declare section;

VARCHAR h_unit_serial_no  [ 15+1];
// we do an array fetch on these; 30 is the max number for these
int h_remote_int_netmask[4096];
int h_local_int_netmask [4096];
VARCHAR h_tunnel_active [4096][   1+1];
VARCHAR h_tunnel_config_type[4096][   1+1];
VARCHAR h_local_vpn_int_ip  [4096][  20+1];
VARCHAR h_local_vpn_ext_ip  [4096][  20+1];
VARCHAR h_remote_vpn_int_ip [4096][  20+1];
VARCHAR h_remote_vpn_ext_ip [4096][  20+1];

exec sql end declare section;

exec sql select tunnel_active,
tunnel_config_type,
host(local_int_gw_ip),
host(local_ext_gw_ip),
host(remote_int_gw_ip),
host(remote_ext_gw_ip),
masklen(remote_int_gw_ip),
masklen(local_int_gw_ip)
into    :h_tunnel_active,
:h_tunnel_config_type,
:h_local_vpn_int_ip,
:h_local_vpn_ext_ip,
:h_remote_vpn_int_ip,
:h_remote_vpn_ext_ip,
:h_remote_int_netmask,
:h_local_int_netmask
from t_vpn_tunnel_status
where unit_serial_no = :h_unit_serial_no
order by oid;
/*limit 30;*/
return TRUE;
}
gmake -k ecpg_test.o
/usr/local/bin/ecpg -I/usr/local/include/pgsql -I/usr/local/include 
ecpg_test.pgc
mkdep -O2 -Wall -DDEBUG -I../include -I/usr/local/include/pgsql 
-I/usr/local/include -I../common crypt_file.c mailuser.c 
srm2_monitor_server.c putfiles.c srm2_server_funcs.c escalate.c 
packet_loss.c srm2_cron.c srm2_db_funcs.c srm2_monitor_db.c ecpg_test.c
g++ -O2 -Wall -DDEBUG -I../include -I/usr/local/include/pgsql 
-I/usr/local/include -I../common -c ecpg_test.c

ecpg_test.pgc: In function `int getUnitData()':
ecpg_test.pgc:36: error: invalid application of `sizeof' to incomplete 
type `varchar_h_tunnel_active'
ecpg_test.pgc:38: error: invalid application of `sizeof' to incomplete 
type `varchar_h_tunnel_config_type'
ecpg_test.pgc:40: error: invalid application of `sizeof' to incomplete 
type `varchar_h_local_vpn_int_ip'
ecpg_test.pgc:42: error: invalid application of `sizeof' to incomplete 
type `varchar_h_local_vpn_ext_ip'
ecpg_test.pgc:44: error: invalid application of `sizeof' to incomplete 
type `varchar_h_remote_vpn_int_ip'
ecpg_test.pgc:46: error: invalid application of `sizeof' to incomplete 
type `varchar_h_remote_vpn_ext_ip'

gmake: *** [ecpg_test.o] Error 1

Compilation exited abnormally with code 2 at Fri Feb 29 09:59:10


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] ecpg problem

2008-03-03 Thread Steve Clark

Michael Meskes wrote:

On Fri, Feb 29, 2008 at 11:27:25AM -0500, Steve Clark wrote:


Actually it appears to work in 8.2.5 but be broken in 8.2.6 and 8.3.0.



Are you really sure? It appears to me that there was no change between
8.2.5 and 8.2.6 that could affect ecpg's handling of arrays of varchar.

Michael

I am pretty sure - but I upgraded my test system so I can't say for sure.

Steve

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] ecpg problem

2008-03-03 Thread Steve Clark

Michael Meskes wrote:

I just committed the attached small fix to CVS HEAD and the 8.3 branch.
This should fix your problem.

Michael





diff --exclude CVS -ru /home/postgres/pgsql-ecpg/preproc/type.c preproc/type.c
--- /home/postgres/pgsql-ecpg/preproc/type.c    2007-12-21 15:33:20.0 +0100
+++ preproc/type.c  2008-03-02 11:49:11.0 +0100
@@ -259,7 +259,7 @@
 
 	ECPGdump_a_simple(o, name,
 			  type->u.element->type,
-			  type->u.element->size, type->size, NULL, prefix, type->lineno);
+			  type->u.element->size, type->size, NULL, prefix, type->u.element->lineno);
 
 	if (ind_type != NULL)
 	{

Thanks Michael,

I'll give it a try.

Steve

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


[GENERAL] cast problem 8.3.1

2008-03-20 Thread Steve Clark

Hello List,

I can't seem to figure out what is wrong with this code, which worked on
7.4.x. I've added casts to everything but still get:

postgres error log:
ERROR:  operator does not exist: text = integer
HINT:  No operator matches the given name and argument type(s). You 
might need to add explicit type casts.
STATEMENT:  update T_MON_DEVICE_STATUS set device_ip  =  $1  :: inet 
 , status  =  $2  :: integer  , status_date  = now () , 
last_event_log_no  = case  $3  when 0 then null else  $4  :: integer 
end  where unit_serial_no =  $5  :: text   and device_name =  $6  :: text


pgc code:
EXEC SQL UPDATE T_MON_DEVICE_STATUS
   SET device_ip = :h_device_ip::inet,
       status = :h_status::integer,
       status_date = now(),
       last_event_log_no =
           case :h_event_log_no when 0 then null
           else :h_event_log_no::integer end
 WHERE unit_serial_no = :h_unit_serial_no::text
   AND device_name = :h_device_name::text;

table:
\d t_mon_device_status
Table "public.t_mon_device_status"
  Column   |   Type   | Modifiers
---+--+---
 unit_serial_no| character varying(15)| not null
 device_name   | character varying(64)| not null
 device_ip | inet | not null
 status| integer  | not null
 status_date   | timestamp with time zone | not null
 last_event_log_no | integer  |
Indexes:
"pk_tmds_usn_dn" PRIMARY KEY, btree (unit_serial_no, device_name)
Foreign-key constraints:
"fk_tmds_usn" FOREIGN KEY (unit_serial_no) REFERENCES 
t_unit_status_log(unit_serial_no) ON DELETE CASCADE


maybe someone else can see where I am going wrong.

Thanks in advance,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] cast problem 8.3.1

2008-03-21 Thread Steve Clark

Charles Simard wrote:


|> 
|> postgres error log:

|> ERROR:  operator does not exist: text = integer
|> HINT:  No operator matches the given name and argument type(s). You 
|> might need to add explicit type casts.
|> STATEMENT:  update T_MON_DEVICE_STATUS set device_ip  =  $1  :: inet 
|>   , status  =  $2  :: integer  , status_date  = now () , 
|> last_event_log_no  = case  $3  when 0 then null else  $4  :: integer 
|> end  where unit_serial_no =  $5  :: text   and device_name = 
|>  $6  :: text
|> 



You're not casting your $3.



thanks Charles and Rodrigo - that fixed it.

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
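
For the archives, the corrected fragment presumably just casts the CASE's
test expression as well:

last_event_log_no =
    case :h_event_log_no::integer when 0 then null
    else :h_event_log_no::integer end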


[GENERAL] ecpg program getting stuck

2008-03-21 Thread Steve Clark

Hello List

I have a program that worked fine in 7.4.19. I am in the process of upgrading
to 8.3.1, and my program runs for a while and then hangs. I used gdb to attach
to the process, and the backtrace below shows it going into the libpq library
and getting stuck. Anybody have any ideas on how to proceed?

I don't see anything in the postgres error log file.

(gdb) bt
#0  0x282e2de3 in poll () from /lib/libc.so.6
#1  0x280af646 in pqPutMsgStart () from /usr/X11R6/lib/libpq.so.5
#2  0x280afa0b in pqWaitTimed () from /usr/X11R6/lib/libpq.so.5
#3  0x280afa65 in pqWait () from /usr/X11R6/lib/libpq.so.5
#4  0x280ad595 in PQgetResult () from /usr/X11R6/lib/libpq.so.5
#5  0x280ad6b3 in PQgetResult () from /usr/X11R6/lib/libpq.so.5
#6  0x280c3655 in ECPGdo () from /usr/X11R6/lib/libecpg.so.6
#7  0x08054378 in updateTCTS (pUnit=0x8081800, pMsg=0xbfbf8850 "")
at srm2_monitor_db.pgc:2313
#8  0x0804f8ae in main (argc=3, argv=0xbfbf7fc0) at 
srm2_monitor_server.c:3356


Thanks,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] ecpg program getting stuck

2008-03-21 Thread Steve Clark

Tom Lane wrote:

Steve Clark <[EMAIL PROTECTED]> writes:


I have a program that worked fine in 7.4.19. I am in process of upgrading
to 8.3.1 and my program runs for a while and then hangs. I used gdb
to attach to the process and it shows the following backtrace which 
shows it

going into the libpq library and getting stuck.



Well, gdb is lying to you to some extent (you'd probably get a better
backtrace if you had built libpq with debug symbols), but I think it's
simply waiting for a query response.  Is the connected backend busy?

If the problem is that some query is taking way longer than you were
expecting, the first thought that comes to mind is "did you ANALYZE
your tables after reloading", and the second is "check for
configuration settings that you forgot to propagate into the new
installation".

regards, tom lane




Hi Tom,

I was testing with 8.2.6 before 8.3.0 came out and I don't think I saw 
this problem.
I recompiled with debugging turned on and I'll do a closer inspection 
when/if it happens

again.

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
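
For anyone hitting this after a dump-and-reload upgrade, Tom's two questions
translate directly into SQL (pg_stat_activity's column names here are the
pre-9.2 ones):

-- is the connected backend busy, and on what?
SELECT procpid, current_query FROM pg_stat_activity;
-- refresh planner statistics after the reload:
ANALYZE VERBOSE;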


[GENERAL] --enable-thread-safety bug

2008-03-21 Thread Steve Clark

Hello List,

I am running 8.3.1 on FreeBSD 6.2 patch-7.

The ports for Freebsd turn on --enable-thread-safety during configure 
of pg.


After running for some time, my app has been getting a core dump - 
sig 11.


#0  0x28333b96 in memcpy () from /lib/libc.so.6
(gdb) bt
#0  0x28333b96 in memcpy () from /lib/libc.so.6
#1  0x280d0122 in ecpg_init_sqlca (sqlca=0x0) at misc.c:100
#2  0x280d0264 in ECPGget_sqlca () at misc.c:145
#3  0x280d056c in ecpg_log (
format=0x280d1d78 "free_params line %d: parameter %d = %s\n") at 
misc.c:243
#4  0x280c9758 in free_params (paramValues=0x836fe00, nParams=104, 
print=1 '\001',

lineno=3303) at execute.c:1045
#5  0x280c9f08 in ecpg_execute (stmt=0xa726f00) at execute.c:1298
#6  0x280ca978 in ECPGdo (lineno=3303, compat=0, force_indicator=1,
connection_name=0x0, questionmarks=0 '\0', st=0,
query=0x806023c "update T_UNIT_STATUS_LOG set ip_address  =  $1 
:: inet   , last_ip_address  =  $2  :: inet   , unit_date  =  $3  :: 
timestamp with time zone  , unit_raw_time  =  $4  , status_date  = now 
() , unit_ac"...) at execute.c:1636

#7  0x08057a46 in UpdateTUSL (pCachedUnit=0x807b680, msg=0xbfbf8850 "",
p_threshold=80, p_actualIP=0xbfbfe880 "24.39.85.226")
at srm2_monitor_db.pgc:3303
#8  0x0804f174 in main (argc=3, argv=0xbfbf7fc0) at 
srm2_monitor_server.c:3265

(gdb) f 2
#2  0x280d0264 in ECPGget_sqlca () at misc.c:145
145 ecpg_init_sqlca(sqlca);
(gdb) p sqlca
$1 = (struct sqlca_t *) 0x0

in looking in the code in misc.c

I see:

struct sqlca_t *
ECPGget_sqlca(void)
{
#ifdef ENABLE_THREAD_SAFETY
struct sqlca_t *sqlca;

pthread_once(&sqlca_key_once, ecpg_sqlca_key_init);

sqlca = pthread_getspecific(sqlca_key);
if (sqlca == NULL)
{
sqlca = malloc(sizeof(struct sqlca_t));
^
ecpg_init_sqlca(sqlca);
pthread_setspecific(sqlca_key, sqlca);
}
return (sqlca);
#else
return (&sqlca);
#endif
}

The return from malloc should be checked to make sure it succeeds - 
right???


Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
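
For illustration, the kind of check being asked for - what the caller should
then do is exactly the question debated downthread - might look like:

sqlca = malloc(sizeof(struct sqlca_t));
if (sqlca == NULL)
{
    /* no error-return convention exists here, so the best this
       sketch can do is complain and hand back NULL */
    fprintf(stderr, "ecpg: out of memory\n");
    return NULL;
}
ecpg_init_sqlca(sqlca);
pthread_setspecific(sqlca_key, sqlca);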


Re: [GENERAL] --enable-thread-safety bug

2008-03-22 Thread Steve Clark

Tom Lane wrote:

Steve Clark <[EMAIL PROTECTED]> writes:

The return from malloc should be checked to make sure it succeeds - 
right???



Probably, but what do you expect the code to do if it doesn't succeed?
This function seems not to have any defined error-return convention.

regards, tom lane


Retry the malloc, send an out-of-memory message to the postgres log, abort
the transaction - maybe there is a memory leak when --enable-thread-safety is
enabled - I don't know; I am not a postgres developer, so I don't know all
the issues. All I know is that, as a user, having a program like postgres
just sig 11 is unacceptable! As a commercial developer of software for over
30 years, I would never just do nothing.

My $.02
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] --enable-thread-safety bug

2008-03-22 Thread Steve Clark

Tom Lane wrote:

Martijn van Oosterhout <[EMAIL PROTECTED]> writes:


Note this is in your application, not the server. Only your program
died. Of course the transaction got aborted, since the client (you)
disconnected. There is no way for this to write to the server log,
since it may be on another machine...



Right.  And note that if we don't have enough memory for the struct
that was requested, we *certainly* don't have enough to do anything
interesting.  We could try

fprintf(stderr, "out of memory\n");
exit(1);

but even that I would give only about 50-50 odds of success; and more
to the point, how is this any better for an application than a core
dump?  It's still summary termination.



Do you create and destroy a lot of threads since it seems this memory
won't be freed?



The OP's program isn't threaded at all, since he was apparently running
with a non-threaded ecpg/libpq before.  This means that the proposal of
looping till someone else frees memory is at least as silly as allowing
the core dump to happen.

regards, tom lane


I guess the real question is why we are running out of memory when this
option is enabled. Since my app doesn't use threads, that points to a memory
leak in the ecpg library when --enable-thread-safety is turned on.


Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] --enable-thread-safety bug

2008-03-22 Thread Steve Clark

Martijn van Oosterhout wrote:

On Sat, Mar 22, 2008 at 11:28:24AM -0400, Steve Clark wrote:

Retry - the malloc - maybe there is a memory leak when 
--enable-thread-saftey is enabled,
send an out of memory message to the postgres log, abort the 
transaction - I don't know I am
not a postgres developer so I don't know all the issues. I all I know 
as a user having a program
like postgres just sig 11 is unacceptable! As a commercial developer 
of software for over 30 years

I would never just do nothing.



Note this is in your application, not the server. Only your program
died. Of course the transaction got aborted, since the client (you)
disconnected. There is no way for this to write to the server log,
since it may be on another machine...

As to the issue at hand: it looks like your program ran out of memory.
Can you confirm the memory was running low? Even if it handled it by
returning NULL, the caller will die because it also needs memory.

Do you create and destroy a lot of threads since it seems this memory
won't be freed?

Have a nice day,
My program has no threads - as I pointed out, if I change the default
Makefile in the FreeBSD ports system to not enable thread safety, my program
runs just fine for days on end. It appears to me, without any kind of close
examination, that there is a memory leak in the ecpg library when
--enable-thread-safety is turned on.

I had an earlier problem in 8.2.6 where, if --enable-thread-safety was turned
on, sqlca would always be zero no matter whether there was an error or not.

This appears to me to be a problem in the ecpg library when thread safety is
enabled.


Have a nice day.

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] --enable-thread-safety bug

2008-03-22 Thread Steve Clark

Michael Meskes wrote:

On Sat, Mar 22, 2008 at 12:51:30PM -0400, Steve Clark wrote:

My program had no threads - as I pointed out if I change the default  
Makefile in the FreeBSD ports
system to not enable thread safety my program runs just fine for days  
on end. It appears to me
without any kind of close examination that there is a memory leak in the 
ecpg library when enable

thread safety is turned on.



There are just a few variables covered by ENABLE_THREAD_SAFETY. I wonder
how the program manages to spend so much time allocating memory to eat
all of it. Could you give us some more info about your source code? Do
you use descriptors? Auto allocating? 


Michael


Hi Michael,

Not exactly sure what you are asking about - descriptors and auto 
allocating.


The program processes about 80 packets a day, which can update several
tables. It runs continuously, reading udp packets from systems at remote
locations coming in over the internet.


It has a global
exec sql include sqlca;

then a number of functions that get called with each function having 
it own


xxx( args,... )
{
EXEC SQL BEGIN DECLARE SECTION;
a bunch of variable
EXEC SQL END DECLARE SECTION;

with various EXEC SQL inserts, updates and selects.
with checks of sqlca.sqlcode to determine if the sql statement succeeded.

}

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] --enable-thread-safety bug

2008-03-22 Thread Steve Clark

Steve Clark wrote:

Michael Meskes wrote:


On Sat, Mar 22, 2008 at 12:51:30PM -0400, Steve Clark wrote:


My program had no threads - as I pointed out if I change the default  
Makefile in the FreeBSD ports
system to not enable thread safety my program runs just fine for days  
on end. It appears to me
without any kind of close examination that there is a memory leak in the 
ecpg library when enable

thread safety is turned on.



There are just a few variables covered by ENABLE_THREAD_SAFETY. I wonder
how the program manages to spend so much time allocating memory to eat
all of it. Could you give us some more info about your source code? Do
you use descriptors? Auto allocating? 


Michael



Hi Michael,

Not exactly sure what you are asking about - descriptors and auto 
allocating.


The program processes about 80 packets a day, which can update 
several tables.
It runs continuously reading udp packets from systems at remote 
locations coming in over the internet.


It has a global
exec sql include sqlca;

then a number of functions that get called with each function having 
it own


xxx( args,... )
{
EXEC SQL BEGIN DECLARE SECTION;
a bunch of variable
EXEC SQL END DECLARE SECTION;

with various EXEC SQL inserts, updates and selects.
with checks of sqlca.sqlcode to determine if the sql statement succeeded.

}

Steve


To further illustrate our code, below is a typical exec sql statement:
exec sql insert into t_unit_event_log
   (event_log_no,
unit_serial_no,
event_type,
event_category,
event_mesg,
event_severity,
event_status,
event_ref_log_no,
event_logged_by,
event_date,
alarm,
last_updated_by,
last_updated_date)
values (nextval('seq_event_log_no'),
:h_serial_no,
'ALERT',
:h_category,
:h_mesg,
:h_sev,
3,
NULL,
current_user,
now(),
:h_alarm,
current_user,
now());

if (sqlca.sqlcode != 0) 


{
VARLOG(INFO, LOG_LEVEL_DBG4, "could not insert into 
T_UNIT_EVENT_LOG\n");

VARLOG(INFO, LOG_LEVEL_DBG4, "insertTUEL returns %d\n", ret);
return ret;
}


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] --enable-thread-safety bug

2008-03-23 Thread Steve Clark

Craig Ringer wrote:

Steve Clark wrote:


I guess the real question is why we are running out of memory when 
this option is enabled.
Since my app doesn't use threads that points to a memory leak in the 
ecpg library when enable thread

safety is turned on.



It might be worth building ecpg with debug symbols then running your 
app, linked to that ecpg, under Valgrind. If you are able to produce 
more specific information about how the leak occurs in the context of 
your application people here may be more able to help you.


--
Craig Ringer




Hi Craig,

I could do that - but in my situation I am not using threads, so I really
don't need --enable-thread-safety turned on. The freebsd ports maintainer for
postgresql decided everybody should have it whether they needed it or not. I
simply deleted the option from the freebsd makefile, rebuilt the port,
relinked my app, and no more problem. I just thought the postgresql
developers would want to know there was a bug. If they don't care to
investigate or troubleshoot the bug, it is fine by me.

I just find it interesting that a non-threaded program causes a memory leak
when used with postgres libraries that are compiled with
--enable-thread-safety - doesn't seem too safe to me.


Have a nice day.

Steve

-
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] --enable-thread-safety bug

2008-03-23 Thread Steve Clark

Michael Meskes wrote:

On Sat, Mar 22, 2008 at 04:58:28PM -0400, Steve Clark wrote:

Not exactly sure what you are asking about - descriptors and auto  
allocating.



So I guess you don't use either feature. :-)


The program processes about 80 packets a day, which can update  
several tables.
It runs continously reading udp packets from systems at remote locations 
coming in over the internet.



But the code for processing all those statements is the same, with and
without threading enabled.

One piece of code that differs is the allocation of sqlca, but that
structure is a mere 215 bytes (about). Even if it was allocated 80
times it would make up for a memory loss of about 164MB. Which brings up
the question of how long the application runs until it segfaults.

As Tom already pointed out, without more information there simply is no
way for us to find out what's going on. We are more than willing to dig
into it, but we need more to be able to.

Michael


Ok, I tried valgrind, and after a while it dies with a valgrind assertion
error before providing any useful data.

So I tried linking with -lc_r and it appears to have stopped the leak.
Without -lc_r, "top" showed my app quickly climbing over 150 MB in memory
size - it is now staying steady at about 8 MB, which is about what it used
when I compiled the ecpg lib without --enable-thread-safety enabled.

Now why does this make a difference in ecpg?

HTH,
Steve

If anyone cares below is the valgrind assertion failure:
valgrind: vg_malloc2.c:1008 (vgPlain_arena_malloc): Assertion `new_sb 
!= ((void*)0)' failed.

==4166==at 0xB802BE1F: (within /usr/local/lib/valgrind/stage2)
==4166==by 0xB802BE1E: (within /usr/local/lib/valgrind/stage2)
==4166==by 0xB802BE5D: vgPlain_core_assert_fail (in 
/usr/local/lib/valgrind/stage2)
==4166==by 0xB8028091: vgPlain_arena_malloc (in 
/usr/local/lib/valgrind/stage2)


sched status:

Thread 1: status = Runnable, associated_mx = 0x0, associated_cv = 0x0
==4166==at 0x3C03894B: calloc (in 
/usr/local/lib/valgrind/vgpreload_memcheck.so)



Note: see also the FAQ.txt in the source distribution.
It contains workarounds to several common problems.

If that doesn't help, please report this bug to: valgrind.kde.org

In the bug report, send all the above text, the valgrind
version, and what Linux distro you are using.  Thanks.

-
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] renumber table

2008-06-18 Thread Steve Clark

Hello List,

I have acquired the task of maintaining an existing application that uses
postgresql. I am only lightly versed in sql and have the following problem I
need to solve.

I have a table in which each row has a column - row_number. The row_numbers
need to be sequential. Everything is fine until a row in the middle of the
table gets deleted; then I need to renumber the remaining rows so they are
sequential again. The table is small, typically less than 100 rows. Is there
a simple way to do this in sql? I know I can dump the table to an external
file and then use 'C' or a bash script to fix up the row_numbers and then
reload the table - but it seems there is probably a more elegant way to do
this in sql.

Anyway thanks for any tips.

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] renumber table

2008-06-18 Thread Steve Clark

Scott Marlowe wrote:

On Wed, Jun 18, 2008 at 2:58 PM, Steve Clark <[EMAIL PROTECTED]> wrote:


Hello List,

I have acquired the task of maintaining an existing application that uses
postgresql. I am only lightly versed
in sql and have the following problem I need to solve.

I have a table in which each row has a column - row_number. The row_numbers
need to be sequential.



I question this.  Someone may have arbitrarily decided it needs to be
sequential, but it's quite likely that your business process does NOT
need it to be sequential, at least not in the database.

Any solution that involves actually renumbering the rows is a
guaranteed performance loser, and a good way to scramble your data in
wonderful, horrible, scary new ways.

It's generally better to use a separate table with a numbering schema
in it that points to the rows so you don't have to actually update the
rows themselves, only the numbering scheme.  But even that is fraught
with horrible possibilities in terms of social engineering problems
that show up.

Update number 154929!  I was working on it last month and it needs x
added to it!

Someone updates 154929 only to find out that they should have updated
154924 instead because of the renumbering.



Thanks Scott,

I realize this is certainly not the best design - but at this point in time it 
can't be changed. The table
is rarely updated and never concurrently and is very small, typically less than 
100 rows so there really is
no performance impact.

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] renumber table

2008-06-19 Thread Steve Clark

Scott Marlowe wrote:

On Wed, Jun 18, 2008 at 3:50 PM, Steve Clark <[EMAIL PROTECTED]> wrote:


I realize this is certainly not the best design - but at this point in time
it can't be changed. The table
is rarely updated and never concurrently and is very small, typically less
than 100 rows so there really is
no performance impact.



Then the easiest way to renumber a table like that is to do something like:

create temp sequence myseq;
update table set idfield=nextval('myseq');

and hit enter.
and pray.  :)



Hi Scott,

I am not sure that will do what I want. As an example, suppose I have 5 rows
and the idfield is 1,2,3,4,5; now row 1 is updated (not the idfield but
another column), then row 3 is deleted. Now I would like to renumber them
such that 1 is 1, 2 is 2, 4 is 3, and 5 is 4.

I don't think what you wrote will necessarily keep them in the same relative
order that they were before.

Regards,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] renumber table

2008-06-20 Thread Steve Clark

David Spadea wrote:

Steve,

I'd just like to add that I agree with Scott that this is asking for 
trouble if the field being renumbered is used as a foreign key 
somewhere. If you have no way of changing this logic, you should at 
least look into 'on delete cascade' and 'on update cascade' on your 
dependent tables. You can expect performance to suffer if the dependent 
tables are large, but at least you don't lose referential integrity.


Dave



On Thu, Jun 19, 2008 at 7:07 PM, David Spadea <[EMAIL PROTECTED] 
<mailto:[EMAIL PROTECTED]>> wrote:


Steve,

Here's your problem and its solution as I understand it:

-- Given an example table like this (data isn't too important --
just the sequencing)
create table meh
(
  idserial primary key
, word   varchar(10)
);

-- Populate it with data

insert into meh (word) values
('one'),
('two'),
('three'),
('four'),
('five'),
('six'),
('seven'),
('eight'),
('nine'),
('ten');

-- Delete a row from the middle of the table
delete from meh where id = 5;

-- Renumber all of the rows ABOVE the deleted row
-- This will maintain sequencing. This assumes that no gaps existed
prior to the delete of this row,
-- and that only one row was deleted.

update meh
set id = id - 1
where id > 5;

At this point, if you've got a sequence that populates the id field,
you'll need to set its nextval.


Dave


On Thu, Jun 19, 2008 at 7:54 AM, Steve Clark <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>> wrote:

Scott Marlowe wrote:

On Wed, Jun 18, 2008 at 3:50 PM, Steve Clark
<[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>> wrote:

I realize this is certainly not the best design - but at
this point in time
it can't be changed. The table
is rarely updated and never concurrently and is very
small, typically less
than 100 rows so there really is
no performance impact.



Then the easiest way to renumber a table like that is to do
something like:

create temp sequence myseq;
update table set idfield=nextval('myseq');

and hit enter.
and pray.  :)


Hi Scott,

I am not sure that will do what I want. As an example
suppose I have 5 rows and the idfield is 1,2,3,4,5
now row 1 is updated, not the idfield but another column, then
row 3 is deleted.
Now I would like to renumber them such that 1 is 1, 2 is 2, 4
is 3, 5 is 4.

I don't think what you wrote will necessarily keep them in the
same relative order that they
were before.

Regards,
Steve


-- 
Sent via pgsql-general mailing list

(pgsql-general@postgresql.org <mailto:pgsql-general@postgresql.org>)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general




Thanks to all that replied.

I agree the design wasn't the best, and we had a function similar to what you
describe to keep things in order. The problem was we had a foreign key
constraint that caused a row to be deleted, because the foreign key was
deleted when it shouldn't have been. So the table row numbering got messed
up. It really doesn't cause a problem, but when the table information gets
displayed it uses the row num for access to the table, and it looked weird
with the gaps in the numbering.

I took the easy way out: before displaying the table, I check whether
max(row_num) is equal to count(*); if not, I renumber it in the php script
that displays it, using a loop.

Thanks again.
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
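
For later readers: on 8.4 and up, a window function can close the gaps in one
statement while preserving relative order (meh and its default constraint and
sequence names are taken from David's example; dropping the primary key first
avoids transient duplicate-key failures while the rows shift):

BEGIN;
ALTER TABLE meh DROP CONSTRAINT meh_pkey;
UPDATE meh m
   SET id = r.new_id
  FROM (SELECT id, row_number() OVER (ORDER BY id) AS new_id FROM meh) r
 WHERE m.id = r.id;
ALTER TABLE meh ADD PRIMARY KEY (id);
SELECT setval('meh_id_seq', (SELECT max(id) FROM meh));
COMMIT;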


Re: [GENERAL] When to use cascading deletes?

2009-06-11 Thread Steve Clark

David wrote:

Thanks for the tips, those make sense.

I was thinking through this some more after sending my mail, and came
to similar conclusions.

It would be nice though if this info was more available to people
doing research on the subject. Where did you pick up these ideas? At
least this thread should start turning up in Google at some point :-)

Also, probably some of my confusion comes from being used to
app-centric logic, where the app (and programmer/sysadmin/etc) likes
to be in control of all the database updates, rather than additional
deletes/updates/etc taking place behind the scenes. I guess it's a
kind of trade-off. Handing off more logic to the db to simplify db
interaction, at the expense of the database sometimes doing
unexpected/unwanted things.

For instance, it's possible to make the db rules *too* clever. That
kind of thing can bite you later, when the db makes updates in the
background that you don't want for some reason. Then you need to work
around them, and can't take those rules out permanently, since it
breaks other apps that depend on the db being intelligent. You need to
think really carefully about when to add db intelligence, which could
have unwanted consequences later...

For people like me, I think it helps to think of the data as living in
it's own world inside the DB, with extra rules there that apps don't
always know or care about, and the users & apps need to be aware of
the conventions around each table. As opposed to being a simple data
store for apps (with integrity checks). For similar reasons I've
avoided stored procedures and rules, preferring simpler database
schema, but more complex logic to handle them in apps.

Another consideration, is revision controlling of the extra rules/etc.
My current approach is to have all db schema setup & updates etc in a
Python script during development, which is revision controlled (and
then later re-used for remote db installs/upgrades).

David.


Hi David,

The advantage of putting business logic in the database is that it lives in
only one place.
If you leave it to the apps then you have no consistency, unless you have a
library
of the more complex logic that all the apps use. The problem with that is I find
programmers always want to reinvent the wheel and do their own thing, so
they ignore the library and do it "better" themselves.

My $.02
Steve 
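
To make the trade-off in this thread concrete, here is a minimal sketch of
the two delete behaviors being weighed, using hypothetical tables:

create table parent (id integer primary key);
create table child_cascade (
    id integer primary key,
    parent_id integer references parent(id) on delete cascade
);
create table child_restrict (
    id integer primary key,
    parent_id integer references parent(id) on delete restrict
);

-- Deleting a parent silently removes matching child_cascade rows,
-- but is refused with an error while any child_restrict row still
-- references it:
delete from parent where id = 1;

The cascade keeps the data consistent with no application code at all, which
is exactly the "updates behind the scenes" behavior David is weighing.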


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] checkpoints/bgwriter tuning verification

2009-10-30 Thread Steve Clark

On 10/29/2009 04:42 PM, Scott Marlowe wrote:

On Thu, Oct 29, 2009 at 12:46 PM, Vick Khera  wrote:

On my primary DB I'm observing random slowness which just doesn't make
sense to me.  The I/O system can easily do 40MB/sec writes, but I'm
only seeing a sustained 5MB/sec, even as the application is stalling
waiting on the DB.


Just one point on top of everything else you'll hear.  40 MB/sec
sequential throughput does not equal 40MB/sec random PLUS checkpoint
throughput.  Random access is gonna lower that 40MB/sec way down real
fast.

First step to speed things up is putting pg_xlog on its own disk(s).

Hi Scott,

How exactly do you do this? By creating a link to the new location or
is there a config option somewhere that says where the pg_xlog resides?

Thanks,
Steve


Since pg_xlog is mostly sequential access, it's much faster when
it's not competing with the rest of the db.  THEN if you need faster
disks you can buy them and throw them at path/data/base/.  However,
often just splitting things out like this will make a world of
difference on a write-heavy database.
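
For the archives, the usual recipe is a symlink, done while the server is
stopped. A sketch, assuming the data directory is /var/lib/pgsql/data and the
new disk is mounted at /mnt/fastdisk (adjust paths, run as the postgres user):

pg_ctl -D /var/lib/pgsql/data stop
mv /var/lib/pgsql/data/pg_xlog /mnt/fastdisk/pg_xlog
ln -s /mnt/fastdisk/pg_xlog /var/lib/pgsql/data/pg_xlog
pg_ctl -D /var/lib/pgsql/data start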




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Steve Clark

Hello,

Don't know if this is better asked on the CentOS ML or here, but...

CentOS 6 supplies 8.4.20, but I want to use hot standby. The issue is that the
PGDG packages don't install into the "usual place"; they install into
version-specific directories, including the data, binaries, libraries, etc.
How do people deal with this when for years they
have been using Postgres with "stuff" in standard directories, not
version-specific ones?

Thanks for any tips.
Steve

--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] CentOS 6 and Postgresql 9.3.4 from PGDG

2014-04-14 Thread Steve Clark

On 04/14/2014 09:02 AM, Moshe Jacobson wrote:


On Mon, Apr 14, 2014 at 8:24 AM, Steve Clark <scl...@netwolves.com> wrote:

CentOS 6 supplies 8.4.20, but I want to use hot standby. The issue is that
the PGDG
packages don't install into the "usual place"; they install into
version-specific directories,
including the data, binaries, libraries, etc. How do people deal with this
when for years they
have been using Postgres with "stuff" in standard directories, not
version-specific ones?


It's actually nicer that it uses a version specific directory, IMO, since you 
can have two versions installed simultaneously for upgrade purposes.
I just create symlinks data/ and backups/ in /var/lib/pgsql to point to the 
dirs of the same names under the 9.3/ directory.
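
A sketch of what that looks like with the PGDG layout (paths assume the 9.3
packages; adjust to taste):

ln -s /var/lib/pgsql/9.3/data    /var/lib/pgsql/data
ln -s /var/lib/pgsql/9.3/backups /var/lib/pgsql/backups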


How did you deal with binaries and libraries, as well as third party apps like 
perl modules or php/apache modules?

--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] reindexdb

2014-04-29 Thread Steve Clark

Hello,

We have a small (2GB) 8.4.20 database. The prior maintainer ran a reindexdb
twice a day (originally started with 7.3.x). Can anybody see a reason why we 
need to
continue to do this?

Thanks,


--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] Trouble finding libpq on Centos 6.5

2014-05-08 Thread Steve Clark

On 05/08/2014 02:54 AM, Aqz wrote:

Hi.

I'm not sure if I should write here, or on a Centos mailing list...

I have fresh, vanilla Centos 6.5 with postgresql yum repository added.
I've successfully installed postgresql93 and postgresql93-libs packages, but 
still :

$ ld -lpq
ld: cannot find -lpq

$ ldconfig -p
libpq.so.5 (libc6,x86-64) => /usr/pgsql-9.3/lib/libpq.so.5
libpq.so (libc6,x86-64) => /usr/pgsql-9.3/lib/libpq.so


What did I do wrong?

--
Wojtek


They put the postgres stuff in versioned directories.
There is a program you can use to find out where everything is:
do a man on pg_config.

$ pg_config --bindir
/usr/pgsql-9.3/bin

$ pg_config

BINDIR = /usr/pgsql-9.3/bin
DOCDIR = /usr/share/doc/pgsql
HTMLDIR = /usr/share/doc/pgsql
INCLUDEDIR = /usr/pgsql-9.3/include
PKGINCLUDEDIR = /usr/pgsql-9.3/include
INCLUDEDIR-SERVER = /usr/pgsql-9.3/include/server
LIBDIR = /usr/pgsql-9.3/lib
PKGLIBDIR = /usr/pgsql-9.3/lib
LOCALEDIR = /usr/pgsql-9.3/share/locale
MANDIR = /usr/pgsql-9.3/share/man
SHAREDIR = /usr/pgsql-9.3/share
SYSCONFDIR = /etc/sysconfig/pgsql
PGXS = /usr/pgsql-9.3/lib/pgxs/src/makefiles/pgxs.mk
CONFIGURE = '--disable-rpath' '--prefix=/usr/pgsql-9.3' 
'--includedir=/usr/pgsql-9.3/include' '--mandir=/usr/pgsql-9.3/share/man' 
'--datadir=/usr/pgsql-9.3/share' '--with-perl' '--with-python' '--with-tcl' 
'--with-tclconfig=/usr/lib64' '--with-openssl' '--with-pam' '--with-krb5' 
'--with-gssapi' '--with-includes=/usr/include' '--with-libraries=/usr/lib64' 
'--enable-nls' '--with-ossp-uuid' '--with-libxml' '--with-libxslt' 
'--with-ldap' '--with-system-tzdata=/usr/share/zoneinfo' 
'--sysconfdir=/etc/sysconfig/pgsql' '--docdir=/usr/share/doc' 'CFLAGS=-O2 -g 
-pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector 
--param=ssp-buffer-size=4 -m64 -mtune=generic -I/usr/include/et' 'CPPFLAGS= 
-I/usr/include/et'
CC = gcc
CPPFLAGS = -I/usr/include/et -D_GNU_SOURCE -I/usr/include/libxml2 -I/usr/include
CFLAGS = -O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions 
-fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic 
-I/usr/include/et -Wall -Wmissing-prototypes -Wpointer-arith 
-Wdeclaration-after-statement -Wendif-labels -Wmissing-format-attribute 
-Wformat-security -fno-strict-aliasing -fwrapv
CFLAGS_SL = -fpic
LDFLAGS = -L../../../src/common -L/usr/lib64 -Wl,--as-needed
LDFLAGS_EX =
LDFLAGS_SL =
LIBS = -lpgport -lpgcommon -lxslt -lxml2 -lpam -lssl -lcrypto -lgssapi_krb5 -lz 
-lreadline -lcrypt -ldl -lm
VERSION = PostgreSQL 9.3.4
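
For anyone who lands here from a search: the bare "ld -lpq" fails because the
versioned lib directory is not on the linker's default search path. A build
line that picks the paths up from pg_config, as a sketch (myprog.c is a
stand-in for your source file):

cc -o myprog myprog.c -I$(pg_config --includedir) -L$(pg_config --libdir) -lpq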



--
Stephen Clark
*NetWolves Managed Services, LLC.*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


[GENERAL] Ideas for query

2011-10-13 Thread Steve Clark

Hello List,

I am not very experienced at writing SQL and I have a problem I can't readily
solve, so I was hoping to get some help from this great list.

Here is my problem: I have a table with event data about the status of units
in the field. It has
many kinds of events, one of which carries downtime information. I have
written a
query to extract that information and calculate the % downtime. The problem I
am having is that if a unit was never down it doesn't appear in my query at
all; I would like to be able to somehow report it as 100% up.

Below is an example of the event data and my query is below it. Any suggestions 
would be greatly appreciated.
t_unit has base information about the unit

v_unit_event_info
 group_id | unit_serial_no | event_category | event_mesg                                                                                       | event_date
----------+----------------+----------------+--------------------------------------------------------------------------------------------------+-------------------------------
 146      | K101334        | UNIT           | Unit is Up. Last packet received:2010-12-10 22:56:18.330648-05, Total down time:00:09:17.757409 | 2010-12-10 22:56:18.330648-05

select unit_serial_no,
       sum(down_time),
       round((100 - (extract(epoch from sum(down_time))
                     / extract(epoch from (timestamp '2011-09-30 23:59:59'
                                         - timestamp '2011-09-01 00:00:00'))) * 100)::numeric, 2) as "Up Time %"
  from (select unit_serial_no, down_time
          from (select unit_serial_no,
                       substring(event_mesg from strpos(event_mesg,'Total')+16 for 40)::interval as down_time
                  from v_unit_event_info
                 where event_category = 'UNIT'
                   and event_mesg like '%Total %'
                   and event_date >= '2011-09-01 00:00:00'
                   and event_date <= '2011-09-30 23:59:59'
                   and unit_serial_no in (select unit_serial_no from t_unit
                                           where group_id = '199' and activated = 'y')
                 order by unit_serial_no) as foo
         where down_time < '5 days') as foo2
 group by unit_serial_no;
--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] Ideas for query

2011-10-13 Thread Steve Clark

On 10/13/2011 08:31 AM, Raymond O'Donnell wrote:

On 13/10/2011 12:17, Steve Clark wrote:

Hello List,

I am a not very experienced writing sql and I have a problem I can't
readily solve, so
I was hoping to get some help from this great list.

Here is my problem I have a table that has event data about the status
of units in the field. It has
many kinds of events one of which has down time information. I have
written a query to extract that
information and calculate the % downtime. The problem I am having is
that if the unit was never down
I don't see it in my query, I would like to be able to somehow report it
as 100% up.

The way I'd approach this is to do a LEFT OUTER JOIN between the units
table and the events table, with the units on the left of the join: this
way any particular unit will always appear in the result set, and if
there are no corresponding rows in the events table then you know that
the unit had 100% uptime.

HTH.

Ray.



Hi Ray,

Thanks for the response. I am afraid I don't know enough about how to
formulate the left outer join,
so I attacked the problem from a different direction: creating a temporary
table with all
the units set to 100%, then running my existing query and using the results to
update the
temporary table where the unit serial numbers match.

Steve
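
(For anyone finding this thread in the archives, the join Ray describes would
look roughly like this. A sketch only: it reuses the aggregate query above as
a subquery, so every unit in the group gets a row and units with no downtime
events fall back to 100%:

select u.unit_serial_no,
       coalesce(d.up_pct, 100.00) as "Up Time %"
  from t_unit u
  left outer join (
        select unit_serial_no,
               round((100 - (extract(epoch from sum(down_time))
                             / extract(epoch from (timestamp '2011-09-30 23:59:59'
                                                 - timestamp '2011-09-01 00:00:00'))) * 100)::numeric, 2) as up_pct
          from (select unit_serial_no,
                       substring(event_mesg from strpos(event_mesg,'Total')+16 for 40)::interval as down_time
                  from v_unit_event_info
                 where event_category = 'UNIT'
                   and event_mesg like '%Total %'
                   and event_date >= '2011-09-01 00:00:00'
                   and event_date <= '2011-09-30 23:59:59') as raw
         where down_time < '5 days'
         group by unit_serial_no
       ) d on d.unit_serial_no = u.unit_serial_no
 where u.group_id = '199' and u.activated = 'y';)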

--
Stephen Clark
*NetWolves*
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] Need help for import of text file

2012-12-16 Thread Steve Clark

On 12/16/2012 01:12 PM, Peter Bex wrote:

On Sun, Dec 16, 2012 at 06:48:35PM +0100, Andreas wrote:

With sed as startingpoint I figured it out.
Those 3 steps make the input files consumable for COPY

1. dos2unix
2. sed -i 's/[ \t]*$//'
3. sed -i 's/  / /g'

You can reduce this to one invocation by separating the commands
by a semicolon (or by passing multiple -e flags)

sed -i 's/[ \t]*$//;s/  / /g'


The input files get created by a simple windows batch where I can't
change anything.
It uses echo to attach a line of 4 parameters to those textfiles.

How would you manage if one or more of those parameters contained blanks
in some cases?
This hasn't happened yet, but I consider that luck.   :}

The real column formats are ( TEXT, TEXT, DATE, TIME ).

Well, that's a bit trickier and my sed skills are rather rusty.
I'd probably use awk for these more complex tasks:

awk '/\(.*\)/ { gsub(/ +/, " "); } { print $0 }'

The "gsub" command acts like sed's "s" command with the "g" modifier.
By prefixing the block with the gsub command with a regex, it only
acts on that regex.  The regex in this example only looks for an opening
and a closing paren anywhere on the line; you might need to tweak it
to more closely match your case.  Alternatively, you could implement
a counter that skips the four lines (which can be done with both sed
and awk).

If it gets more complex than this, you can always write a proper
program in a "real" language to do it.  This can be easier to maintain.

Cheers,
Peter

Why not use the squeeze option of tr?

tr -s " "
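
A quick sketch of the squeeze behavior:

$ echo 'a    b  c' | tr -s ' '
a b c

Unlike a single sed 's/  / /g' pass, which only halves each run of spaces,
tr -s collapses any run down to one character in one pass.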

--
Stephen Clark
*NetWolves*
Director of Technology
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
http://www.netwolves.com


Re: [GENERAL] postgreSQL not working after upgrade to Ubuntu 10.4

2010-05-07 Thread Steve Clark

On 05/07/2010 12:10 PM, AllieH wrote:


Yesterday I upgraded from Ubuntu 9.10 to 10.4. Unfortunately this has caused
me serious problems with everything related to postgreSQL. It seems as
though the new version of ubuntu is requiring me to use PostgreSQL 8.4
instead of 8.3 which I was previously using. I tried installing 8.4 but I
can't seem to get the database to start. Has anyone had a similar problems?

You are probably going to have to do an initdb. The 8.4 db format is not
compatible with the 8.3 db.
I ran into this with Fedora when they went from 8.3 to 8.4 in the middle of
the life of Fedora 12. Luckily I had been doing daily pg_dumps, so all I had
to do was initdb under 8.4 and psql < my backup.
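
The sequence, roughly (a sketch; paths and package names vary by distro):

pg_dumpall -U postgres > backup.sql    # while the old 8.3 server is still up
# install 8.4, then as the postgres user:
initdb -D /var/lib/pgsql/data
pg_ctl -D /var/lib/pgsql/data start
psql -U postgres -f backup.sql postgres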
--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] postgreSQL not working after upgrade to Ubuntu 10.4

2010-05-07 Thread Steve Clark

On 05/07/2010 01:22 PM, Alan Hodgson wrote:

On Friday 07 May 2010, AllieH  wrote:

Yesterday I upgraded from Ubuntu 9.10 to 10.4. Unfortunately this has
  caused me serious problems with everything related to postgreSQL. It
  seems as though the new version of ubuntu is requiring me to use
  PostgreSQL 8.4 instead of 8.3 which I was previously using. I tried
  installing 8.4 but I can't seem to get the database to start. Has anyone
  had a similar problems?



Major OS version upgrades don't transparently do database upgrades. You have
to take care of things like that yourself.

Yes, but the install procedure should be smart enough to check whether you are
using a previous
version of Postgres and prompt you before continuing to install the
new database.



In particular, for PostgreSQL, you would need to dump all your databases
using 8.3, then upgrade, then create a new 8.4 installation and restore the
databases into it. Sadly PostgreSQL does not do in-place major version
upgrades.

In your case, you need to stop and make a couple good filesystem-level
backups of your database. Then you need to find a way to either get 8.3
working on that machine, or move the database to another machine where you
can get it working, so you can do the PostgreSQL backup. Then you can think
about getting 8.4 setup on your upgraded machine and restore that backup
into it. You should probably also read through the 8.4 release notes to see
if you will require changes in any applications using those databases.





--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG_DUMP very slow because of STDOUT ??

2010-07-13 Thread Steve Clark

On 07/13/2010 07:29 AM, Andras Fabian wrote:

Now, I have found an unorthodox way to make a slow machine run COPY-to-STDOUT fast. I empty the cache memory
of the server, which makes "free" in "free -m" jump up to 14 GBytes (well, I just
noticed that most of the memory on the server is in "cache" ... up to 22 GBytes). I just entered:

sync; echo 3 > /proc/sys/vm/drop_caches

Running the COPY-to-STDOUT test after this immediately went through in a snap (2 1/2 minutes). I
also see that something related to the file is "nicely" mapped into cache memory,
because as soon as I delete the file (with "rm"), that immediately frees up 3 GBytes of
the cache.

This seems to prove that a memory issue is/was behind the slowdown. But the
question still remains: why and how can this happen? I mean, at some point the memory
manager must have made a very wrong decision, if this is the result of its "normal"
work ... And I don't understand how the writing through the socket affects this (because
I still see the case where a normal COPY-to-FILE didn't slow down at the same time that
COPY-to-STDOUT was crawling).

So, potentially, maybe ... as a quick fix I could clean caches in my backup 
script that starts each night. But is this a safe idea at all? Or could there 
be adverse implications (yes, of course, some queries that got their data from 
the file cache before would now need to repopulate it) ?

Or is there a way to influence the memory manager Linux in a way, that it 
behaves a bit more conservative (or just different in a positive way)?

Andras Fabian


-----Original Message-----
From: Craig Ringer [mailto:cr...@postnewspapers.com.au]
Sent: Tuesday, 13 July 2010 12:51
To: Andras Fabian
Cc: pgsql-general@postgresql.org
Subject: Re: AW: AW: AW: [GENERAL] PG_DUMP very slow because of STDOUT ??

On 13/07/2010 6:26 PM, Andras Fabian wrote:

Wait, now, here I see some correlation! Yes, it seems to be the memory! When I started my COPY-to-STDOUT
experiment I had some 2000 MByte free (well, the server has 24 GByte ... maybe other PostgreSQL processes
used up the rest). Then I could monitor via "ll -h" how the file nicely grew (obviously no
congestion), and in parallel see in "free -m" how the "free" memory went down. Then it
reached a level below 192 MByte, and congestion began. Now it is going back and forth around 118-122-130 ...
Obviously the STDOUT thing ran out of some memory resource.
Now I "only" need to work out what is running out, why, and how I can prevent it.


  >  Could there be some extremely big STDOUT buffering in play 

Remember, "STDOUT" is misleading. The data is sent down the network
socket between the postgres backend and the client connected to that
backend. There is no actual stdio involved at all.

Imagine that the backend's stdout is redirected down the network socket
to the client, so when it sends to "stdout" it's just going to the
client. Any buffering you are interested in is in the unix or tcp/ip
socket (depending on how you're connecting), in the client, and in the
client's output to file/disk/whatever.

--
Craig Ringer


Have you posted this problem/issue to the linux kernel mailing list? You may
get some rude responses about the posting, but someone will give you a tip
about what is causing the problem.

I know there are some sysctls that affect memory management. I use the
following two, based on a recommendation from Linus, to help interactivity
for a desktop user.
/sbin/sysctl vm.dirty_background_ratio=3
/sbin/sysctl vm.dirty_ratio=5




--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG_DUMP very slow because of STDOUT ??

2010-07-13 Thread Steve Clark

On 07/13/2010 10:35 AM, Andras Fabian wrote:

Hi Greg,

hmmm, that's true. Those settings, for example, were also much higher on the Ubuntu
server than on our old machine.
New machine has:
- dirty_ratio = 20 (old has 10)
- dirty_background_ratio = 10 (old has 5)

But obviously setting vm.zone_reclaim_mode=0 "fixes" the problem too (it was "1" on the new
machine and "0" on the old). See my latest post to Craig.

I hope using vm.zone_reclaim_mode=0 doesn't have other dire consequences :-)

It looks to me like vm.zone_reclaim_mode is related to NUMA machines that
have "local" memory per node, and it shouldn't be enabled at all in your environment.



Andras Fabian

-----Original Message-----
From: Greg Smith [mailto:g...@2ndquadrant.com]
Sent: Tuesday, 13 July 2010 16:29
To: Andras Fabian
Cc: Craig Ringer; Tom Lane; pgsql-general@postgresql.org
Subject: Re: [GENERAL] PG_DUMP very slow because of STDOUT ??

Andras Fabian wrote:

So the kernel function it is always idling on seems to be congestion_wait ...


Ugh, not that thing again.  See
http://www.westnet.com/~gsmith/content/linux-pdflush.htm ; that chunk of
code has cost me weeks worth of "why isn't the kernel writing things the
way I asked it?" trouble in the past.  I know the kernel developers have
been fiddling with pdflush again recently, they might have introduced
yet another bug into how it handles heavy write volume.  You can reduce
dirty_ratio and dirty_background_ratio to try and improve things, but
the congestion code will thwart any attempt to make them really low.

You might monitor what shows up as "Dirty:" in /proc/meminfo to see if
that lines up with the slow periods; example of what bad output looks
like at
http://notemagnet.blogspot.com/2008/08/linux-write-cache-mystery.html




--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Feature proposal

2010-08-25 Thread Steve Clark

On 08/25/2010 12:30 PM, Joshua D. Drake wrote:

On Wed, 2010-08-25 at 12:20 -0400, Eric Comeau wrote:



Without even changing a single line of data or SQL code!

Incredible, isn't it?



Curious- what postgresql.conf settings did you change to improve it?


The most obvious would be to turn fsync off, synchronous_commit off,
increase work_mem, increase checkpoint_timeout, increase wal_segments.

JD








Can these be changed on the fly via SET commands, or does the config file
have to be changed and Postgres stopped and restarted?

postgres 8.3.7 on freebsd.
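
For the archives, as I read the 8.3 docs: work_mem and synchronous_commit can
be changed per session with SET, while fsync, checkpoint_timeout and
checkpoint_segments live only in postgresql.conf and are picked up on a
reload rather than a full restart. A sketch:

SET work_mem = '64MB';         -- per session, takes effect immediately
SET synchronous_commit = off;  -- per session

-- for fsync, checkpoint_timeout, wal segments: edit postgresql.conf, then
-- pg_ctl reload -D /usr/local/pgsql/data   (no restart needed)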

--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] alter column to inet get error.

2010-09-03 Thread Steve Clark

Hello List,

I want to change some columns in a database
that were originally created as char varying to
inet.

When I try I get an error. Is there any way to work
around this?

See below for table definition.

Table "public.kernel_gre"
 Column  | Type  |   Modifiers
-+---+---
 interface   | character varying(15) | not null
 source_ip   | character varying(16) |
 dest_ip | character varying(16) |
 physical_ip | character varying(16) |
 status  | boolean   | default false
 physical_src_ip | character varying(16) |
 tunnel_netmask  | character varying(16) |
 key | character varying(32) |
 state   | boolean   | default false
 broadcast   | boolean   | default false

alter TABLE kernel_gre ALTER COLUMN  source_ip TYPE inet;
ERROR:  column "source_ip" cannot be cast to type "inet"

current table contents:
 interface | source_ip | dest_ip   | physical_ip | status | physical_src_ip | tunnel_netmask  | key | state | broadcast
-----------+-----------+-----------+-------------+--------+-----------------+-----------------+-----+-------+-----------
 gre2      | 10.1.1.2  | 10.1.1.1  | 1.1.1.1     | t      | 1.1.1.2         | 255.255.255.255 |     | f     | f
 gre3      | 10.1.1.4  | 10.1.1.3  | 1.1.1.3     | t      | 1.1.1.4         | 255.255.255.255 |     | f     | f
 gre4      | 10.1.1.6  | 10.1.1.5  | 1.1.1.5     | t      | 1.1.1.6         | 255.255.255.255 |     | f     | f
 gre5      | 10.1.1.8  | 10.1.1.7  | 1.1.1.7     | t      | 1.1.1.8         | 255.255.255.255 |     | f     | f
 gre6      | 10.1.1.10 | 10.1.1.9  | 1.1.1.9     | t      | 1.1.1.10        | 255.255.255.255 |     | f     | f
 gre7      | 10.1.1.12 | 10.1.1.11 | 1.1.1.11    | t      | 1.1.1.12        | 255.255.255.255 |     | f     | f
 gre8      | 10.1.1.14 | 10.1.1.13 | 1.1.1.13    | t      | 1.1.1.14        | 255.255.255.255 |     | f     | f
 gre9      | 10.1.1.16 | 10.1.1.15 | 1.1.1.15    | t      | 1.1.1.16        | 255.255.255.255 |     | f     | f
 gre10     | 10.1.1.18 | 10.1.1.17 | 1.1.1.17    | t      | 1.1.1.18        | 255.255.255.255 |     | f     | f
 gre11     | 10.1.1.20 | 10.1.1.19 | 1.1.1.19    | t      | 1.1.1.20        | 255.255.255.255 |     | f     | f
 gre12     | 10.1.1.22 | 10.1.1.21 | 1.1.1.21    | t      | 1.1.1.22        | 255.255.255.255 |     | f     | f
 gre13     | 10.1.1.24 | 10.1.1.23 | 1.1.1.23    | t      | 1.1.1.24        | 255.255.255.255 |     | f     | f
 gre14     | 10.1.1.26 | 10.1.1.25 | 1.1.1.25    | t      | 1.1.1.26        | 255.255.255.255 |     | f     | f
 gre15     | 10.1.1.28 | 10.1.1.27 | 1.1.1.27    | t      | 1.1.1.28        | 255.255.255.255 |     | f     | f
 gre16     | 10.1.1.30 | 10.1.1.29 | 1.1.1.29    | t      | 1.1.1.30        | 255.255.255.255 |     | f     | f
 gre17     | 10.1.1.32 | 10.1.1.31 | 1.1.1.31    | t      | 1.1.1.32        | 255.255.255.255 |     | f     | f
 gre18     | 10.1.1.34 | 10.1.1.33 | 1.1.1.33    | t      | 1.1.1.34        | 255.255.255.255 |     | f     | f
 gre19     | 10.1.1.36 | 10.1.1.35 | 1.1.1.35    | t      | 1.1.1.36        | 255.255.255.255 |     | f     | f
 gre20     | 10.1.1.38 | 10.1.1.37 | 1.1.1.37    | t      | 1.1.1.38        | 255.255.255.255 |     | f     | f
 gre21     | 10.1.1.40 | 10.1.1.39 | 1.1.1.39    | t      | 1.1.1.40        | 255.255.255.255 |     | f     | f
(20 rows)




Thanks in advance,
--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] alter column to inet get error.

2010-09-03 Thread Steve Clark

On 09/03/2010 09:38 AM, A. Kretschmer wrote:

In response to Steve Clark :

Hello List,

I want to change some columns in a database
that were originally created as char varying to
inet.

When I try I get an error. Is there anyway to work
around this?

See below for table definition.

 Table "public.kernel_gre"
  Column  | Type  |   Modifiers
-+---+---
  interface   | character varying(15) | not null
  source_ip   | character varying(16) |
  dest_ip | character varying(16) |
  physical_ip | character varying(16) |
  status  | boolean   | default false
  physical_src_ip | character varying(16) |
  tunnel_netmask  | character varying(16) |
  key | character varying(32) |
  state   | boolean   | default false
  broadcast   | boolean   | default false

alter TABLE kernel_gre ALTER COLUMN  source_ip TYPE inet;
ERROR:  column "source_ip" cannot be cast to type "inet"


Try this with explicet cast:

test=# create table ip (ip text);
CREATE TABLE
Time: 247.763 ms
test=*# copy ip from stdin;
Enter data to be copied followed by a newline.
End with a backslash and a period on a line by itself.

127.0.0.1
\.

Time: 5199.184 ms
test=*# alter table ip alter column ip type inet using ip::inet;
ALTER TABLE
Time: 242.569 ms
test=*# \d ip
    Table "public.ip"
 Column | Type | Modifiers
--------+------+-----------
 ip     | inet |

test=*#


Regards, Andreas


Thanks guys, that seems to do the trick. Postgresql ROCKS!!!


--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Transposing rows and columns

2010-09-17 Thread Steve Clark

On 09/16/2010 05:26 PM, Aram Fingal wrote:


On Sep 16, 2010, at 4:37 PM, John R Pierce wrote:


On 09/16/10 10:44 AM, Aram Fingal wrote:

I have thought about that but later on, when we do the full sized
experiments, there will be too many rows for Excel to handle.


if you insist on this transposing, won't that mean you'll end up with
more columns than SQL can/should handle?


No. The organization in Excel is much more efficient of the total number
of cells used but not much good for querying. When I transpose it for
use in the database (or pivot it in Excel), it actually multiplies the
number of rows. So, if the version with separate columns for each
subject has X rows and Y columns, you get X * Y rows in the database
version. For example, If there are 100 subjects, and 1000 drug/dose
combinations. Then the Excel version has 102 columns (drug, dose and a
column for each subject) and 1000 rows. The database (or pivoted)
version would have 4 columns (subject, drug, dose and response) and
100,000 rows. Excel maxes out at 65,535 rows and PostgreSQL has no limit.

I think Excel 2007 can handle more than 65,535 rows (it raised the limit to 1,048,576).


The subjects, by the way, are not people, they are cancer cell tissue
cultures in 384-well plates, handled by robots. That's how we can do so
many drug/dose combinations. We'll do even more in the future.

-Aram



--
Stephen Clark
NetWolves
Sr. Software Engineer III
Phone: 813-579-3200
Fax: 813-882-0209
Email: steve.cl...@netwolves.com
www.netwolves.com

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] default postgresql.conf

2008-08-19 Thread Steve Clark

Hello List,

When an initdb is done, where do the default postgresql.conf and pg_hba.conf
come from? Are they
copied from some template file?


Thanks,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 8.3.1 query plan

2008-08-27 Thread Steve Clark

Hello List,

I have inherited an existing application. It has a table of events and some 
events may reference an earlier
event. We have a script that saves the first N events for each
device, but we also want to save
any additional event that is referenced by one of those first N. The
following query was developed
to do this. It seemed to work OK for a while, but one time it never finished
even after running
for a day.

So I did an explain and I see it is doing a seq scan in the last sub plan.
There are about 375,000 rows
in myevents, so why isn't it using the index instead of doing a seq scan?

create unique index indx1myevents on myevents (event_log_no)
CREATE INDEX
vacuum analyze
VACUUM

explain insert into myevents select * from t_unit_event_log a where exists
   (select b.event_log_no from myevents b
where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
   and a.event_ref_log_no = b.event_log_no and a.event_log_no not in
   (select event_log_no from myevents)
   )
Seq Scan on t_unit_event_log a  (cost=0.00..25863477934.49 rows=645692 
width=145)
  Filter: (subplan)
  SubPlan
->  Result  (cost=20019.39..20027.70 rows=1 width=4)
  One-Time Filter: (($1 = 1) AND ($2 IS NOT NULL) AND (NOT (subplan)))
  ->  Index Scan using indx1myevents on myevents b  
(cost=20019.39..20027.70 rows=1 width=4)
Index Cond: ($2 = event_log_no)
  SubPlan
->  Materialize  (cost=16579.16..22379.12 rows=432196 width=4)
  ->  Seq Scan on myevents  (cost=0.00..14668.96 rows=432196 
width=4)

Why wouldn't the planner use the index instead of doing a seq scan?

Any advice would be greatly appreciated.

Thanks,
Steve
 


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3.1 query plan

2008-08-27 Thread Steve Clark

Martijn van Oosterhout wrote:

On Wed, Aug 27, 2008 at 11:00:54AM -0400, Steve Clark wrote:

So I did an explain and I see it is doing a seq scan in the last sub plan.
There are about 375,000 rows
in myevents, so why isn't it using the index instead of doing a seq scan?

create unique index indx1myevents on myevents (event_log_no)
CREATE INDEX



Are there NULLs in this subtable? NOT IN must check the entire table
for NULLs. Try changing the query to use NOT EXISTS.

Have a nice day,


Hi Martijn,

I guess I misunderstand what you are saying because this is what I get now:

srm2=# explain insert into myevents select * from t_unit_event_log a where 
exists
srm2-# (select b.event_log_no from myevents b
srm2(#  where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
srm2(# and a.event_ref_log_no = b.event_log_no and 
a.event_log_no not exists
srm2(# (select event_log_no from myevents)
srm2(# );
ERROR:  syntax error at or near "exists"
LINE 4: ...nt_ref_log_no = b.event_log_no and a.event_log_no not exists

Regards,
Steve
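
(For the archives: NOT EXISTS takes only a subquery; there is no left-hand
operand the way there is with NOT IN. What Martijn suggested would look
roughly like this, a sketch:

insert into myevents select * from t_unit_event_log a
 where a.event_status = 1
   and a.event_ref_log_no is not null
   and exists (select 1 from myevents b
                where b.event_log_no = a.event_ref_log_no)
   and not exists (select 1 from myevents c
                    where c.event_log_no = a.event_log_no);)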

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3.1 query plan

2008-08-28 Thread Steve Clark

Tom Lane wrote:

Steve Clark <[EMAIL PROTECTED]> writes:


explain insert into myevents select * from t_unit_event_log a where exists
   (select b.event_log_no from myevents b
where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
   and a.event_ref_log_no = b.event_log_no and a.event_log_no not in
   (select event_log_no from myevents)
   )



Consider testing the conditions on A at the top level, instead of
redundantly checking them inside the sub-query on B.  I'm not certain
exactly how much that's hurting you (EXPLAIN ANALYZE output would've
been more informative), but it can't be good.

regards, tom lane



Thanks for the response Tom, I am a SQL neophyte, so I'll try to rework the 
query. The explain analyze
runs forever, or longer than a day anyway.

Regards,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 8.3.1 query plan

2008-08-29 Thread Steve Clark

Tom Lane wrote:

Steve Clark <[EMAIL PROTECTED]> writes:


Tom Lane wrote:


Consider testing the conditions on A at the top level, instead of
redundantly checking them inside the sub-query on B.




Thanks for the response Tom, I am a SQL neophyte, so I'll try to
rework the query.



What I meant to suggest was just

explain insert into myevents select * from t_unit_event_log a where
a.event_status = 1 and a.event_ref_log_no IS NOT NULL
and a.event_log_no not in (select event_log_no from myevents)
and exists (select b.event_log_no from myevents b
where a.event_ref_log_no = b.event_log_no)

ie, pull everything out of the subquery that doesn't depend on B.

Although, looking at it in this form, it seems like you'd be well
advised to then replace the EXISTS with an IN:

... and a.event_ref_log_no in (select b.event_log_no from myevents b)

Although those two forms should be equivalent, reality is that current
releases of PG are generally smarter about optimizing IN than EXISTS.
(The difference should largely go away as of 8.4.)

regards, tom lane



Thanks again Tom.

It appears to me that in both cases the planner ends up doing a seq scan of the
myevents table
for each proposed new insertion from the t_unit_event_log table, which makes the
query take too long to be feasible. What I need to do is somehow immediately
eliminate a
candidate row from t_unit_event_log if it is already in the myevents table, but
I am at a loss as to how to do
that.

Regards,
Steve

srm2=# explain insert into myevents select * from t_unit_event_log a where
srm2-# a.event_status = 1 and a.event_ref_log_no IS NOT NULL
srm2-# and a.event_log_no not in (select event_log_no from myevents)
srm2-# and exists (select b.event_log_no from myevents b
srm2(# where a.event_ref_log_no = b.event_log_no);
  QUERY PLAN
-
Index Scan using indx_tuel_usn_ec_es on t_unit_event_log a  
(cost=31711.73..3597826539.34 rows=51479 width=145)
  Index Cond: (event_status = 1)
  Filter: ((event_ref_log_no IS NOT NULL) AND (subplan) AND (NOT (subplan)))
  SubPlan
->  Materialize  (cost=31711.73..42857.85 rows=830612 width=4)
  ->  Seq Scan on myevents  (cost=0.00..28041.12 rows=830612 width=4)
->  Index Scan using indx1myevents on myevents b  (cost=0.00..8.37 rows=1 
width=4)
  Index Cond: ($1 = event_log_no)
(8 rows)

srm2=# explain insert into myevents select * from t_unit_event_log a where
a.event_status = 1 and a.event_ref_log_no IS NOT NULL
and a.event_log_no not in (select event_log_no from myevents)
and a.event_ref_log_no in (select b.event_log_no from myevents b);
   QUERY PLAN   
---

Merge IN Join  (cost=40821629.90..3265326013.41 rows=39021 width=145)
  Merge Cond: (a.event_ref_log_no = b.event_log_no)
  ->  Index Scan using indx_tuel_erln on t_unit_event_log a  
(cost=31711.73..8616020218.25 rows=102959 width=145)
Filter: ((event_ref_log_no IS NOT NULL) AND (event_status = 1) AND (NOT 
(subplan)))
SubPlan
  ->  Materialize  (cost=31711.73..42857.85 rows=830612 width=4)
->  Seq Scan on myevents  (cost=0.00..28041.12 rows=830612 
width=4)
  ->  Materialize  (cost=119646.12..130028.77 rows=830612 width=4)
->  Sort  (cost=119646.12..121722.65 rows=830612 width=4)
  Sort Key: b.event_log_no
  ->  Seq Scan on myevents b  (cost=0.00..28041.12 rows=830612 
width=4)
(11 rows)


--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MERGE: performance advices

2008-09-02 Thread Steve Clark

Richard Broersma wrote:

On Tue, Sep 2, 2008 at 4:19 AM, Ivan Sergio Borgonovo
<[EMAIL PROTECTED]> wrote:




insert into d (pk, c1, c2, ...) select pk, c1, c2, c3 from s
where s.pk not in (select pk from d);



This insert statement might be faster:

INSERT INTO d (pk, c1, c2, ... )
SELECT pk, c1, c2, ...
  FROM s
LEFT JOIN d ON s.pk = d.pk
WHERE d.pk IS NULL;



Hello Richard,

Is there a way to do something similar with the following? I am an SQL noob and 
the
following takes longer to run than is reasonable, on the order of hours.

insert into myevents select * from t_unit_event_log a where exists
   (select b.event_log_no from myevents b
where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
   and a.event_ref_log_no = b.event_log_no and a.event_log_no not in
   (select event_log_no from myevents)
   )

Thanks,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MERGE: performance advices

2008-09-02 Thread Steve Clark

Richard Broersma wrote:

On Tue, Sep 2, 2008 at 8:10 AM, Steve Clark <[EMAIL PROTECTED]> wrote:



Is there a way to do something similar with the following? I am an SQL noob
and the
following takes longer to run than is reasonable, on the order of hours.

insert into myevents select * from t_unit_event_log a where exists
 (select b.event_log_no from myevents b
  where a.event_status = 1 and a.event_ref_log_no IS NOT NULL
 and a.event_ref_log_no = b.event_log_no and a.event_log_no not
in
 (select event_log_no from myevents)
 )




To start off with, this SQL statement can be refined a bit.  Many of
the sub-query WHERE clause constraints have nothing to do with the
Correlated sub-query.  The refinement would look like so:

INSERT INTO Myevents
 SELECT *
   FROM T_unit_event_log AS A
  WHERE A.event_status = 1
AND A.event_ref_log_no IS NOT NULL
AND A.event_log_no NOT IN ( SELECT event_log_no
  FROM Myevents)
AND EXISTS ( SELECT B.event_log_no
   FROM Myevents AS B
  WHERE A.event_ref_log_no = B.event_log_no );


The next step would be to rework the NOT IN sub-query into a LEFT JOIN
WHERE IS NULL;

INSERT INTO Myevents
 SELECT *
   FROM T_unit_event_log AS A
  LEFT JOIN Myevents AS C
 ON A.event_log_no = C.event_log_no
  WHERE A.event_status = 1
AND A.event_ref_log_no IS NOT NULL
AND C.event_log_no IS NULL
AND EXISTS ( SELECT B.event_log_no
   FROM Myevents AS B
  WHERE A.event_ref_log_no = B.event_log_no );

There is one possible alteration that may or may not improve
performance.  This would be to replace the EXISTS with a LEFT JOIN
WHERE IS NOT NULL;

INSERT INTO Myevents
 SELECT *
   FROM T_unit_event_log AS A
  LEFT JOIN Myevents AS C
 ON A.event_log_no = C.event_log_no
  LEFT JOIN Myevents AS B
 ON A.event_ref_log_no = B.event_log_no
  WHERE C.event_log_no IS NULL
AND B.event_log_no IS NOT NULL
AND A.event_status = 1
AND A.event_ref_log_no IS NOT NULL;




Hi Richard and thanks for the response. When I try the last two queries i get 
and error. I have listed
the results of explain on all three.

srm2=# explain
srm2-# INSERT INTO Myevents
srm2-#  SELECT *
srm2-#FROM T_unit_event_log AS A
srm2-#   WHERE A.event_status = 1
srm2-# AND A.event_ref_log_no IS NOT NULL
srm2-# AND A.event_log_no NOT IN ( SELECT event_log_no
srm2(#   FROM Myevents)
srm2-# AND EXISTS ( SELECT B.event_log_no
srm2(#FROM Myevents AS B
srm2(#   WHERE A.event_ref_log_no = B.event_log_no );
  QUERY PLAN
-

Index Scan using indx_tuel_usn_ec_es on t_unit_event_log a  
(cost=31711.73..3607445990.61 rows=51844 width=146)
  Index Cond: (event_status = 1)
  Filter: ((event_ref_log_no IS NOT NULL) AND (subplan) AND (NOT (subplan)))
  SubPlan
->  Materialize  (cost=31711.73..42857.85 rows=830612 width=4)
  ->  Seq Scan on myevents  (cost=0.00..28041.12 rows=830612 width=4)
->  Index Scan using indx1myevents on myevents b  (cost=0.00..8.37 rows=1 
width=4)
  Index Cond: ($1 = event_log_no)
(8 rows)

srm2=#
srm2=#
srm2=# explain
srm2-# INSERT INTO Myevents
srm2-#  SELECT *
srm2-#FROM T_unit_event_log AS A
srm2-#   LEFT JOIN Myevents AS C
srm2-#  ON A.event_log_no = C.event_log_no
srm2-#   WHERE A.event_status = 1
srm2-# AND A.event_ref_log_no IS NOT NULL
srm2-# AND C.event_log_no IS NULL
srm2-# AND EXISTS ( SELECT B.event_log_no
srm2(#FROM Myevents AS B
srm2(#   WHERE A.event_ref_log_no = B.event_log_no );
ERROR:  INSERT has more expressions than target columns
srm2=#
srm2=# explain
srm2-# INSERT INTO Myevents
srm2-#  SELECT *
srm2-#FROM T_unit_event_log AS A
srm2-#   LEFT JOIN Myevents AS C
srm2-#  ON A.event_log_no = C.event_log_no
srm2-#   LEFT JOIN Myevents AS B
srm2-#  ON A.event_ref_log_no = B.event_log_no
srm2-#   WHERE C.event_log_no IS NULL
srm2-# AND B.event_log_no IS NOT NULL
srm2-# AND A.event_status = 1
srm2-# AND A.event_ref_log_no IS NOT NULL;
ERROR:  INSERT has more expressions than target columns

I really appreciate your help.

Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MERGE: performance advices

2008-09-02 Thread Steve Clark

Richard Broersma wrote:

On Tue, Sep 2, 2008 at 9:47 AM, Steve Clark <[EMAIL PROTECTED]> wrote:


srm2=# explain
srm2-# INSERT INTO Myevents
srm2-#  SELECT *




ERROR:  INSERT has more expressions than target columns




srm2=# explain
srm2-# INSERT INTO Myevents
srm2-#  SELECT *
ERROR:  INSERT has more expressions than target columns



Oops, replace SELECT * with SELECT A.*.


Richard,

This is AWESOME! This now takes only seconds, where before it was taking longer
than I wanted to wait; I had let it run all night before aborting it.

Thanks so much Richard,

Regards,
Steve

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] MERGE: performance advices

2008-09-02 Thread Steve Clark

Tom Lane wrote:

"Richard Broersma" <[EMAIL PROTECTED]> writes:


There is one possible alteration that may or may not improve
performance.  This would be to replace the EXISTS with a LEFT JOIN
WHERE IS NOT NULL;



That changes the behavior, doesn't it?  Or is event_log_no a unique
key for Myevents?  I think what you want is to make the EXISTS an IN
instead.

regards, tom lane



Yes event_log_no is a unique key for myevents.
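
(Assembling the pieces for the archives: one of Richard's LEFT JOIN forms with
the SELECT A.* fix would read as below. Since event_log_no is unique in
myevents, the join cannot duplicate rows, which is the condition Tom raised.
A sketch:

INSERT INTO myevents
SELECT A.*
  FROM t_unit_event_log AS A
  LEFT JOIN myevents AS C ON A.event_log_no = C.event_log_no
  LEFT JOIN myevents AS B ON A.event_ref_log_no = B.event_log_no
 WHERE C.event_log_no IS NULL
   AND B.event_log_no IS NOT NULL
   AND A.event_status = 1
   AND A.event_ref_log_no IS NOT NULL;)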

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Backend timeout

2008-09-18 Thread Steve Clark

Scott Marlowe wrote:

On Tue, Sep 16, 2008 at 7:21 AM, Dragan Ciric <[EMAIL PROTECTED]> wrote:


Hi!

We need some help.
Sometimes we have broken connections with backend ( postgresql server ).
When this occurs, we have "idle in transaction" indication on server side. Can 
you
answer us, how long will server stay in this state and what happens with this
broken connection ( client started transaction, but can't send commit or 
rollback )?



If the client socket on the other end has simply disappeared, then the
connection will be harvested approximately net.ipv4.tcp_keepalive_time
+ net.ipv4.tcp_keepalive_probes * net.ipv4.tcp_keepalive_intvl seconds
later.  On default setups, this is something like 7200 + 90 * 9 for a
total of 8010 seconds.  i.e. just over an hour.

Not to be picky but 60 sec * 60 min = 3600 sec = 1 hour so
the above timeout would be just over 2 hours.


On later model postgresql's you can change these settings for just the
pgsql server to something more sane, like

net.ipv4.tcp_keepalive_time = 300
net.ipv4.tcp_keepalive_probes =  3
net.ipv4.tcp_keepalive_intvl = 30

which gets it down to 6.5 minutes or so before stale connections are harvested.

The advantage to using tcp_keepalive is it won't kill living but idle
connections.
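
For reference, the per-server knobs Scott alludes to are, as far as I know,
the tcp_keepalives_* settings (available since 8.1), set in postgresql.conf:

tcp_keepalives_idle = 300      # seconds of idleness before the first probe
tcp_keepalives_interval = 30   # seconds between probes
tcp_keepalives_count = 3       # unanswered probes before the connection drops

These override the kernel defaults for the server's sockets only.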




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general

