Re: [GENERAL] Performance slowing down when doing same UPDATE many times

2015-03-09 Thread Chris Mair
> Hi,
> 
>  
> does no one have an idea?
> 
> It may be a rare case doing the same UPDATE a thousand times. But I'm really 
> interested why this is not happening when doing DIFFERENT updates. And, of 
> course, if something could be done on the database side to prevent this 
> behavior in case some application developer makes the same “mistake” again.
> 
>  
> Thanks
> 
> Jan


Hi,

In an UPDATE operation PostgreSQL has to create a new tuple and mark the old
one as dead.

A long time ago, what you see here was very common: for a heavy update load,
frequent vacuuming was recommended. Then, in 8.3, a feature called HOT
(heap-only tuples) was introduced that did away with this problem.

I'm not 100% sure what happens in your case, but I think the problem is that
the updates are all in the *same* transaction. That is indeed a rare
situation.
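
A minimal sketch of that hypothesis, reusing the "test" table from your
script (assuming it exists outside the transaction): the same UPDATE
repeated in *separate* transactions should stay fast, because once the
previous transaction has committed, HOT pruning can discard the dead row
versions. Inside one long transaction every old version must be kept, so
the update chain keeps growing.

BEGIN;
UPDATE test SET flag = true WHERE id = 12345;
COMMIT;  -- committing after each UPDATE makes the dead version prunable
-- repeat: timings should stay flat, unlike in the single-transaction case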

Bye,
Chris.


> 
>  
>  
> From: Jan Strube 
> Sent: Tuesday, February 10, 2015 12:03 PM
> To: 'pgsql-general@postgresql.org'
> Subject: Performance slowing down when doing same UPDATE many times
> 
>  
> Hi,
> 
>  
> we recently found a bug in one of our applications which was doing exactly 
> the same UPDATE operation a few thousand times inside a transaction. This 
> caused the UPDATEs to become slower and slower from some milliseconds to some 
> seconds. We already fixed the application but I am wondering if this might be 
> a PostgreSQL bug, too.
> 
>  
> Here is a simple test case that performs and benchmarks 100,000 UPDATEs 
> (benchmarking only every 10,000th to reduce output):
> 
>  
> BEGIN;
> 
> CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);
> 
> INSERT INTO test (id) SELECT generate_series(1, 100000);
> 
>  
> DO $$
> 
> DECLARE
> 
>   s timestamp;
> 
>   e timestamp;
> 
> BEGIN
> 
>   FOR i IN 1..100000 LOOP
> 
> SELECT clock_timestamp() INTO s;
> 
> UPDATE test SET flag = true WHERE id = 12345;
> 
> SELECT clock_timestamp() INTO e;
> 
>  
> IF i%10000 = 0 THEN
> 
>   RAISE NOTICE '%', e-s;
> 
> END IF;
> 
>   END LOOP;
> 
> END $$;
> 
> ROLLBACK;
> 
>  
> The output looks like this:
> 
>  
> NOTICE:  00:00:00.000525
> 
> NOTICE:  00:00:00.000992
> 
> NOTICE:  00:00:00.001404
> 
> NOTICE:  00:00:00.001936
> 
> NOTICE:  00:00:00.002374
> 
> NOTICE:  00:00:00.002925
> 
> NOTICE:  00:00:00.003525
> 
> NOTICE:  00:00:00.004015
> 
> NOTICE:  00:00:00.00453
> 
> NOTICE:  00:00:00.004976
> 
>  
> The problem only occurs inside a transaction and if the same dataset is 
> updated. I'm using PostgreSQL 9.1.15.
> 
>  
> Jan
> 
>  
> 




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] unexpected (to me) sorting order

2015-04-08 Thread Chris Mair

> select * from T_SORT order by NAME ;
> 
> rollback;
>  id |        name
> ----+--------------------
>   1 | FINISH_110_150_1
>   2 | FINISH_110_200_1
>   3 | FINISH_1.10_20.0_3
>   4 | FINISH_1.10_20.0_4
>   5 | FINISH_1.10_30.0_3
>   6 | FINISH_1.10_30.0_4
>   7 | FINISH_120_150_1
>   8 | FINISH_120_200_1
> (8 rows)

Hi,

PostgreSQL relies on the OS's C library, so this kind
of ordering problem depends on the OS's idea about
collations.

I get the exact same order on 9.4.1 running on Centos 7.1:

chris=# select * from T_SORT order by NAME ;
 id |        name
----+--------------------
  1 | FINISH_110_150_1
  2 | FINISH_110_200_1
  3 | FINISH_1.10_20.0_3
  4 | FINISH_1.10_20.0_4
  5 | FINISH_1.10_30.0_3
  6 | FINISH_1.10_30.0_4
  7 | FINISH_120_150_1
  8 | FINISH_120_200_1
(8 rows)

But I get this on 9.3.5 running on OS X 10.8

chris=# select * from T_SORT order by NAME ;
 id |        name
----+--------------------
  3 | FINISH_1.10_20.0_3
  4 | FINISH_1.10_20.0_4
  5 | FINISH_1.10_30.0_3
  6 | FINISH_1.10_30.0_4
  1 | FINISH_110_150_1
  2 | FINISH_110_200_1
  7 | FINISH_120_150_1
  8 | FINISH_120_200_1

with both databases having Collate = en_US.UTF-8.

If I put your data in a file and use the command sort
from the shell I get the same effect (this is on
the Centos 7.1 box):

[chris@mercury ~]$ cat x
FINISH_1.10_20.0_3
FINISH_1.10_20.0_4
FINISH_1.10_30.0_3
FINISH_1.10_30.0_4
FINISH_110_150_1
FINISH_110_200_1
FINISH_120_150_1
FINISH_120_200_1

[chris@mercury ~]$ sort x
FINISH_110_150_1
FINISH_110_200_1
FINISH_1.10_20.0_3
FINISH_1.10_20.0_4
FINISH_1.10_30.0_3
FINISH_1.10_30.0_4
FINISH_120_150_1
FINISH_120_200_1
[chris@mercury ~]$

I don't know the rationale behind this, but it looks like the glibc
en_US.UTF-8 collation ignores punctuation such as the dot when sorting.
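
If you want the bytewise ordering (where the dot counts), you can force the
C collation per query, e.g.:

select * from T_SORT order by NAME collate "C";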


Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] DB on mSATA SSD

2015-04-23 Thread Chris Mair
> Dear Postgresql mailing list,
> 
> we use Postgresql 8.4.x on our Linux firewall distribution.
> Actually, we are moving from standard SATA disks to mSATA SSDs, and 
> we noticed that the DB, using lots of indexes, is writing a lot.
> 
> Within a few months, two test machines got broken SSDs, and we are studying 
> how to reduce the write impact for the DB.
> 
> Are there some suggestions with SSD drives?
> Is putting the DB into RAM and backing it up periodically to disk a valid 
> solution?
> 
> Or, is storing indexes on a ram drive possible?
> 
> Thank you in advance for your appreciated interest!
> 
> Best regards,
> Francesco

Hi,

I don't think that today's SSDs - and certainly not the server-grade ones -
will break due to write intensive loads.

Have a look at the SMART data for your drives, there should be a metric
called "wear level count" or similar that gives some indication. I wouldn't
be surprised if you find that your broken drives had failures not related to
wear level.

If you're on Linux use smartctl.

Also, as others have pointed out 8.4 is out of support, so consider upgrading.

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Invalid memory alloc

2015-04-23 Thread Chris Mair
> Hello, I'm processing  a 100Million row table.
> I get an error message about memory and I'd like to know what can cause this 
> issue.
> 
> ...
> psql:/home/ubuntu/create_topo.sql:12: NOTICE:  104855000 edges processed
> psql:/home/ubuntu/create_topo.sql:12: NOTICE:  104856000 edges processed
> psql:/home/ubuntu/create_topo.sql:12: NOTICE:  104857000 edges processed
> psql:/home/ubuntu/create_topo.sql:12: NOTICE:  invalid memory alloc request 
> size 1677721600
> psql:/home/ubuntu/create_topo.sql:12: NOTICE:  UPDATE public.way_noded SET 
> source = 88374866,target = 88362922 WHERE id =  142645362
>  pgr_createtopology 
> 
>  FAIL
> (1 row)
> 
> The server has 10GB of shared_buffers.
> Do you think this quantity of memory should normally be enough to 
> process the data?
> 
> Thanks
> Marc

Hi,

what version of Postgres are you using? Any extensions? I guess you're using 
PostGIS, right?

This error indicates something is trying to allocate 1600 MB of memory in the 
backend - that
should never happen, as data chunks larger than 1 GB are broken down
into smaller pieces.

I hope you're not suffering data corruption. What happens if you do
select * from public.way_noded where id = 142645362 ?

Any other hints? Log messages (from Linux or the postgres backend)?

Bye,
Chris.






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Invalid memory alloc

2015-04-24 Thread Chris Mair
> I use Postgis and PGrouting extension.
> The error come when I use a pgrouting function pgr_createtopology()

It appears pgrouting violates the 1GB per chunk limit in the postgres backend
when processing large datasets:

https://github.com/pgRouting/pgrouting/issues/291

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] psql 8 warm standby strong start, weak finish

2015-04-30 Thread Chris Mair
> psql (8.4.7)

Uhm, the last update to 8.4 was 8.4.22: besides using an unsupported
version, you're missing three and a half years of patches in 8.4.x :|

Bye,
Chris




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] 9.4.2 -> 9.4.3

2015-05-28 Thread Chris Mair
Hi,

quick question regarding

https://wiki.postgresql.org/wiki/May_2015_Fsync_Permissions_Bug

Will 9.4.3  be exactly like 9.4.2 except for the permission
bug, or will there be other fixes too?

Bye,
Chris.

PS: yes, I've read the section "Should I not apply the updates?".



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] 9.4.2 -> 9.4.3

2015-05-28 Thread Chris Mair
>> Will 9.4.3  be exactly like 9.4.2 except for the permission
>> bug, or will there be other fixes too?
> 
> There are a few more fixes available in the queue, including another
> multixact fix.

Ok,

good to know, thanks!

Bye,
Chris.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] select count(*);

2015-06-11 Thread Chris Mair
> select *;
> --
> ERROR:  SELECT * with no tables specified is not valid
> 
> select count(*);
> 
> 1
> 
> Is this a must? and why 1?

Hi,

regarding the "why 1" part:

I think that if we accept that

chris=> select 'foo';
 ?column?
----------
 foo
(1 row)

returns 1 row, then naturally

chris=> select count('foo');
 count
-------
     1
(1 row)

should give a count of 1...

The * might be a bit tricky, though,
since 'select *;' doesn't work.
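
A side note, on 9.4 or newer where an empty select list is allowed: a
SELECT without a FROM clause always produces exactly one row (here with
zero columns), and that single row is what count(*) counts:

chris=> select;
--
(1 row)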

Bye,
Chris.








-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Counting the occurences of a substring within a very large text

2015-06-24 Thread Chris Mair
> Hello,
> 
> I'd like to count the number of linebreaks within a string, 
> but I get a memory allocation error when using regexp_matches or 
> regexp_split_to_table.
> 
> Any idea for an alternative approach to this problem?
> 
> select count(*)-1 from
> (  select regexp_split_to_table(full_message,'(\n)', 'g') 
>from mytable 
>where id =-2146999703
> )foo;
> 
> ERROR:  invalid memory alloc request size 1447215584
> 
> regards,
> 
> Marc Mamin
> 

Hi,

what's the size of full_message from mytable where id =-2146999703?

Also: Postgres version? OS? any extensions installed?

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] trouble converting several serial queries into a parallel query

2015-07-05 Thread Chris Mair
> I have a very simple query that is giving me some issues due to the size of 
> the database and the number of requests I make to it in order to compile the 
> report I need:
> 
> A dumbed down version of the table and query:
> 
>   CREATE TABLE a_to_b (
>   id_a INT NOT NULL REFERENCES table_a(id), 
>   id_b INT NOT NULL REFERENCES table_b(id),
>   PRIMARY KEY (id_a, id_b)
>   );
>   SELECT id_a, id_b FROM a_2_b WHERE id_a = 1 LIMIT 5;
> 
> The problem is that the table has a few million records and I need to query 
> it 30+ times in a row.  
> 
> I'd like to improve this with a parallel search using `IN()`
> 
>   SELECT id_a, id_b FROM a_2_b WHERE id_a IN 
> (1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30);
> 
> That technique has generally fixed a lot of bottlenecks for us.
> 
> However I can't wrap my head around structuring it so that I can apply a 
> limit based on the column -- so that I only get 5 records per id_a.
> 
> The table has columns that I would use for ordering in the future, but I'm 
> fine with just getting random values right now .
> 
> Can anyone offer some suggestions?  Thanks in advance.

Hi,

I had exactly the same problem some time ago and came up with this:


select * from (
select *, rank() over (partition by id_a order by id_b) as r
from a_to_b where id_a in (1, 2)
) as subsel where r <= 5;

Note the ordering is already there (by id_b), you can pick other columns
of course).

It looks a bit complicated, though. If anybody knows a more
straightforward way I'd be glad to hear it :)
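
An aside: on 9.3 or newer, a LATERAL subquery expresses the "top 5 per key"
a bit more directly (a sketch, assuming the table above, again ordering by
id_b):

select sub.*
from unnest(array[1, 2]) as ids(id_a),
     lateral (
       select id_a, id_b
       from a_to_b
       where a_to_b.id_a = ids.id_a
       order by id_b
       limit 5
     ) as sub;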

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] regexp_matches for digit

2015-07-09 Thread Chris Mair
> Hi,
>   in oracle regexp_like(entered
> date,'[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}','i')
> 
> for postgres i have regexp_matches ,But i need how to match [:digit:] in
> postgres when we pass date..?
> any help

[:digit:] is Posix syntax, supported by Postgres.

Looks good to me:

graal=# select regexp_matches('2015-07-09',
'[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}','i');
 regexp_matches
----------------
 {2015-07-09}
(1 row)

graal=# select regexp_matches('2015-x7-09',
'[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}','i');
 regexp_matches
----------------
(0 rows)
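
A side note: Postgres regular expressions also accept the Perl-style
shorthand \d for [[:digit:]], which is less noisy:

graal=# select '2015-07-09' ~ '^\d{4}-\d{2}-\d{2}$';
 ?column?
----------
 t
(1 row)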

What do you need, exactly?

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] xmltable in postgres like in oracle..?

2015-07-11 Thread Chris Mair
> Hi,
>   is xmltable available in postgres..? If not, please give me 
> advice on how to replace xmltable in postgres..?

Hi,

PostgreSQL has a native XML type and related functions:

http://www.postgresql.org/docs/9.4/static/datatype-xml.html
http://www.postgresql.org/docs/9.4/static/functions-xml.html
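
There is no XMLTABLE at the moment (it only arrived later, in PostgreSQL 10),
but xpath() covers many of the same use cases - a small sketch:

select xpath('/rows/row/name/text()',
             '<rows><row><name>foo</name></row></rows>'::xml);
-- {foo}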

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot reach pg server using -h localhost

2015-07-21 Thread Chris Mair
> So that's all good. If I use -h it doesn't work:
> 
> psql -Umyuser -d mydb -h localhost

> listen_addresses = 'localhost' # also '*', '127.0.0.1', '::1'


Use netstat to see which IP postgres actually binds to.
OS X uses the BSD syntax:

netstat -an

For example on my Mac (not homebrew):

tcp4   0  0  127.0.0.1.5432 *.*
LISTEN
tcp6   0  0  ::1.5432   *.*
LISTEN

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot reach pg server using -h localhost

2015-07-21 Thread Chris Mair
>> Use netstat to look what IP postgres actually binds to.
> 
> Nothing unusual:
> 
> ~ netstat -an | grep 5432
> tcp4   0  0  *.5432 *.*LISTEN
> tcp6   0  0  *.5432 *.*LISTEN
> 9767b1c9fd5d8ab1 stream  0  0 9767b1ca01d63a21
> 000 /tmp/.s.PGSQL.5432
> 

and if you have it listen to 127.0.0.1, you see 127.0.0.1 in netstat?

and psql -h 127.0.0.1 still fails?

is lo0 still there (ifconfig)?

Bye,
Chris.






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot reach pg server using -h localhost

2015-07-21 Thread Chris Mair
Hi,

just to be 100% sure everything works up to the TCP layer...

0. Stop postgres.

1. Open a terminal, enter:

   nc -l 5432

   and leave that running.

2. Open another terminal and enter:

nc 127.0.0.1 5432

   follow up with some text such as "hello" and then hit CTRL-D

So... did "hello" show up in the first Terminal or not? (it should)

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot reach pg server using -h localhost

2015-07-22 Thread Chris Mair
> This is interesting, I tried nc a few times. Inconsistent results:
> 
> - most of the time the first line doesn't get through, even after a long wait
> - on sending a second line from the client both lines appear
> instantly, and it's 100% from there on
> - or I can send a line from the server. The first line from client is
> still lost but both ends are 100% after that
> - 1/10 "sessions" behave perfectly

The plot thickens...

Here, on OS X 10.10.4 it works every time and every line entered shows
up in terminal 1 instantly (as it should).

If you ping 127.0.0.1 for some time you do have 0% loss, right?

At this point I'd try to look at the OS X logs. Firewall is off right?
Try to stop other software (any virtualization software, obscure network
tools?)...

What happens if you use the IP of en0, en1, ..., enX...  instead of
127.0.0.1?

Rebooting doesn't help (Windows docet)?

When the first line doesn't get through, does it show up in the
Recv-Q Send-Q fields in netstat -an (provided the connection shows
up at all)?

When you keep a third terminal open running sudo tcpdump -n -i lo0
while you do the nc thing, do you see anything?


> BTW ^D just ends the session without sending anything, I'm hitting
> enter to send instead.

Oh yes, I wasn't clear on that...


Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot reach pg server using -h localhost

2015-07-22 Thread Chris Mair
> SOLVED!

> Glad to see my macbook is not haunted.

Calling a priest would have been my next suggestion... ;)

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Sorting CSV string and removing Duplicates

2015-07-27 Thread Chris Mair
> Hello,
> 
> I have a csv string in a text field that is unsorted and contains
> duplicates.
> Is there a simple way to remove these and sort the string.
> 
> E.g 
> 2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27 
> 
> i tried string to array and unique but that did not work...
> Any suggestions on how to do this without writing a function?
> 
> Any help is appreciated.
> 
> Thanks
> A


chris=# SELECT distinct x::int from
unnest(string_to_array('2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27',
',')) x order by x::int;

  x
----
  1
  2
  8
 16
 17
 18
 20
 22
 23
 27
(10 rows)
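
If you need the result back as a single CSV string, string_agg can
reassemble it (a sketch):

chris=# SELECT string_agg(v::text, ',' order by v) from
(select distinct x::int as v from
unnest(string_to_array('2,18,20,23,1,27,1,2,8,16,17,18,20,22,23,27',
',')) x) s;

         string_agg
----------------------------
 1,2,8,16,17,18,20,22,23,27
(1 row)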

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Synchronous replication and read consistency

2015-07-29 Thread Chris Mair
> Does sync replication guarantee that any inserted data on primary is
> immediately visible for read on standbys with no lag.

Basically yes. Of course there is *some* latency, at the very least
from the network.

If I run a process on a standby machine that displays a value every
0.1 sec and update the value on the master, I see the standby updating
with a lag that feels less than 0.2 sec or so.

You might have lag, however, in situations where there is so much
write activity on the master that the network or standby is not able to
catch up. After the write burst is over, the standby will catch up
as quickly as possible, though.

Also, you use the word "consistency", that would be something else...
Of course you always get consistent data, lag or not. This is Postgres
after all :)

Bye,
Chris.






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Synchronous replication and read consistency

2015-07-29 Thread Chris Mair
> Chris/Joshua
> 
> I would like to know more details.
> 
> As per this:
> 
> http://www.postgresql.org/docs/current/static/warm-standby.html#SYNCHRONOUS-REPLICATION
> 
> "When requesting synchronous replication, each commit of a write
> transaction will wait until confirmation is received that the commit
> has been written to the transaction log on disk of both the primary
> and standby server."
> 


Ah, sorry I misread sync replication as streaming replication...


> Does it mean that, on the standby, when PG writes the transaction log
> on the disk, it also updates the data buffers to make the transaction
> visible for all sessions.
> 
> Eg:
> 
>   On the primary
>  A big transaction committed
>   Now if I issue a select on the primary looking for the transaction I
> committed above, I will get what I want.
> Will I get the same result if instead of primary I issue the select on
> the standby.
> 
> Hope it is clear.


Synchronous replication is slower by nature. It will slow down the
master as well, because each commit has to wait for a standby to ack it.

Strictly speaking, the answer to your question is "almost": synchronous
replication waits until the commit record has been written to the standby's
transaction log, not until it has been replayed, so there is a tiny window
in which a session on the standby does not yet see the committed data.

You will see less lag than with normal streaming replication, in the
sense that the standby can no longer fall several transactions behind due
to a commit/write burst on the master. This is of course at the expense
of master performance.
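
(A later aside: PostgreSQL 9.6 added a setting that closes even that tiny
window by making each commit wait until the standby has replayed it:

SET synchronous_commit = remote_apply;

With that, a read on the standby issued after the master's commit returns
is guaranteed to see the transaction.)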


Bye,
Chris.






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stack depth

2015-08-14 Thread Chris Mair
> Hi All,
> 
> I'm tuning up my database and need to increase the max_stack_depth
> parameter in postgresql.conf.
> 
> I've edited the /etc/security/limits.conf and added
> *   softstack   12288
> *   hardstack   12288
> 
> but I noticed the comments at the top of the file -
> #This file sets the resource limits for the users logged in via PAM.
> #It does not affect resource limits of the system services.
> 
> After saving my changes and rebooting postgresql will not start, I know
> that the configuration change I have made will only affect users
> authenticated by PAM but I need to amend the stack depth for ALL system
> processes.
> 
> Can anybody point me in the right direction?
> 
> I'm also asking on the CentOS mailing list but have been pointed to the
> file above which is not the setting I require.
> 
> while logged in I can run ulimit -s 12288 but this only affects the
> current shell.

Hi,

on a CentOS 6.7 box I can confirm that adding those lines to
/etc/security/limits.conf DOES increase the limit to 12288
(after a reboot).

I don't see the PAM line, though. What version of CentOS are
you using?

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stack depth

2015-08-17 Thread Chris Mair
>>> I've edited the /etc/security/limits.conf and added
>>> *   softstack   12288
>>> *   hardstack   12288

>> on a CentOS 6.7 box I can confirm that adding those lines to
>> /etc/security/limits.conf DOES increase the limit to 12288
>> (after a reboot).
>>
>> I don't see the PAM line, though. What version of CentOS are
>> you using?

> I'm using CentOS7.1, here's the heading from my /etc/security/limit.conf
> [...]
> #It does not affect resource limits of the system services.
> well what does!??

I don't have a lot of experience with CentOS 7 yet, but so far,
whenever things that used to work in CentOS 6 didn't in CentOS 7,
the culprit for me was systemd. Systemd is the new replacement
for init and a bunch of other things.

If I google

systemd security/limits.conf

I get hits suggesting that your problem, too, is due to systemd.

For example here is a guide to change one of the ulimits in centos7:

https://ma.ttias.be/increase-open-files-limit-in-mariadb-on-centos-7-with-systemd/

I don't know if that works, but it is in line with the
"systemd broke things" idea...

Bye,
Chris.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] stack depth

2015-08-17 Thread Chris Mair
> I don't know if that works, but it is in line with the
> "systemd broke things" idea...

Rereading this, I realize it might come across as too harsh.

What I meant was "the introduction of systemd broke things".
So this wasn't meant as anti-systemd or anything. No flames
intended ;)

Bye,
Chris.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] retrieve subset of a jsonb object with a list of keys

2015-08-19 Thread Chris Mair
On 19/08/15 13:37, Tom Smith wrote:
> Hi:
> 
> I have a jsonb column with a json object like below
> {"a": 1, "b":2, "c":3}
> 
> I'd like to get a subset of the object with the key list ["a","c"]
> so it returns a json object of
> 
> {"a": 1,  "c":3}
> 
> something like
> 
> select '{"a": 1, "b":2, "c":3}'::jsob ->'["a","c"]'
> 
> what would be the most efficient (and simplest, if possible) way to get the
> subset with the key list?

Hi,

I came up with this:

select json_object_agg(key, value)
  from jsonb_each_text('{"a": 1, "b":2, "c":3}'::jsonb)
  where key in ('a', 'c');

I don't know if there is a shorter way. You might want to wrap this in a function.
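
An aside: if it is easier to name the keys you want to drop, 9.5 (in beta
at the time of writing) adds a delete operator on jsonb:

select '{"a": 1, "b":2, "c":3}'::jsonb - 'b';
-- {"a": 1, "c": 3}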

Bye,
Chris.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] master/master replication with load balancer in front

2015-08-22 Thread Chris Mair
> I see there are many different ways to build a PG cluster. What would be
> the best choice in my case?

Hi,

a few keywords in your mail hint at the fact you're using AWS?

If that's the case, you might want to look into their managed
PostgreSQL hosting: it's called Amazon RDS for PostgreSQL and
supports failover ("Multi AZ") and master-slave replication
("Read Replicas").

There's no master-master support, though. If you need that,
you might want to look into BDR, but then you need to patch...

As load balancer, PgPool-II might be what you're looking for.

Bye,
Chris.







-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rounding Float Array

2015-09-21 Thread Chris Mair
On 21/09/15 11:55, Alex Magnum wrote:
> Hello,
> 
> I have a float array holding geo location information.
> 
>  geoloc
> ---
>  {5.3443133704554,100.29457569122}
>  {5.3885574294704,100.29601335526}
>  {3.1654978750403,101.60915851593}
>  {5.3766154817748,100.31472444534}
>  {3.1545014704258,101.70036971569}
> (5 rows)
> 
> Is there an easy way to round all values to 4 decimals.
> 
> I can round the individual values and return them seperately but I need to 
> return them as an array. 
> 
>  lat|   long
> -+---
>  5.34431 | 100.29458
>  5.38856 | 100.29601
>  3.16550 | 101.60916
>  5.37662 | 100.31472
>  3.15450 | 101.70037
> (5 rows)
> 
> Any suggestion is highly appreciated.


This might work for you.

Bye,
Chris.

chris=# select * from geoloc;
  geoloc   
---
 {5.3443133704554,100.29457569122}
 {5.3885574294704,100.29601335526}
 {3.1654978750403,101.60915851593}
 {5.3766154817748,100.31472444534}
 {3.1545014704258,101.70036971569}
(5 rows)

chris=# select (select array_agg(to_char(x, '999.9999')::float) from 
unnest(geoloc) as x) from geoloc;
     array_agg     
-------------------
 {5.3443,100.2946}
 {5.3886,100.296}
 {3.1655,101.6092}
 {5.3766,100.3147}
 {3.1545,101.7004}
(5 rows)






-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Rounding Float Array

2015-09-21 Thread Chris Mair
>> Sure:
>>
>> (depesz@[local]:5960) 12:15:46 [depesz] 
>> $ select geoloc::numeric(8,4)[] from alex;
>>   geoloc
>> -------------------
>>  {5.3443,100.2946}
> 
> Nice!

Indeed :)

Bye,
Chris.





-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Selecting pairs of numbers

2015-10-05 Thread Chris Mair

I then might want to extract a list from, say, (1, 3) to (3, 2), giving:

 x | y
---+---
1 | 3
1 | 4
2 | 1
2 | 2
2 | 3
2 | 4
3 | 1
3 | 2

For the life of me, I can't figure out how to do this.


Hi,

starting from this:


chris=# select * from t order by x,y;
 x | y
---+---
 1 | 1
 1 | 2
 1 | 3
 1 | 4
 2 | 1
 2 | 2
 2 | 3
 2 | 4
 3 | 1
 3 | 2
 3 | 3
 3 | 4
(12 rows)

one trick that might help is this:

chris=# select * from t where x*1000+y >= 1003 and x*1000+y <= 3002 order by 
x,y;
 x | y
---+---
 1 | 3
 1 | 4
 2 | 1
 2 | 2
 2 | 3
 2 | 4
 3 | 1
 3 | 2
(8 rows)

Watch out if you have y values of 1000 or more, though...
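
An aside that avoids that caveat entirely: Postgres supports row-wise
comparisons, which compare lexicographically - exactly the order you want:

chris=# select * from t where (x, y) >= (1, 3) and (x, y) <= (3, 2) order by x, y;
 x | y
---+---
 1 | 3
 1 | 4
 2 | 1
 2 | 2
 2 | 3
 2 | 4
 3 | 1
 3 | 2
(8 rows)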


Bye,
chris.






--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] the installer of postgresql 9.4.1208 and 8.4-703 doesn't start

2016-07-16 Thread Chris Mair

I downloaded the installer


The strings "9.4-1208" and "8.4-703" look suspiciously like the PostgreSQL JDBC 
drivers
(drivers for the Java Programming Language). Perhaps that's not what you wanted?

To install PostgreSQL itself start here:

https://www.postgresql.org/download/

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] PG vs ElasticSearch for Logs

2016-08-19 Thread Chris Mair

On 19/08/16 10:57, Thomas Güttler wrote:



What do you think?


I store most of my logs in flat textfiles syslog style, and use grep for adhoc 
querying.

 200K rows/day, that's 1.4 million/week, 6 million/month, pretty soon you're 
talking big tables.

in fact thats several rows/second on a 24/7 basis


There is no need to store them more than 6 weeks in my current use case.



Hi,

to me this kind of data looks like something Postgres can handle with ease.

We're talking about 8.4M rows here.

Coincidentally, I was trying out the new parallel query feature in the
9.6 beta just now and decided to use your numbers as a test case :)

I can create 8.4M records having a timestamp and a random ~ 250 character string
in ~ 31 seconds:

pg96=# select now() + (sec / 200000.0 * 86400.0 || ' seconds')::interval as ts,
pg96-#        repeat(random()::text, 15) as msg
pg96-# into t1
pg96-# from generate_series(1, 6 * 7 * 200000) as sec;
SELECT 8400000
Time: 30858.274 ms

Table size is 2.4 GB.

This gives about 6 weeks. A query to scan the whole thing on the narrow column
takes ~ 400 msec, like this:

pg96=# select min(ts), max(ts) from t1;
              min              |              max
-------------------------------+-------------------------------
 2016-08-19 20:17:24.921333+00 | 2016-09-30 20:17:24.489333+00
(1 row)

Time: 409.468 ms

Even running an unanchored regular expression (!) on the wider column is doable:

pg96=# select count(*) from t1 where msg ~ '12345';
 count
-------
   955
(1 row)

Time: 3146.838 ms

If you have some filter, not everything needs to be regexped and this gets 
pretty fast:

pg96=# select count(*) from t1 where ts between '2016-08-25' and '2016-08-26' 
and msg ~ '12345';
 count
-------
    24
(1 row)

Time: 391.577 ms

All this is without indices. Your data is more structured than my test, so 
undoubtedly you will
get some gain from indices...
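
For example (a sketch; pg_trgm is a contrib extension, and its GIN opclass
speeds up LIKE and regexp matching):

create index on t1 (ts);                          -- for the time filter
create extension pg_trgm;
create index on t1 using gin (msg gin_trgm_ops);  -- for the text filter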

Here is something more analytical - basically same as the count(*) above:

pg96=# select ts::date, count(*) from t1 where msg ~ '12345' group by ts::date 
order by ts::date;
     ts     | count
------------+-------
 2016-08-19 |26
 2016-08-20 |28
 [...]
 2016-09-28 |21
 2016-09-29 |33
(42 rows)

Time: 3157.010 ms

Note, however, that I'm using 9.6 beta with the parallel query feature: the 
sequential scans with the regexp are
run in parallel on 6 workers... this gives me a speed-up of a factor 4-5 
(machine has 8 logical CPUs) and
the whole table fits in cache. For a use case as this, the parallel query 
feature in 9.6 is so good it's almost
like cheating ;)

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Query killed with Out of memory

2016-10-04 Thread Chris Mair

With a heavy query, when the number of result rows rises over 600k, the query 
hangs with out of memory.

Here is the explain analyze:
[...]
work_mem is 512MB, shared_buffers 3084MB and system RAM 10GB. Postgres version 
is 8.4.8 and for some months I cannot upgrade.

Is there a way to solve the problem?


Hi,

a few ideas:

 - what goes out of memory? The client? If you query from a programming 
language you should set the fetch/batch size
   to some value so that it won't fetch the whole 600k rows into memory (see 
the cursor sketch after this list)... for the psql client you can do
   \set FETCH_COUNT 1000

 - work_mem 512 MB is high-ish unless you have a small value for 
max_connection...

 - 8.4.8 was released in 2011, the latest 8.4 release is 8.4.22, you're missing 
lots of patches (and 8.4 was EOLed more
   than two years ago)
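
On the first point, a minimal cursor-based sketch that works from any
client (table and query names are placeholders):

begin;
declare c cursor for select * from bigtable;  -- your heavy query here
fetch 1000 from c;  -- repeat until no more rows come back
close c;
commit;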

Bye,
Chris.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Best way to return Random rows from a table with non-repeatability of rows

2016-10-29 Thread Chris Mair

Dear folks,

I have a table with thousands of rows (currently 15 thousand, but it will grow 
very fast).
I need to return from the query rows which are random and non-repeating.
 I know there is the random() function, but would like to hear from postgresql 
practitioners before embarking on that path.
Please let me know the best way to handle this type of query.

regards
Kiran


Hi,

if you're using Postgres >= 9.5 what you are looking for is TABLESAMPLE.

Syntax is here:
https://www.postgresql.org/docs/9.5/static/sql-select.html

Google tablesample+postgres to get some examples on how to use it.
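
For example (a sketch, assuming a table named mytable):

select * from mytable tablesample bernoulli (1);  -- ~1% of rows, row-level
select * from mytable tablesample system (1);     -- faster, page-level

Neither ever returns the same row twice within one query.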

Bye,
Chris.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] High load average every 105 minutes

2016-11-07 Thread Chris Mair



with AWS, your system is sharing the vendors virtual machine environment with 
other customers, and performance is pretty much out of your control.



I found no strange processes or queries while load average was at peak. IO also 
didn't change. Some more slow queries were logged, but not many.
I think sharing the VM with other customers doesn’t have much to do with this. 
I checked my other servers too, and only those that have postgresql have the 
load average issue. Generally it doesn’t impact my system much, but when there 
are slow queries, this issue just makes everything worse.


Hi,

generally speaking AWS is pretty good at isolating users (and you can request 
single tenancy machines or
dedicated machines as well if you're concerned about this).

However, if you're running t1 or t2 instances, you get the concept of CPU 
credits. When those run out, your
system is slowed down until the credits recover. I could imagine that this way 
some cyclic load patterns
emerge, if there is constant load on the machines.

Nhan, what instance types are you running?

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres 9.6.1 big slowdown by upgrading 8.4.22

2017-01-08 Thread Chris Mair

I would also like to add this:

The explain analyze show that the index on that numeric field *is not* being 
used.
I also try to set the seqscan off but that index continues not to be used.

Maybe the problem is this?

Thank you again!
/F


Hi,

maybe I missed something, but it appears to me you did NOT yet show the 
complete actual query + plan.
All we saw is the explain analyze of the call to the procedure 
function_cloud_view_orari(), but we
don't know what's happening inside the procedure.

Bye,
Chris.








--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Queries are taking way longer in 9.6 than 9.5

2017-01-17 Thread Chris Mair

Hi.


explain analyze

SELECT count(*)

[...]



 QUERY PLAN

---------------------------------------------------------------

Aggregate  (cost=1177.25..1177.26 rows=1 width=0)

[...]


I'm not seeing the "(actual ... )" part here.
The plan you show is from a plain explain, not an explain analyze...

Can you provide the explain analyze output?

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Queries are taking way longer in 9.6 than 9.5

2017-01-17 Thread Chris Mair

On 17/01/17 23:21, Tomas Vondra wrote:

Hi,

after looking at the explain plans again, I very much doubt those come from the 
same query. The reason is the 9.5 plan contains this part:

->  HashAggregate  (cost=67.54..68.12 rows=192 width=4)
  Group Key: d.vip_patient_id
  ->  Nested Loop  (cost=0.17..67.44 rows=192 width=4)
->  Index Scan using unq_user_name on tblcnaccounts ...
  Index Cond: ((user_name)::text = 'dd'::text)
->  Index Only Scan using idx_tblcndoctorpatientmap ...
  Index Cond: (master_user_id = a.master_user_id)

while the 9.6 plan does not include anything like that, i.e. there's only a 
single aggregate at the top level, without any group keys. Also, the SQL query 
you've provided does not include any GROUP BY clause, so I claim that those 
plans are from two different queries.


There's also a user_name = 'dd' that has become a user_name = 'rdoyleda' ...

Ravi, could you please send the current query you're testing and the explain 
analyze of that query on 9.5 and 9.6?

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] SSD gives disappointing speed up on OSX

2016-02-03 Thread Chris Mair

I have a program that inserts 50M records of about 30 bytes each [..]




Now I suspect the limit is OSX throttling per-process CPU.
Does this sound right?


Mmm... I don't think so.

How do you perform the inserts?

- Single inserts per transaction?
- Bundled inserts in transactions (with or without prepared statements?)
- COPY?

Also,
have you tried doing the inserts without the indexes and creating the indexes 
afterwards?

And finally you might want to try synchronous_commit = off
(http://www.postgresql.org/docs/9.4/static/runtime-config-wal.html#GUC-SYNCHRONOUS-COMMIT).
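
A quick sketch combining these (assumptions: a single-column table t(v text)
and an illustrative file path; COPY is usually the biggest win):

set synchronous_commit = off;  -- per session: trades durability for speed
copy t (v) from '/path/to/data.csv' (format csv);
-- server-side path; from psql, \copy reads a client-side file instead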

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trouble installing PostGIS on Amazon Linux server

2016-02-14 Thread Chris Mair



Error: Package: gdal-libs-1.9.2-6.rhel6.x86_64 (pgdg93)
Requires: libpoppler.so.5()(64bit)
  You could try using --skip-broken to work around the problem
  You could try running: rpm -Va --nofiles --nodigest


Hi,

what happens if you try to install libpoppler (it is in the standard Amazon 
repo)?

yum install poppler poppler-devel poppler-cpp poppler-cpp-devel

Bye,
Chris.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Trouble installing PostGIS on Amazon Linux server

2016-02-15 Thread Chris Mair

I was told that "The amazon linux is compatible with Centos 6.x".  Does that 
correspond to RHEL 6? Is there a command I could use to find out?


Not quite.

Amazon Linux is RHEL/CentOS/Fedora derived, but it's not based on exactly 
RHEL/CentOS 6 or
exactly RHEL/CentOS 7.

This is its current libpoppler:

$ rpm -qf /usr/lib64/libpoppler.so.37
poppler-0.22.5-6.15.amzn1.x86_64

I don't know if this is an option for you, but CentOS 6 is available as an AMI 
on AWS,
so you could directly run that. Also, AWS offers databases as a service and 
they do have
PostgreSQL + PostGIS in their offering (see "RDS").

Bye,
Chris.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] decoding BLOB's

2016-02-22 Thread Chris Mair

Can someone point me in the right direction per how I would remove the
first 25 bytes and the last 2 bytes from a bytea column?


http://www.postgresql.org/docs/9.3/static/functions-binarystring.html

Substring might do it for you.


won't doing it in SQL still result in a BYTEA result which will be wrapped when 
dumped via COPY ?

instead, I think this needs to be done externally to sql, like by piping the 
resulting file through something like ...

tail -c +25 infile.dat | head -c -2 > outfile.dat

or doing something in perl or whatever.


Hi,

I think it's easier if you go via base64 and then use the shell command 
"base64" to
decode it back to binary.

I start from this sample blob with 5 bytes (values 0, 1, 2, 4, 5), just a sample
(I actually failed to count to 4 correctly ;):

chris=# select * from t;
 id |     blob
----+--------------
  1 | \x0001020405
(1 row)

In the shell now I can do:

chris$ psql -A -t -c "select encode(blob, 'base64') from t where id = 1"
AAECBAU=

To save this in binary I add base64 -d (Linux) or base64 -D (OS X, FreeBSD?):

chris$ psql -A -t -c "select encode(blob, 'base64') from t where id = 1" | base64 
-D > blob.dat

blob.dat is now the 5 bytes, nothing else:

chris$ od -tx1 blob.dat
0000000    00  01  02  04  05
0000005
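
As an aside, the trimming itself (first 25 bytes, last 2) can be done in
SQL before encoding; bytea substring is 1-indexed, so (a sketch):

select substring(blob from 26 for octet_length(blob) - 27)
from t where id = 1;  -- assumes the value is at least 27 bytes long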

Bye,
Chris.





--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Unexpected result using floor() function

2016-03-14 Thread Chris Mair

Hi,

maybe this is a late reply, but also note that 4.725 alone already cannot be
represented in floating point exactly (and this has nothing to do with 
Postgres).

Just sum it up 100 times to "see" the round off error becoming visible:

chris=# select sum(4.725::double precision) from generate_series(1,100);
    sum
-----------
 472.5001
(1 row)

vs.

chris=# select sum(4.725::numeric) from generate_series(1,100);
   sum
---------
 472.500
(1 row)

Bye,
Chris.




--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Chris Mair

Hi!

I've found a problem in either PostgreSQL 9.6.6 or oracle_fdw 2.0.0.

Background:

I was testing a setup on a current CentOS 7 system with PostgreSQL 9.6.6
installed from the PGDG repositories and oracle_fdw 2.0.0 installed via pgxn
install. Everything went absolutely fine until I enabled the parallel query
feature (I set max_worker_processes and max_parallel_workers_per_gather both to
8).

Bug:

Whenever a session has performed a query on a foreign table, any subsequent
query on a local table big enough to use the parallel query feature exits with
an error:

ERROR:  invalid cache ID: 41
CONTEXT:  parallel worker

Whenever a session has NOT performed a query on a foreign table, parallel
queries are executed all OK. No problem.

The problem is not related to what kind of foreign query was executed.

Reproducability is 100%!

Some debugging work:

I've seen this was reported some time ago here:

https://www.postgresql.org/message-id/29113.1488910563%40sss.pgh.pa.us

I've followed Tom Lanes advice and did - on the same machine - a debug enabled
build of the 9.6.6 source code with

./configure --enable-cassert --enable-debug CFLAGS="-ggdb -Og -g3 
-fno-omit-frame-pointer" --prefix=/home/centos/pgbuild

and changed the ERROR to PANIC in src/backend/utils/cache/syscache.c.

I then redid the pgxn install oracle_fdw with paths pointing to the debug
build.  The problem was easily reproduced: a count(*) on a biggish table with 4
workers produced 4 core dumps!

Below is the back trace from gdb from each of the cores.

Let me know if there's anything else I can do or check!

Thanks :)
Chris.



[centos@asia data]$ ls -l core*
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30430
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30431
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30432
-rw---. 1 centos centos 152059904 Nov 19 17:43 core.30433

[centos@asia data]$ gdb ../bin/postgres core.30430
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /home/centos/pgbuild/bin/postgres...done.
[New LWP 30430]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/libthread_db.so.1".
Core was generated by `postgres: bgworker: parallel worke'.
Program terminated with signal 6, Aborted.
#0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
Missing separate debuginfos, use: debuginfo-install glibc-2.17-196.el7.x86_64 
libaio-0.3.109-13.el7.x86_64
(gdb) bt
#0  0x7f16a0f4d1f7 in raise () from /lib64/libc.so.6
#1  0x7f16a0f4e8e8 in abort () from /lib64/libc.so.6
#2  0x008094b4 in errfinish (dummy=dummy@entry=0) at elog.c:557
#3  0x0080aea2 in elog_finish (elevel=elevel@entry=22, fmt=fmt@entry=0x9d9965 
"invalid cache ID: %d") at elog.c:1378
#4  0x007ffd82 in SearchSysCacheList (cacheId=cacheId@entry=41, 
nkeys=nkeys@entry=2, key1=key1@entry=139734905138463, key2=, 
key3=key3@entry=0, key4=key4@entry=0) at syscache.c:1210
#5  0x7f169161a59f in _PG_init () at oracle_fdw.c:709
#6  0x0080c476 in internal_load_library (libname=, 
libname@entry=0x7f16a1bfdde8 ) at dfmgr.c:276
#7  0x0080c708 in RestoreLibraryState (start_address=0x7f16a1bfdde8 
) at dfmgr.c:741
#8  0x004e72cf in ParallelWorkerMain (main_arg=) at 
parallel.c:1069
#9  0x0069b0bf in StartBackgroundWorker () at bgworker.c:742
#10 0x006a70bd in do_start_bgworker (rw=rw@entry=0x1f3b8d0) at 
postmaster.c:5612
#11 0x006a722a in maybe_start_bgworkers () at postmaster.c:5809
#12 0x006a7cf3 in sigusr1_handler (postgres_signal_arg=) 
at postmaster.c:4990
#13 <signal handler called>
#14 0x7f16a1007783 in __select_nocancel () from /lib64/libc.so.6
#15 0x006a80e5 in ServerLoop () at postmaster.c:1683
#16 0x006a93c0 in PostmasterMain (argc=argc@entry=1, 
argv=argv@entry=0x1f13cc0) at postmaster.c:1327
#17 0x00626905 in main (argc=1, argv=0x1f13cc0) at main.c:228
(gdb) quit

[centos@asia data]$ gdb ../bin/postgres core.30431
GNU gdb (GDB) Red Hat Enterprise Linux 7.6.1-100.el7
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-redhat-linux-gnu".
For bug reporting instructions, please see:
...
Reading symbols from /home/centos/pgbuild/bin/postgre

Re: [GENERAL] parallel query in 9.6.6 + oracle_fdw -> ERROR: invalid cache ID: 41

2017-11-19 Thread Chris Mair

Hi,


So I'd call this an oracle_fdw bug.  It needs to postpone what it's
doing here to the first normal FDW function call in a session.


Thanks a lot for looking so quickly into this!

I've opened an issue with oracle_fdw:

https://github.com/laurenz/oracle_fdw/issues/215

Thanks,
Chris.



--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] PostgreSQL as a backend for Wizcon 9.2

2007-10-16 Thread Chris Mair

Hi,

I've been asked whether it's possible to use PostgreSQL as a backend for
Wizcon 9.2. I don't know anything about Wizcon, but I'd like to give an
answer (for PostgreSQL advocacy reasons).

So I'm forwarding the question to this list.

Any clues?

Bye :)
Chris.




---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


Re: [GENERAL] PostgreSQL as a backend for Wizcon 9.2

2007-10-16 Thread Chris Mair



I've been asked whether it's possible to use PostgreSQL as a backend for
Wizcon 9.2. I don't know anything about Wizcon, but I'd like to give an
answer (for PostgreSQL advocacy reasons).

So I'm forwarding the question to this list.


Not one.  I'd ask the folks who write the software is they support
PostgreSQL or not.


Ok,
I've done that just now.

Still, if somebody on the list has first-hand experience with this
(negative or positive), I'd like to know.

Thanks & Bye,
Chris.


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

  http://www.postgresql.org/docs/faq


Re: [GENERAL] Out of Memory

2015-01-16 Thread Chris Mair

>> https://drive.google.com/file/d/0ByfjZX4TabhocUg2MFJ6a21qS2M/view?usp=sharing

> Note: due an error in dump script, if you are in Linux/Unix environment, use 
> this command for uncompressing the file:
> 
> bzip2 -d -c comment_test.dump.bz2 |sed -e '12d' > comment_test.dump

Hi,

I've played a bit with this. Here's what I see.

Let me give a bit of info:

enrico=# \d stage.fbcomment
   Table "stage.fbcomment"
  Column  | Type  | Modifiers
----------+-------+-----------
 field_id | jsonb |
Indexes:
    "comment_test_idx" btree ((field_id ->> 'pageId'::text))

enrico=# select pg_total_relation_size('stage.fbcomment');
 pg_total_relation_size
------------------------
               83755008
(1 row)


enrico=# select count(*) from stage.fbcomment;
 count
-------
 23431
(1 row)

enrico=# select sum(jsonb_array_length(field_id ->'comment')) from 
stage.fbcomment;
  sum
--------
 541454
(1 row)


-> to keep in mind: there are 23k rows, but if you unnest the 'comment' array 
there are 541k rows.

The following two queries are just fine. I see the postgres worker reaching a 
"RES" size of 108MB
for both.

enrico=# explain analyze
enrico-# SELECT substring((field_id ->'comment')::text,1,1)
enrico-# FROM stage.fbcomment;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Seq Scan on fbcomment  (cost=0.00..3012.62 rows=23431 width=828) (actual 
time=0.147..2749.940 rows=23431 loops=1)
 Planning time: 0.046 ms
 Execution time: 2756.881 ms
(3 rows)

Time: 2757.398 ms
enrico=#
enrico=# explain analyze
enrico-# SELECT jsonb_array_elements(field_id ->'comment')->>'id'
enrico-# FROM stage.fbcomment;
  QUERY PLAN
------------------------------------------------------------------------------------------------------------------
 Seq Scan on fbcomment  (cost=0.00..14552.39 rows=2343100 width=828) (actual 
time=0.067..885.041 rows=541454 loops=1)
 Planning time: 0.053 ms
 Execution time: 978.161 ms
(3 rows)

Time: 978.705 ms

Interestingly, if you combine these, it quickly blows up! The following query 
with a limit 1000 already
has a RES of well over 1GB. With larger limits it quickly thrashes my machine.


enrico=# explain analyze
SELECT substring((field_id ->'comment')::text,1,1),
   jsonb_array_elements(field_id ->'comment')->>'id'
FROM stage.fbcomment limit 1000;
QUERY PLAN
---------------------------------------------------------------------------------------------------------------
 Limit  (cost=0.00..6.31 rows=1000 width=828) (actual time=0.200..2419.749 
rows=1000 loops=1)
   ->  Seq Scan on fbcomment  (cost=0.00..14786.70 rows=2343100 width=828) 
(actual time=0.198..2418.931 rows=1000 loops=1)
 Planning time: 0.059 ms
 Execution time: 2659.065 ms
(4 rows)

Time: 2659.708 ms

I think this triggers some code path that is not really optimal for memory 
usage for some reason. I don't
know if there is something interesting to fix here or not. I guess other people 
will quickly see what happens
here?

In any case the solution for you might be to unnest the comments in this table 
and split the '{' vs '[' before doing
your processing. I.e. create the intermediate table with the 541454 comments 
and then throw your queries against that
table. This should also use way less processing time than the hack with the '[' 
vs '{' cases.
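
A sketch of that intermediate table (names assumed):

create table stage.fbcomment_flat as
select field_id ->> 'pageId' as page_id,
       jsonb_array_elements(field_id -> 'comment') as comment
from stage.fbcomment;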

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] rollback in C functions

2015-02-19 Thread Chris Mair
> The function is to execute updates on each ID assigning the value, but if
> one of these operations fails (does not meet certain criteria)
> inside the function I would like to roll back and leave everything
> untouched, in case other IDs were already updated previously,
> and come back to the caller and infor

Hi,

I think you want ereport(), here is an example:
http://www.postgresql.org/docs/9.3/static/xfunc-c.html
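
Raising an error via ereport(ERROR, ...) aborts the whole statement, so any
rows the function already updated are rolled back automatically. The same
pattern in plpgsql, as a sketch (criteria_ok is a placeholder for your check):

if not criteria_ok then
    -- any error raised inside the function undoes all of its updates
    raise exception 'criteria not met, rolling back all updates';
end if;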

Bye,
Chris.




-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] expensive function in select list vs limit clause

2017-04-05 Thread Chris Mair

Hi,

I've found a (simple) situation where the planner does something I don't 
understand.

Below is a complete test case followed by output.

From the timings it appears that in the second explain analyze query a function
call in the select list (expensive()) is evaluated in the sequential scan node
*for each* row in big, despite the use of limit.

I would have expected expensive() to be evaluated only for the ten rows
in the result set. Hence the second explain analyze query shouldn't be more
expensive than the first one.

My trust in Postgres' planner goes so far as I feel the planner is right and 
there
must be a reason for this :)

Could someone help me understand this behaviour?

Thanks & Bye,
Chris.



-- ***

select version();

-- setup: create a time wasting function and a table with 1M rows

create function expensive() returns double precision as
$$
begin
for i in 1 .. 15000 loop
end loop;
return random();
end;
$$ language 'plpgsql';

create table big as select random() as r from generate_series(1, 1000000);
analyze big;

\timing on

-- benchmark expensive(): one call to expensive takes about 0.3 ms => OK

do $$ begin for i in 1 .. 1000 loop perform expensive(); end loop; end; $$;

-- find the ten smallest values in big: takes ~ 0.18s => OK

explain analyze select r from big order by r offset 0 limit 10;

-- now do the same, but add an expensive() column to the result:
-- takes ~ 29s => WHY?

explain analyze select r, expensive() from big order by r offset 0 limit 10;

-- clean up :)

\timing off
drop function expensive();
drop table big;

-- ***

   version
--
 PostgreSQL 9.5.4 on x86_64-apple-darwin14.5.0, compiled by Apple LLVM version 
7.0.2 (clang-700.1.81), 64-bit
(1 row)

Time: 0.386 ms
CREATE FUNCTION
Time: 0.456 ms
SELECT 1000000
Time: 466.814 ms
ANALYZE
Time: 73.770 ms
Timing is on.
DO
Time: 33.922 ms
QUERY PLAN
--
 Limit  (cost=36034.64..36034.67 rows=10 width=8) (actual time=182.361..182.363 
rows=10 loops=1)
   ->  Sort  (cost=36034.64..38534.64 rows=1000000 width=8) (actual 
time=182.360..182.361 rows=10 loops=1)
 Sort Key: r
 Sort Method: top-N heapsort  Memory: 25kB
 ->  Seq Scan on big  (cost=0.00..14425.00 rows=1000000 width=8) 
(actual time=0.022..99.777 rows=1000000 loops=1)
 Planning time: 0.070 ms
 Execution time: 182.377 ms
(7 rows)

Time: 182.689 ms
  QUERY PLAN
--
 Limit  (cost=286034.64..286034.67 rows=10 width=8) (actual 
time=28932.311..28932.314 rows=10 loops=1)
   ->  Sort  (cost=286034.64..288534.64 rows=1000000 width=8) (actual 
time=28932.309..28932.310 rows=10 loops=1)
 Sort Key: r
 Sort Method: top-N heapsort  Memory: 25kB
 ->  Seq Scan on big  (cost=0.00..264425.00 rows=1000000 width=8) 
(actual time=0.062..28822.520 rows=1000000 loops=1)
 Planning time: 0.038 ms
 Execution time: 28932.339 ms
(7 rows)

Time: 28932.908 ms
Timing is off.
DROP FUNCTION
DROP TABLE

-- ***





Re: [GENERAL] expensive function in select list vs limit clause

2017-04-05 Thread Chris Mair



https://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=9118d03a8


Hi,

thanks!

I've just tested with 9.6 and the test runs fast with or without expensive().

So the above patch does indeed improve this case a lot!

Bye,
Chris.







Re: [GENERAL] expensive function in select list vs limit clause

2017-04-05 Thread Chris Mair




ORDER BY can only be processed after all rows have been fetched, this
includes the expensive result column.

You can easily avoid that by applying the LIMIT first:

  SELECT r, expensive()
  FROM (SELECT r
        FROM big
        ORDER BY r
        LIMIT 10
       ) AS sub;  -- "inner" is a reserved word, so the alias needs another name

I don't know how hard it would be to only fetch the necessary columns before
the ORDER BY and fetch the others after the LIMIT has been applied, but it
is probably nontrivial and would require processing time for *everybody*
who runs a query with ORDER BY to solve a rare problem that can easily be
worked around.


Hi,

Tom Lane just pointed out that 9.6 is able to optimise this (at least
the synthetic example).

Anyway, my real problem could be beautifully improved by the subselect trick!

Thanks a lot!

Bye,
Chris.








Re: [GENERAL] keeping WAL after dropping replication slots

2017-04-06 Thread Chris Mair

Postgres version?

9.6.1


Have you considered upgrading to 9.6.2?
There were some fixes, including WAL related:

https://www.postgresql.org/docs/9.6/static/release-9-6-2.html

Not exactly regarding what you see, though...

Bye,
Chris.







Re: [GENERAL] Service outage: each postgres process use the exact amount of the configured work_mem

2017-04-14 Thread Chris Mair

Sorry,
  my mistake (I'm a bit nervous...)

that's not work_mem, but shared_buffers


Hi.

The resident set size of the worker processes includes all shared memory blocks 
they touched.
So it's not that each of those workers allocated their own 3GB...

(in Linux at least)

Bye,
Chris.








Re: [GENERAL] Service outage: each postgres process use the exact amount of the configured work_mem

2017-04-14 Thread Chris Mair

you should increase shared_memory to 40GB. General philosophy is to allocate 80% 
of system memory to shared_memory


Uhm...

80% is too much, likely:

https://www.postgresql.org/docs/9.5/static/runtime-config-resource.html

Bye,
Chris.






Re: [GENERAL] Migration Query

2017-05-08 Thread Chris Mair

Does anyone have a list of bottlenecks and workarounds for migrating data
from Oracle to PostgreSQL, i.e. what can we migrate from an Oracle database
to PostgreSQL and what can't we?

Which is the best tool for migration from Oracle to PostgreSQL?


Hi,

I like this tool and have used it successfully in the past:

http://ora2pg.darold.net/

Bye,
Chris.






Re: [GENERAL] Oracle to PostgreSQL Migration.

2017-05-29 Thread Chris Mair

I am facing below issue while running below command.

*C:\ora2pg>ora2pg -c ora2pg.conf*
DBI connect('host=;sid=;port=',',...) failed: 
ORA-01017: invalid username/p
assword; logon denied (DBD ERROR: OCISessionBegin) at 
C:/Strawberry/perl/site/lib/Ora2Pg.pm line 1376.
FATAL: 1017 ... ORA-01017: invalid username/password; logon denied (DBD ERROR: 
OCISessionBegin)
Aborting export...

even though I am able to log in to Oracle with the same user and password


If you can login from the same host using sqlplus like this:

sqlplus myuser/mypass@myhost:1521/mysid

then you likely can have ora2pg connect too. Just make sure that in ora2pg.conf 
you put the lines:

ORACLE_DSN  dbi:Oracle:host=myhost;sid=mysid
ORACLE_USER myuser
ORACLE_PWD  mypass

Bye,
Chris.






Re: [GENERAL] Oracle to PostgreSQL Migration.

2017-05-29 Thread Chris Mair

C:\ora2pg>ora2pg -c ora2pg.conf
[>] 2/2 tables (100.0%) end of scanning.
[>] 0/2 tables (0.0%) end of scanning.
[>] 2/2 tables (100.0%) end of table export.


Looks good so far.
This means you could connect to Oracle DB now.



DBD::Pg::st execute failed: ERROR:  relation "mytab" does not exist


This is coming from the Postgres side.

In ora2pg.conf go to the section

OUTPUT SECTION (Control output to file or PostgreSQL database)

I suggest you comment out (prefix with #) the part

#PG_DSN dbi:Pg:dbname=test_db;host=localhost;port=5432
#PG_USERtest
#PG_PWD test

and just have ora2pg write its output to a file by setting OUTPUT like this:

OUTPUT  output.sql

This way you have your output for Postgres in a file that you can check out
and try importing step by step. I guess you are running this on some test
data, so the file will be small enough to open in an editor.
You can paste it piece by piece into a Postgres prompt (psql or pgadmin or
whatever you're using).

You can then see at what point you get an error (and hopefully understand
what's happening).

Bye,
Chris.






Re: [GENERAL] Oracle to PostgreSQL Migration.

2017-05-29 Thread Chris Mair

[>] 1/1 rows (100.0%) Table MYTAB (0 recs/sec)
[>] 1/2 total rows (50.0%) - (4 sec., avg: 0 recs/sec).
Out of memory!] 1/2 rows (50.0%) on total estimated data (4 sec., 
avg: 0 recs/sec)
Issuing rollback() due to DESTROY without explicit disconnect() of 
DBD::Oracle::db handle (DESCRIPTION=(ADDRESS=(PO
RT=1521)(PROTOCOL=tcp)(HOST=))(CONNECT_DATA=(SID=))) at 
C:/Strawberry/perl/vendor/
lib/DBD/Oracle.pm line 348.


So it shows the data for mytab as 100% entered, but the total is at 50%; and 
when I checked on the PostgreSQL server, there is data in only one table.


Did you notice the "Out of memory!" you got there?

I guess that's the problem now...

Bye,
Chris.










Re: [GENERAL] Major Performance decrease after some hours

2006-10-01 Thread Chris Mair
Hi,

a few random question...

> > i have a Tomcat application with PostgreSQL 8.1.4 running which
> > performs about 1 inserts/deletes every 2-4 minutes and updates on
> > a database and after some hours of loadtesting the top output says
> > 0.0% idle, 6-7% system load, load average 32, 31, 28

Who is responsible for these high load values?
Do you see many postmaster processes in the top output?


>  and there are
> > many exceptions at statement execution like:
> > An I/O error occured while sending to the backend.

Is this from the application?
How many connections does the application open in parallel?
Using JDBC, I guess?


> > - Each Cluster has 4 drbd Devices, one for the PostgreSQL data
> > - Two of these clusters are using the same PostgreSQL installation to
> > share the data, the database can be moved from one cluster to the
> > other in case of failure

Just to be 100% sure: just one server at a time runs PostgreSQL on
that shared data disk, right?


Bye,
Chris.




Re: [GENERAL] storing transactions

2006-10-05 Thread Chris Mair

> I’ve been studying the whole evening and don’t seem to find an answer:
> I want to “store” transactions on the server, like views, or, (sorry) as in 
> M$ SQL Server CREATE OR REPLACE TRANSACTION xyz() 

Of course, it's possible.
What you need is
CREATE OR REPLACE FUNCTION xyz() RETURNS trigger AS ...
then
CREATE TRIGGER ... EXECUTE PROCEDURE xyz();


Look here for an example:
http://www.postgresql.org/docs/8.1/static/plpgsql-trigger.html
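
Fleshed out, a minimal sketch (mytable and its updated_at column are made up
for illustration):

CREATE OR REPLACE FUNCTION xyz() RETURNS trigger AS $$
BEGIN
    NEW.updated_at := now();   -- whatever work the "transaction" should do
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER xyz_trigger
    BEFORE INSERT OR UPDATE ON mytable
    FOR EACH ROW EXECUTE PROCEDURE xyz();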

Bye,
Chris.





Re: [GENERAL] more anti-postgresql FUD

2006-10-11 Thread Chris Mair

> Do a simple test to see my point:
> 
> 1. create table test (id int4, aaa int4, primary key (id));
> 2. insert into test values (0,1);
> 3. Execute "update test set aaa=1 where id=0;" in an endless loop
> 
> I just did the test on PostgreSQL 7.4.12 and MySQL 5.0.22 (MyISAM,
> sorry had no configured InnoDB). Ubuntu 6.0.6, AMD64, 2GB, default
> database settings.
> 
> MySQL performs very well, approximately 15000-2 updates per second
> with no degradation of performance.
> 
> PostgreSQL does approximately 1600 records per second for the first
> 10k, then 200 rps for the first 100k records, and then gets slower and
> slower, degrading to 10-20 rps(!!!) when reaching 300k.

Hi,
it would be cool if you could at least:

 - bundle your updates into transactions of, say, 1000 updates at a time,
   i.e. wrap a BEGIN; END; around 1000 of them (see the sketch after this list)
 - run PostgreSQL with fsync off, since you're using MyISAM
 - run at least PostgreSQL 8, since you're running MySQL 5
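
For the first item, something like this (a sketch):

BEGIN;
UPDATE test SET aaa=1 WHERE id=0;
-- ... 998 more of the same ...
UPDATE test SET aaa=1 WHERE id=0;
END;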

I'd bet MySQL would still be faster on such an artificial, single user
test, but not *that much* faster.

If you don't want to install 8.0, could you maybe at least do the first
two items (shouldn't be a lot of work)...?

Which client are you using? Just mysql/psql or some API?

Bye, Chris.













Re: [GENERAL] more anti-postgresql FUD

2006-10-14 Thread Chris Mair

> > it would be cool if you could at least:
> >
> >  - bundle your updates into transactions of, say, 1000 updates at a time
> >i.e. wrap a BEGIN; END; around a 1000 of them
> >  - run postgresql with fsync off, since you're using MyISAM
> >  - run PostgreSQL at least 8, since you're running MySQL 5
> >
> > I'd bet MySQL would still be faster on such an artificial, single user
> > test, but not *that much* faster.
> 
> I'm quite sure the results will be very close to what I get before even
> if I do all of the above. My post was not about MySQL vs PostgreSQL. It
> was about very fast performance degradation of PostgreSQL in case of
> large number of updates provided vacuum is not used.
> 
> > If you don't want to install 8.0, could you maybe at least do the first
> > two items (shouldn't be a lot of work)...?
> >
> > Which client are you using? Just mysql/psql or some API?
> 
> C API


Ok,
I did some tests at last on this using the above 3 suggestions.
I ran on 8.1.4 on Linux with fsync=off and did 300k updates bundled
into 100 updates / 1 transaction (100 turned out to be a sweeter
spot than 1000). 

The box was comparable to yours, I think: 1xOpteron 2.2GHz, 2xSATA
RAID0 (yes, I know...), 1GB RAM

Details and results are here:
http://www.1006.org/misc/20061014_pgupdates_bench/

The interesting part is the graph that shows updates / sec real time
vs. running total of updates:
http://www.1006.org/misc/20061014_pgupdates_bench/results.png

Let's start with the red crosses: that's without vacuum, and yes,
you're right: PG's performance degrades.

But it doesn't degrade quite as badly as you mentioned
(you mentioned 1600 u/s for the first 10k, then 200 u/s for
the first 100k). At 100k I'm still at 2700 u/s, down
from ~8000 u/s. Only after ~140k updates does my line drop
more quickly; I obviously bump into some limit given by my
setup there. The whole thing is totally CPU-bound, by the way.

Ok.
So, this a very bizarre load for PostgreSQL, especially
without any vacuum.

Let's add some vacuum: every 50k (green x) or even every 10k
(blue *) updates - which is a very reasonable thing to do for
this type of load.

With vacuum, I get a stable performance all the way up to
300k updates. Rates are 4700 u/s or even 8500 u/s.

Note the curves show no drops when vacuum is active.

Out of curiosity I did a run having autovacuum visit the db
every 30 seconds (purple squares): even without any special
effort to find good vacuum spots, I can get a good 3300
updates/sec all the way up to 300k updates!

I'd dare to say that if you just ran ZABBIX on 8.1 with
autovacuum on with a shortish interval (30 sec?) you'd
get rid of your performance problems. Time to update
the documentation after all? ;)

Bye,
Chris.



-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] a math function with error handling

2006-10-14 Thread Chris Mair

> If I send something like 25 / '5' I get the result, 5
> but if I send 25 / '0' I get null (division_by_zero)
> or when I send 25 / 'textcrap' I get null too

You might want to look at pl/pgsql exception handlers:
http://www.postgresql.org/docs/8.1/static/plpgsql-control-structures.html#PLPGSQL-ERROR-TRAPPING

Trapping division by zero is given as an example there...
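
For instance, a function along those lines (a sketch; the condition names
are from the errcodes appendix of the docs):

CREATE OR REPLACE FUNCTION safe_div(a numeric, b text) RETURNS numeric AS $$
BEGIN
    RETURN a / b::numeric;
EXCEPTION
    WHEN division_by_zero THEN RETURN NULL;            -- the 25 / '0' case
    WHEN invalid_text_representation THEN RETURN NULL; -- the 'textcrap' case
END;
$$ LANGUAGE plpgsql;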


Bye, Chris.


-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] Triggers

2006-10-15 Thread Chris Mair

> Why you should write a function first and then the trigger, which must
> call that function?
> 
> What are the advantages/disadvantages of that? Where can I find more
> information?

The PG way seems very natural to me:
you can write functions that do something and then have many triggers
call that same function.

Also there's not just PL/PGSQL: you might want
to define a function in C or Perl and then have a trigger call it.

Bye, Chris.


-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] Overload after some minutes, please help!

2006-10-21 Thread Chris Mair

> its just a vacuumdb --all. We already learned that full vacuums are
> evil because the database was carrupted after some time.

Wait a sec...
vacuum full may be evil in the 'locks stuff and takes long to run' sense,
but it should definitely NOT corrupt your database.

Are you sure there's no issues on the hardware / system administration
side of things?

Bye, Chris.









Re: [GENERAL] deadlock detected messages

2006-10-29 Thread Chris Mair

> I have a process that is hitting deadlocks.  The message I get talks about
> relation and database numbers, not names.  How do I map the numbers back 
> into names?

Hi,
you need to query the catalog:

  -- relations like tables:
  select oid, relname from pg_class;
  -- databases:
  select oid, datname from pg_database;
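
Or look up a single relation directly (16385 is just an example oid):

  select 16385::regclass;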

Bye,
Chris.



Re: [GENERAL] Why isn't it allowed to create an index in a schema

2006-11-12 Thread Chris Mair
> I'd like to know if there's any reasoning for not allowing creating an index
> inside the same schema where the table is.  For example, if I have a
> multi-company database where each company has its own schema and its employees
> table, shouldn't I have a different index for each of those?  What if I have
> some slightly different columns on some of these tables? 
> 
> 
> teste=# create schema testing;
> CREATE SCHEMA
> teste=# create table testing.testing123 (something serial primary key, 
> otherthing float);
> NOTICE:  CREATE TABLE will create implicit sequence 
> "testing123_something_seq" for serial column "testing123.something"
> NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
> "testing123_pkey" for table "testing123"
> CREATE TABLE
> teste=# create index testing.testing123_index on testing.testing123 
> (otherthing);
> ERROR:  syntax error at or near "." at character 21
> LINE 1: create index testing.testing123_index on testing.testing123 ...
>  ^
> teste=# 
> 
> 
> 
> (I wouldn't mind if the autogenerated index for the PK was created on the
> public schema if no specific name was supplied.)
> 
> 
> This would also help in identifying all the objects that make a certain
> feature available and where they belong in the database... 

Just say 
create index testing123_index on testing.testing123 (otherthing);
and you'll obtain exactly what you want (see below).

Bye, Chris.

chris=> create schema testing;
CREATE SCHEMA
chris=> create table testing.testing123 (something serial primary key, 
otherthing float);
NOTICE:  CREATE TABLE will create implicit sequence "testing123_something_seq" 
for serial column "testing123.something"
NOTICE:  CREATE TABLE / PRIMARY KEY will create implicit index 
"testing123_pkey" for table "testing123"
CREATE TABLE
chris=> create index testing123_index on testing.testing123 (otherthing);
CREATE INDEX
chris=> \di *.*
List of relations
 Schema  |   Name   | Type  | Owner |   Table
-+--+---+---+
 testing | testing123_index | index | chris | testing123
 testing | testing123_pkey  | index | chris | testing123
(2 rows)




Re: [GENERAL] Why isn't it allowed to create an index in a schema

2006-11-12 Thread Chris Mair

> > create index testing123_index on testing.testing123 (otherthing);
> > and you'll otain exactly what you want (see below).
> >
> > Bye, Chris.
> 
> I know I can work around such debilitation.  What I wanted to know is if
> there's some reason (such as performance gain, for example) for that
> decision. 

Read what \di *.* shows: the index *is* inside schema testing.
 
> Using this approach, though, doesn't group all items that belong to a schema
> inside of it.


Bye, Chris.




Re: [GENERAL] The old Insert and retrieving your Serial problem in

2006-11-15 Thread Chris Mair

> That looks like the solution to my problem, thanks!...I tried running it 
> on my 8.0.8 server, but it wasn't found, I assume that's an 8.1 only 
> function?

Note that the upcoming 8.2 release has a handy "returning" clause
for insert:
  http://developer.postgresql.org/pgdocs/postgres/sql-insert.html
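
For example (8.2 syntax, against a made-up table):

CREATE TABLE t (id serial PRIMARY KEY, val text);
INSERT INTO t (val) VALUES ('hello') RETURNING id;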

Bye, Chris.




Re: [GENERAL] OS X Tiger, and PostgreSQL 8.2 don't mix?

2006-12-28 Thread Chris Mair
> I have OS X tiger with all the updates:
> 
> uname -r
> 8.8.0
> 
> Here is what I get when I try to initdb on a freshly compiled 8.2:
> 
> selecting default max_connections ... 10
> selecting default shared_buffers/max_fsm_pages ... 400kB/2
> creating configuration files ... ok
> creating template1 database in /usr/local/pgsql/data/base/1 ... FATAL:
> could not create shared memory segment: Cannot allocate memory
> DETAIL:  Failed system call was shmget(key=2, size=1646592, 03600).
> HINT:  This error usually means that PostgreSQL's request for a shared
> memory segment exceeded available memory or swap space. To reduce the
> request size (currently 1646592 bytes), reduce PostgreSQL's
> shared_buffers parameter (currently 50) and/or its max_connections
> parameter (currently 10).
> The PostgreSQL documentation contains more information about
> shared memory configuration.
> child process exited with exit code 1
> 

Works for me :|
(see initdb output below)...

> 
> I read the documentation
> (http://www.postgresql.org/docs/8.2/static/kernel-resources.html) and
> added the appropriate items to /etc/sysctl.conf, and I rebooted for it
> to take effect.
> 
> cat /etc/sysctl.conf
> kern.sysv.shmmax=4194304
> kern.sysv.shmmin=1
> kern.sysv.shmmni=32
> kern.sysv.shmseg=8
> kern.sysv.shmall=1024

Can you check whether the settings worked? Do:

ibook:~ chris$ sysctl -a | grep shm
kern.sysv.shmmax: 4194304
kern.sysv.shmmin: 1
kern.sysv.shmmni: 32
kern.sysv.shmseg: 8
kern.sysv.shmall: 1024

Bye,
Chris.

-
ibook:/opt/pg chris$ initdb
The files belonging to this database system will be owned by user "chris".
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /opt/pg/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 30
selecting default shared_buffers/max_fsm_pages ... 2400kB/2
creating configuration files ... ok
creating template1 database in /opt/pg/data/base/1 ... ok
initializing pg_authid ... ok
initializing dependencies ... ok
creating system views ... ok
loading system objects' descriptions ... ok
creating conversions ... ok
setting privileges on built-in objects ... ok
creating information schema ... ok
vacuuming database template1 ... ok
copying template1 to template0 ... ok
copying template1 to postgres ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the -A option the
next time you run initdb.

Success. You can now start the database server using:

postgres -D /opt/pg/data
or
pg_ctl -D /opt/pg/data -l logfile start

ibook:/opt/pg chris$ uname -r
8.8.0
-




Re: [GENERAL] PostgreSQL 8.0.0 Release Candidate 4

2005-01-09 Thread Chris Mair
On Fri, 2005-01-07 at 17:04, Marc G. Fournier wrote:

> [...]
> A current list of *known* supported platforms can be found at:
> 
>   http://developer.postgresql.org/supported-platforms.html
> 
> We're always looking to improve that list, so we encourage anyone that is 
> running a platform not listed to please report on any success or failures 
> with Release Candidate 4.
> [...]

Hi,

I've tested RC4 on Linux on the PlayStation 2 again
(compare
http://archives.postgresql.org/pgsql-hackers/2004-08/msg01339.php).

With --disable-spinlocks everything but the two floating point tests
succeeds (regression output and diffs attached).

The Playstation 2 hardware manuals confirm that the FPU has
no support for NaN and +/-Inf. That explains the regression diffs.
(EE Core User's Manual Version 5, p. 153)

In summary one might conclude that PostgreSQL 8 works on Linux on the
Playstation 2 as far as the hardware supports it.

Kind regards,
Chris.

parallel group (13 tests):  text float4 int2 boolean name oid varchar int4 int8 
char float8 bit numeric
 boolean  ... ok
 char ... ok
 name ... ok
 varchar  ... ok
 text ... ok
 int2 ... ok
 int4 ... ok
 int8 ... ok
 oid  ... ok
 float4   ... FAILED
 float8   ... FAILED
 bit  ... ok
 numeric  ... ok
test strings  ... ok
test numerology   ... ok
parallel group (20 tests):  lseg circle point box path polygon timetz time 
reltime abstime comments tinterval interval date inet timestamp timestamptz 
type_sanity oidjoins opr_sanity
 point... ok
 lseg ... ok
 box  ... ok
 path ... ok
 polygon  ... ok
 circle   ... ok
 date ... ok
 time ... ok
 timetz   ... ok
 timestamp... ok
 timestamptz  ... ok
 interval ... ok
 abstime  ... ok
 reltime  ... ok
 tinterval... ok
 inet ... ok
 comments ... ok
 oidjoins ... ok
 type_sanity  ... ok
 opr_sanity   ... ok
test geometry ... ok
test horology ... ok
test insert   ... ok
test create_function_1... ok
test create_type  ... ok
test create_table ... ok
test create_function_2... ok
test copy ... ok
parallel group (7 tests):  create_operator create_aggregate triggers 
constraints vacuum inherit create_misc
 constraints  ... ok
 triggers ... ok
 create_misc  ... ok
 create_aggregate ... ok
 create_operator  ... ok
 inherit  ... ok
 vacuum   ... ok
parallel group (2 tests):  create_view create_index
 create_index ... ok
 create_view  ... ok
test sanity_check ... ok
test errors   ... ok
test select   ... ok
parallel group (18 tests):  select_distinct_on select_distinct select_into 
union update select_having case select_implicit namespace transactions join 
arrays hash_index random portals aggregates btree_index subselect
 select_into  ... ok
 select_distinct  ... ok
 select_distinct_on   ... ok
 select_implicit  ... ok
 select_having... ok
 subselect... ok
 union... ok
 case ... ok
 join ... ok
 aggregates   ... ok
 transactions ... ok
 random   ... ok
 portals  ... ok
 arrays   ... ok
 btree_index  ... ok
 hash_index   ... ok
 update   ... ok
 namespace... ok
test privileges   ... ok
test misc ... ok
parallel group (5 tests):  portals_p2 cluster rules select_views foreign_key
 select_views ... ok
 portals_p2   ... ok
 rules... ok
 foreign_key  ... ok
 cluster  ... ok
parallel group (14 tests):  polymorphism truncate sequence copy2 conversion 
rowtypes temp domain rangefuncs limit prepare without_oid plpgsql alter_table
 limit... ok
 plpgsql  ... ok
 copy2... ok
 temp ... ok
 domain   ... ok
 rangefuncs   ... ok
 prepare  ... ok
 without_oid  ... ok
 conversion   ... ok
 truncate ... ok
 alter_table  ... ok
 sequence ... ok
 polymorphism ... ok
 rowtypes ... ok
test stats... ok

Re: [GENERAL] PostgreSQL 8.0.0 Release Candidate 4

2005-01-09 Thread Chris Mair
On Fri, 2005-01-07 at 17:04, Marc G. Fournier wrote:

> [...]
> A current list of *known* supported platforms can be found at:
> 
>   http://developer.postgresql.org/supported-platforms.html
> 
> We're always looking to improve that list, so we encourage anyone that
is 
> running a platform not listed to please report on any success or
failures 
> with Release Candidate 4.
> [...]

Hi,

one more test result: the regression test for 8.0 RC4 on OpenBSD/sparc
says all 96 tests passed.

This was OpenBSD/sparc 3.3 with gcc 2.95.3 and GNU make 3.80 on a
SPARCstation 4.
I ran "make check" with option MAX_CONNECTIONS=10.

Bye, Chris.

PS: this is the second post of this message, the first mysteriously
disappeared 8 hours ago.







Re: [GENERAL] Generating unique session ids

2006-07-27 Thread Chris Mair

> > SELECT md5('secret_salt' || nextval('my_seq')::text)
> 
> * When somebody knows md5('secret_salt' || '5') he will be able to
> easily compute
>   md5('secret_salt' || '50')
>   md5('secret_salt' || '51')
>   md5('secret_salt' || '52')
>   ...
>   md5('secret_salt' || '59')
>   md5('secret_salt' || '500')
>   md5('secret_salt' || '501')
>   ...
>   md5('secret_salt' || '[any number starting from 5]').
> Without knowledge of 'secret_salt'. So your proposal is totally
> insecure.

Challenge :)

chris=> select md5('**' || '5');
   md5
--
 7b076f591070f6912e320b95782250ae
(1 row)

I won't tell what '**' was.

Can you send me what md5('**' || '50') will give?

Bye,
Chris.







Re: [GENERAL] Triggers in Postgres

2006-08-01 Thread Chris Mair

> http://www.postgresql.org/docs/8.1/interactive/triggers.html
>  
> it says something like this:
>  
> " It is not currently possible to write a trigger function in the
> plain SQL function language. "

The whole paragraph says:

"It is also possible to write a trigger function in C, although most
people find it easier to use one of the procedural languages. It is not
currently possible to write a trigger function in the plain SQL function
language."

That is: you can and you should write your trigger in a procedural
language. In particular, if you want to stay as close as possible
to SQL, you should use procedural SQL, which in PostgreSQL is called
PL/pgSQL:
http://www.postgresql.org/docs/8.1/interactive/plpgsql.html

Bye, Chris.





Re: [GENERAL] Query performance

2006-08-03 Thread Chris Mair

> i have a table with around 57 million tuples, with the following columns: 
> pid(varchar), crit(varchar), val1(varchar), val2(varchar). Example:
> pid  crit  val1  val2
> p1   c1    x     y
> p1   c2    x     z
> p1   c3    y     x
> ...
> What I am doing is to query all val1 and val2 for one pid and all crit values:
> 
> select val1, val2, crit from mytable where pid='somepid' and crit in(select 
> crit from myCritTable);
> where myCritTable is a table that contains all crit values (around 42.000) 
> ordered by their insertion date.

In case myCritTable doesn't change a lot and this select by contrast is
executed a lot, have you considered precomputing whether a record from
your big table has a crit value from myCritTable?

Of course this info would be invalidated each time myCritTable is
updated, so you would trade fast selects on the big table for slow
updates on myCritTable. Don't know whether that makes sense for you;
a sketch is below...
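
A sketch of the idea (the flag column and index name are made up; the flag
would have to be maintained by a trigger or by your application):

ALTER TABLE mytable ADD COLUMN crit_ok boolean;
UPDATE mytable SET crit_ok = (crit IN (SELECT crit FROM myCritTable));
CREATE INDEX mytable_pid_critok_idx ON mytable (pid) WHERE crit_ok;

-- the query then becomes:
SELECT val1, val2, crit FROM mytable WHERE pid = 'somepid' AND crit_ok;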

Bye, Chris.





Re: [GENERAL] PostgreSQL engagment

2006-08-04 Thread Chris Mair

> I am the solution architect located in Hong Kong. Some of our partners 
> are very interested to promote this product especially the Data 
> Warehouse appliance. However, what kind of support or who I can contact 
> to moving forward.

Hello,

you might want to check out the professional services listings at
http://www.postgresql.org/support/professional_support

Bye,
Chris.


-- 

Chris Mair
http://www.1006.org





Re: [GENERAL] How to do auto numbering on INT column

2006-08-10 Thread Chris Mair

> HI
> I have a simple table created using PGAdmin III. How do I do auto
> numbering on a column SYMBOL_ID?
> My table is
> CREATE TABLE "SYMBOL"
> (
>   "SYMBOL_ID" int4 NOT NULL,
>   "SYMBOL2EXCHANGE" int2 NOT NULL,
>   "SYMBOL_ALIAS" text[],
>   "RELATED_SYMBOLS_OTHER_EXCHANGES" int8[],
>   "SYMBOL_NAME" text,
>   "COMPANY_NAME" text,
>   "SYMBOL2SECTOR" int2,
>   "SYMBOL2INDUSTRY" int4,
>   "STOCK_SUMMARY" text
> ) 

Use column type SERIAL, or, if you want to fine tune things,
use sequences: 
http://www.postgresql.org/docs/8.1/static/sql-createsequence.html
(in fact, SERIAL does define a sequence behind the scenes for you).
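
For an existing table you can wire up a sequence by hand (a sketch):

CREATE SEQUENCE symbol_id_seq;
ALTER TABLE "SYMBOL" ALTER COLUMN "SYMBOL_ID"
    SET DEFAULT nextval('symbol_id_seq');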

Bye, Chris.

-- 

Chris Mair
http://www.1006.org





Re: [GENERAL] Segmentation Fault

2006-08-16 Thread Chris Mair

> dmesg
> [2425253.737383] postmaster[4792]: segfault at 2aaab6f0e000 rip 
> 2b73795b rsp 7f8c9228 error 4
> 
> 
> Any suggestions ?

Do you trust that machine's RAM?
Can you try running memtest86 for some extended period of time?

(just to make sure it's not a hardware issue)

Bye, Chris.


-- 

Chris Mair
http://www.1006.org





Re: [GENERAL] Postgresql mentioned on Newsforge MySQL article

2006-08-28 Thread Chris Mair

> http://newsvac.newsforge.com/newsvac/06/08/28/1738259.shtml
> 
> Don't know the validity of this dvd order test they did, but the article 
> claims Postgresql only did 120 OPM.
> Seems a little fishy to me.

There was just one submission for PostgreSQL made by one guy who didn't
manage to finish it (c't says connection pooling didn't work).

See: http://archives.postgresql.org/pgsql-advocacy/2006-06/msg00052.php

MySQL's winning submission was by a team from MySQL AB.

Enough said...

Bye,
Chris.

PS: this is, by the way, a few months old; I'm wondering why MySQL
is doing the press release only now...


-- 

Chris Mair
http://www.1006.org





Re: [GENERAL] strange sum behaviour

2006-08-29 Thread Chris Mair

> corp=# select amount from acc_trans where trans_id=19721 and chart_id=10019;
>  amount
> -
> 4.88
>117.1
>  -121.98
> (3 rows)
> 
> corp=# select sum(amount) from acc_trans where trans_id=19721 and
> chart_id=10019;
>  sum
> --
>  -1.4210854715202e-14
> (1 row)
> 
> 
> amount is defined as double precision. I noticed that if I cast amount
> as numeric, the sum comes out 0 as expected.

0.1 cannot be represented exactly using floating point numbers
(the same way as 1/3 cannot be represented exactly using decimal
numbers). You're bound to suffer from round-off errors.

Use numeric for exact, decimal math.
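
For example, casting in the posted query:

SELECT sum(amount::numeric) FROM acc_trans
WHERE trans_id = 19721 AND chart_id = 10019;  -- gives exactly 0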

Bye, Chris.


-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] PostgreSQL on system with root as only user

2006-08-30 Thread Chris Mair

> I am about to install PostgreSQL on a minimal Linux system, where root
> is the only user that is allowed to exist. I would prefer to use a
> binary installer. Is it possible to install PostgreSQL without the
> postgres user?

PostgreSQL refuses to start up as root user for security
reasons (this is a feature).

If you know what you're doing and you want to disable that
feature you need to recompile from source and disable the
uid checks in src/backend/main/main.c.

Unless you're working in the embedded space or some such thing,
I don't think it's a good idea, anyway.

Bye :)
Chris.
 

-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] Porting from ORACLE to PostgSQL

2006-09-04 Thread Chris Mair

>  I need to port many old Oracle-oriented SQL files and I have many 
> problems with this code. Sometimes the code uses some types not supported 
> in PostgreSQL, like "number" or "varchar2"; therefore, can I write some 
> kind of declaration (like in C: #define alias_name name) in order to 
> make PostgreSQL recognize these types? Or is there another solution that 
> you would recommend?

You might look into the contrib package "ora2pg".

There's also a more ambitious project called "protopg".
That's in pre-alpha state, though. Development snapshots
for the *very* *adventurous* can be obtained here:
http://protopg.projects.postgresql.org/nightlies/ 
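
As for aliasing the type names themselves, domains can act as a kind of
#define (a sketch; note this won't cover parameterized uses like
varchar2(50), since domains can't take type modifiers):

CREATE DOMAIN number AS numeric;
CREATE DOMAIN varchar2 AS varchar;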

Bye, Chris.


-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] berkley sockets

2006-09-13 Thread Chris Mair
On Wed, 2006-09-13 at 01:51 -0400, J S B wrote:

> I don't want to connect to the postgres database.
>  
> The scenario is something like this.
>  
> The Postgres database has to initiate some daemon process running on
> another server.
> The only way I could think of doing this was opening a socket
> connection between the postgres database and 
> the daemon process through a shared object dynamically loaded into
> postgres.
>  
> Berkley sockets is the socket API in unix that uses
> 
>  
> Don't know if there's a better way to do it.

Is it that you want to have a PG instance running on host A accepting
connections on host B?

Maybe you can use an SSH tunnel?

Bye, Chris.



-- 

Chris Mair
http://www.1006.org




Re: [GENERAL] data modeler

2004-09-27 Thread Chris Mair

> > Druid? http://druid.sourceforge.net/
> 
> from reading the description, it looks like they might have something,
> but there is *no* documentation and I can't figure out how they want
> me to run the install jar file.

java -jar druid-3.5-install.jar

Bye, Chris.





Re: [GENERAL] earthdistance results seem to be wrong.

2004-10-03 Thread Chris Mair

> select
> earth_distance(ll_to_earth('122.55688','45.513746'),ll_to_earth('122.396357','47.648845'));
> 
> The result I get is this:
> 
> 128862.563227506
> 
> The distance from Portland to Seattle is not 128862
> miles.

It is 128862 metres, i.e. about 129 km.

Welcome to the metric system :)
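
If you want miles, divide by 1609.344 (metres per mile):

select earth_distance(ll_to_earth('122.55688','45.513746'),
                      ll_to_earth('122.396357','47.648845')) / 1609.344;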

Bye, Chris.


