Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
"Scott Marlowe" wrote:

>When I go to amazon.com I only ever get three pages of results.  ever.
> Because they know that returning 190 pages is not that useful, as
>hardly anyone is going to wander through that many pages.
>
>Google, you'll notice says "Results 1 - 10 of about 5,610,000 for
>blacksmith"  i.e. it's guesstimating as well.  no reason for google to
>look at every single row for blacksmith to know that there's about 5.6
>million.

But if you go to eBay, they always give you an accurate count, even if the no.
of items found is pretty large (example: <http://search.ebay.com/new>).

Rainer

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
Decibel! wrote:

>On Thu, Aug 16, 2007 at 12:12:03PM +0200, Rainer Bauer wrote:
>> "Scott Marlowe" wrote:
>> 
>> >When I go to amazon.com I only ever get three pages of results.  ever.
>> > Because they know that returning 190 pages is not that useful, as
>> >hardly anyone is going to wander through that many pages.
>> >
>> >Google, you'll notice says "Results 1 - 10 of about 5,610,000 for
>> >blacksmith"  i.e. it's guesstimating as well.  no reason for google to
>> >look at every single row for blacksmith to know that there's about 5.6
>> >million.
>> 
>> But if you go to eBay, they always give you an accurate count. Even if the 
>> no.
>> of items found is pretty large (example: <http://search.ebay.com/new>).
>
>And I'd bet money that they're using a full text search of some kind to
>get those results, which isn't remotely close to the same thing as a
>generic SELECT count(*).

Without text search (but with a category restriction):
<http://collectibles.listings.ebay.com/_W0QQsacatZ1QQsocmdZListingItemList>

I only wanted to show a counter-example: a big site that uses pagination to
display result sets and still reports accurate counts.

Anyway, what Phoenix is trying to say is that 2 queries are required: one to
get the total count and one to get the tuples for the current page. I reckon
it would help if the query returning the result set could also report the
total no. of tuples found. Something like
SELECT COUNT(*), * FROM <table> WHERE <cond> OFFSET <o> LIMIT <l>

Or is there a way to do that?
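The two-query pattern described here can be sketched in a self-contained way. This is an illustration only: it uses Python's stdlib sqlite3 instead of Postgres, and the table and column names are made up for the example.

```python
import sqlite3

# Hypothetical demo data standing in for a large result set.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, time_end INTEGER)")
conn.executemany("INSERT INTO items (time_end) VALUES (?)",
                 [(t,) for t in range(1000)])

def fetch_page(conn, cutoff, offset, limit):
    """Classic two-query pagination: one COUNT(*), then one page of rows."""
    total = conn.execute(
        "SELECT COUNT(*) FROM items WHERE time_end < ?", (cutoff,)
    ).fetchone()[0]
    rows = conn.execute(
        "SELECT id FROM items WHERE time_end < ? ORDER BY id LIMIT ? OFFSET ?",
        (cutoff, limit, offset),
    ).fetchall()
    return total, rows

total, rows = fetch_page(conn, 500, 100, 10)
print(total, len(rows))  # 500 10
```

Both statements repeat the same WHERE clause, which is exactly the duplicated work the thread is discussing.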

Rainer



Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
"Trevor Talbot" wrote:

>On 8/16/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>
>> >> But if you go to eBay, they always give you an accurate count. Even if 
>> >> the no.
>> >> of items found is pretty large (example: <http://search.ebay.com/new>).
>> >
>> >And I'd bet money that they're using a full text search of some kind to
>> >get those results, which isn't remotely close to the same thing as a
>> >generic SELECT count(*).
>>
>> Without text search (but with a category restriction):
>> <http://collectibles.listings.ebay.com/_W0QQsacatZ1QQsocmdZListingItemList>
>>
>> I only wanted to show a counter-example for a big site which uses pagination
>> to display result sets and still reports accurate counts.
>
>Categories are still finite state: you can simply store a count for
>each category.  Again it's just a case of knowing your data and
>queries; it's not trying to solve a general infinite-possibilities
>situation.

Consider this query with multiple WHERE conditions:
<http://search.ebay.com/ne-ol-an_W0QQfasiZ1QQfbdZ1QQfcdZ1QQfcidZ77QQfclZ3QQfmcZ1QQfrppZ50QQfsooZ1QQfsopZ1QQftidZ1QQpriceZ1QQsabdhiZ100QQsacurZ999QQsalicZQ2d15QQsaprchiZ5QQsatitleZQ28neQ2aQ2colQ2aQ2canQ2aQ29QQsojsZ0>

My point is that whatever search criteria are involved and however many items
are found, eBay always returns the *accurate* number of items found.

Before this drifts off:
* I do know *why* count(*) is slow using Postgres.
* I *think* that count(*) is fast on eBay because counting is cheaper in Oracle
(which eBay uses: <http://www.sun.com/customers/index.xml?c=ebay.xml>).
* I realize that pagination for multi-million tuple results does not make sense.

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
Gregory Stark wrote:

>"Rainer Bauer" <[EMAIL PROTECTED]> writes:
>
>> Anyway, what Phoenix is trying to say is that 2 queries are required: One to
>> get the total count and one to get the tuples for the current page. I reckon
>> it would help, if the query returning the result set could also report the
>> total no. of tuples found. Something like
>> SELECT COUNT(*), * FROM <table> WHERE <cond> OFFSET <o> LIMIT <l>
>>
>> Or is there a way to do that?
>
>Well anything like the above would just report <l> as the count.

True, but what about this:

SELECT (SELECT COUNT(*) FROM <table> WHERE <cond>), * FROM <table> WHERE
<cond> OFFSET <o> LIMIT <l>

I just tested this on a query and it's about 5-10% faster than issuing both
commands separately (caching effects?). I wonder whether there would be any
chance that Postgres could detect that the "count" select and the "data" select
result in the same query plan?

In my example (which is included below) the hash join is executed twice.
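The combined form, with the count as a scalar subselect carried on every row, can be sketched as follows. Again this is an illustration only, using sqlite3 and a made-up table rather than the real schema:

```python
import sqlite3

# Hypothetical demo data for the one-round-trip pattern.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, time_end INTEGER)")
conn.executemany("INSERT INTO items (time_end) VALUES (?)",
                 [(t,) for t in range(1000)])

# A scalar subselect repeats the WHERE clause and attaches the total
# count to every returned row, so count and page come back together.
rows = conn.execute(
    "SELECT (SELECT COUNT(*) FROM items WHERE time_end < ?) AS total, id "
    "FROM items WHERE time_end < ? ORDER BY id LIMIT 5 OFFSET 20",
    (500, 500),
).fetchall()

print(rows[0])    # (500, 21)
print(len(rows))  # 5
```

The trade-off is visible here too: unless the planner recognizes the shared predicate, the filtering work is done twice, once for the count and once for the page.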

>The only way to do it in Postgres currently is to create a temporary table.
>Then you can populate it once, then select the count from the temporary table
>in one query and the required page from it in the second query.
>
>But temporary tables in Postgres are not really designed for this. In
>particular they count as DDL so you have to grant privileges to create tables
>to the application and it has to create and delete entries in pg_class for
>every use.

Well, I don't think populating a temporary table with possibly millions of rows
is faster than executing the query twice. Remember that a performance problem
only occurs if a lot of tuples are returned.

Rainer

==

This is the count query:
SELECT COUNT(*) FROM "tblItem" AS i INNER JOIN
(SELECT "intItemID" FROM "tblItem2Category" WHERE "intCategoryID"=88869805) AS 
vrtChild ON i."intItemIDCnt"=vrtChild."intItemID"
WHERE ("intTimeEnd" < 1187273177)

This is the EXPLAIN ANALYZE output:
"Aggregate  (cost=356098.46..356098.47 rows=1 width=0) (actual 
time=29411.570..29411.570 rows=1 loops=1)"
"  ->  Hash Join  (cost=177545.23..350137.60 rows=2384343 width=0) (actual 
time=17382.286..28864.851 rows=2383740 loops=1)"
"Hash Cond: ("tblItem2Category"."intItemID" = i."intItemIDCnt")"
"->  Bitmap Heap Scan on "tblItem2Category"  (cost=41560.03..134660.50 
rows=2561397 width=4) (actual time=1984.006..11048.762 rows=2513204 loops=1)"
"  Recheck Cond: ("intCategoryID" = 88869805)"
"  ->  Bitmap Index Scan on ccitem2categorycategoryidix  
(cost=0.00..40919.69 rows=2561397 width=0) (actual time=1980.614..1980.614 
rows=2513204 loops=1)"
"Index Cond: ("intCategoryID" = 88869805)"
"->  Hash  (cost=95316.41..95316.41 rows=2339583 width=4) (actual 
time=15024.827..15024.827 rows=2383832 loops=1)"
"  ->  Seq Scan on "tblItem" i  (cost=0.00..95316.41 rows=2339583 
width=4) (actual time=8.634..13763.878 rows=2383832 loops=1)"
"Filter: ("intTimeEnd" < 1187273177)"
"Total runtime: 29411.668 ms"

==

This is the data query:
SELECT i."intItemIDCnt" FROM "tblItem" AS i INNER JOIN
(SELECT "intItemID" FROM "tblItem2Category" WHERE "intCategoryID"=88869805) AS 
vrtChild ON i."intItemIDCnt"=vrtChild."intItemID"
WHERE ("intTimeEnd" < 1187273177)
ORDER BY "intFlagListingFeatured" DESC, "intTimeEnd", "intItemIDCnt" OFFSET 500 
LIMIT 50

This is the EXPLAIN ANALYZE output:
"Limit  (cost=733011.30..733011.42 rows=50 width=12) (actual 
time=37852.007..37852.058 rows=50 loops=1)"
"  ->  Sort  (cost=733010.05..738970.91 rows=2384343 width=12) (actual 
time=37851.581..37851.947 rows=550 loops=1)"
"Sort Key: i."intFlagListingFeatured", i."intTimeEnd", i."intItemIDCnt""
"->  Hash Join  (cost=179830.23..354707.60 rows=2384343 width=12) 
(actual time=17091.753..29040.425 rows=2383740 loops=1)"
"  Hash Cond: ("tblItem2Category"."intItemID" = i."intItemIDCnt")"
"  ->  Bitmap Heap Scan on "tblItem2Category"  
(cost=41560.03..134660.50 rows=2561397 width=4) (actual 
time=1976.599..10970.394 rows=2513204 loops=1)"
"Recheck Cond: ("intCategoryID" = 88869805)"
"->  Bitmap Index Scan on c

Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
Martijn van Oosterhout wrote:

>On Thu, Aug 16, 2007 at 01:09:32PM +0200, Rainer Bauer wrote:
>> Anyway, what Phoenix is trying to say is that 2 queries are required: One to
>> get the total count and one to get the tuples for the current page. I reckon
>> it would help, if the query returning the result set could also report the
>> total no. of tuples found. Somthing like
>> SELECT COUNT(*), * FROM <table> WHERE <cond> OFFSET <o> LIMIT <l>
>
>Well, there is another possibility: use cursors:
>
>DECLARE CURSOR ... AS <query>;
>FETCH 30  -- or however many you want now
>MOVE TO END  -- or whatever the command is, this gives you the number of rows
>
>Hope this helps,

Thanks! I will give this a try.
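The cursor idea (FETCH one page, then scroll to the end to learn the total) can be approximated as follows. This is only a sketch: SQL-level MOVE is Postgres-specific, so the scroll-to-end step is emulated by draining a sqlite3 cursor over made-up data:

```python
import sqlite3

# Hypothetical demo table with 200 rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO t (id) VALUES (?)", [(i,) for i in range(1, 201)])

cur = conn.execute("SELECT id FROM t ORDER BY id")
page = cur.fetchmany(30)         # like FETCH 30: rows for the current page
remaining = sum(1 for _ in cur)  # like MOVE FORWARD ALL: count what is left
total = len(page) + remaining

print(len(page), total)  # 30 200
```

The catch is the same as with a real cursor: obtaining the exact total still forces the server to produce every remaining row, so it saves a second query but not the underlying work.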

Rainer



Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
Bill Moran wrote:

>> Consider this query with multiple WHERE conditions:
>> <http://search.ebay.com/ne-ol-an_W0QQfasiZ1QQfbdZ1QQfcdZ1QQfcidZ77QQfclZ3QQfmcZ1QQfrppZ50QQfsooZ1QQfsopZ1QQftidZ1QQpriceZ1QQsabdhiZ100QQsacurZ999QQsalicZQ2d15QQsaprchiZ5QQsatitleZQ28neQ2aQ2colQ2aQ2canQ2aQ29QQsojsZ0>
>> 
>> My point is that whatever search criteria are involved and however many items
>> are found, eBay always returns the *accurate* number of items found.
>
>While I don't _want_ to argue with you ... I can't seem to help myself.
>
>How do you _know_ that's the exact number of items?  There are 50 items on
>that page, the paginator at the bottom shows 97,686 pages, but there's no
>way (that I can find) to go to the _last_ page to ensure that said numbers
>are correct.  It could simply be estimating the number of items and
>calculating the # of pages based on that.  With 4mil items, a few 1000 off
>isn't anything anyone would notice.

No, those numbers are correct. You can go to the last page using the "Go to
page" input box at the end of the listing. However, with a 4 million result
set, the count will have changed by the time the last page is retrieved (new
items have been listed and others have ended in the meantime). You will have
to check this out with a search yielding fewer results (a couple of hundred
thousand) and load the first and last pages simultaneously.

>> * I realize that pagination for multi-million tuple results does not make 
>> sense.
>
>Then what is the point to this thread?  Are we just shooting the breeze at
>this point?

I just wanted to show an example where accurate counts are reported for large
search results.

Rainer

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
Tom Lane wrote:

>Rainer Bauer <[EMAIL PROTECTED]> writes:
>> My point is that whatever search criteria are involved and however many items
>> are found, eBay always returns the *accurate* number of items found.
>
>And exactly how do you know that that's true?

5 years' experience developing a browser for eBay?

Rainer

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
"Trevor Talbot" wrote:

>On 8/16/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>
>> My point is that whatever search criteria are involved and however many items
>> are found, eBay always returns the *accurate* number of items found.
>>
>> Before this drifts off:
>> * I do know *why* count(*) is slow using Postgres.
>> * I *think* that count(*) is fast on eBay because count is cheaper using 
>> Oracle (which eBay does: 
>> <http://www.sun.com/customers/index.xml?c=ebay.xml>).
>> * I realize that pagination for multi-million tuple results does not make 
>> sense.
>
>You got me curious, so I went hunting for more hints on what eBay
>actually does, and found these slides from a presentation given by two
>eBay engineers last year:
>http://www.addsimplicity.com/downloads/eBaySDForum2006-11-29.pdf

Quite interesting.

>It's, er, a whole different ballgame there.  Database behavior is
>barely involved in their searching; they do joins and RI across
>database clusters within the _application_.  I knew eBay was big, but
>wow...

Well then: forget the Oracle count(*) argument :-(

Rainer



Re: [GENERAL] Yet Another COUNT(*)...WHERE...question

2007-08-16 Thread Rainer Bauer
"Scott Marlowe" wrote:

>FYI, I went to the ebay page you posted, which listed something like
>98011 pages, and asked for page 96000.  It searched for about a minute
>and timed out with the error message
>
>There was a problem executing your request. Please try again.
>
>Tried it again, twice, about 5 minutes apart, and got the same error each time.
>
>So I'm guessing that ebay is better at making you THINK it has the
>exact count than actually having the exact count.

Well, that depends on the current traffic on eBay. It worked all right this
afternoon (GMT), but I get the same error message now (btw, a few years ago
you only got an error message if a search returned more than 1M items).

Try it with this search:  which gives about 2.2M
items.

Rainer



Re: [GENERAL] Using oid as pkey

2007-08-20 Thread Rainer Bauer
"D. Dante Lorenso" wrote:

>Using a brain-dead sample table that looks like this:
>
>   CREATE table some_table (
>   col0 SERIAL,
>   col1 VARCHAR,
>   col2 VARCHAR
>   );
>
>I want to do something like this:
>
>   INSERT INTO some_table (col1, col2)
>   VALUES ('val1', 'val2');
>
>I want the value of col0 returned to the application and I don't want to 
>know the name of the sequence involved in the SERIAL column.  I just 
>want the value inserted into the column by using just it's column name.

Using 8.2 or above:
INSERT INTO some_table (col1, col2) VALUES ('val1', 'val2') RETURNING col0;
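The pattern of retrieving the generated key without knowing the sequence name can be sketched as follows. Note the assumption: Postgres 8.2+ does this server-side with `INSERT ... RETURNING col0` as shown above; the runnable SQLite analogue below uses the cursor's `lastrowid` instead, since sqlite3 is used here only as a self-contained stand-in:

```python
import sqlite3

# Same brain-dead sample table as in the message above.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE some_table (col0 INTEGER PRIMARY KEY, col1 TEXT, col2 TEXT)"
)
cur = conn.execute(
    "INSERT INTO some_table (col1, col2) VALUES (?, ?)", ("val1", "val2")
)
# The generated col0 value, without ever naming a sequence.
print(cur.lastrowid)  # 1
```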

Rainer

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Removing pollution from log files

2007-08-27 Thread Rainer Bauer
Andrew Sullivan wrote:

>On Mon, Aug 27, 2007 at 02:00:02PM +0300, Andrus wrote:
>> Postgres  log files are polluted with messages
>> 
>> 2007-08-27 06:10:38 WARNING:  nonstandard use of \\ in a string literal at
>> character 190
>> 2007-08-27 06:10:38 HINT:  Use the escape string syntax for backslashes,
>> e.g., E'\\'.
>
>That's not pollution; it's telling you you need to fix your
>application to escape the backslashes differently.  If you want to
>suppress them, though, you can change your logging level to be higher
>than "WARNING".

Or lookup
.

Apart from that: there was a bug in the ODBC driver prior to 08.02.0402 which
resulted in this error message whenever binary data of type SQL_LONGVARBINARY
was sent.

Rainer

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Removing pollution from log files

2007-08-31 Thread Rainer Bauer
Andrus wrote:

>> Apart from that: there was a bug in the ODBC driver prior 08.02.0402 which
>> resulted in this error message whenever binary data of type 
>> SQL_LONGVARBINARY
>> was sent.
>
>Where is 0402 driver ?

The snapshot drivers can be found here:


Rainer



Re: [GENERAL] Cannot install under Windows Vista

2007-09-03 Thread Rainer Bauer
tkdchen wrote:

>I install PostgreSQL under Windows Vista, but the MSI failed. It told
>me that it has no right to create postgres user account. I don't know
>why. I just run the MSI package with an Administrator account. Please
>help me to solve this problem.

Sounds like UAC (User Account Control) is in the way.

There is a document available from Microsoft describing the installation:


Quote:
"Notes on Vista: 
The install on Vista is similar to other Windows installs but to install on
Vista, you must turn off User Account Control first."

Rainer



Re: [GENERAL] PostgreSQL Storage: Sorted by Primary Key?

2007-10-07 Thread Rainer Bauer
michi wrote:

>Does PostgreSQL store records sorted by primary key?

Only after you cluster the table:


Rainer

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-20 Thread Rainer Bauer
"Magnus Hagander" wrote:

>> - Max_connections is set to 500. I did that originally because I kept
>> seeing a message about no connection available and I thought it was
>> because I was not allocating enough connections. My machine has 2GB of RAM.
>
>There's your problem. 500 is way above what the windows version can handle. 
>IIRC the hard max is somewhere around 200  depending on some OS factors that 
>we don't entirely know. I'd never recommend going above 100-150. With no more 
>than 2Gb ram, not above 100. 

My guess is that Windows is running out of handles. Each backend uses about
150 handles, so 100 backends mean 15000 handles. Depending on how many other
programs are currently running, the no. of startable backends will vary with
the total handle limit Windows imposes.

Rainer



Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-20 Thread Rainer Bauer
"Trevor Talbot" wrote:

>On 10/20/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>> "Magnus Hagander" wrote:
>>
>> >> - Max_connections is set to 500. I did that originally because I kept
>> >> seeing a message about no connection available and I thought it was
>> >> because I was not allocating enough connections. My machine has 2GB of 
>> >> RAM.
>> >
>> >There's your problem. 500 is way above what the windows version can handle. 
>> >IIRC the hard max is somewhere around 200  depending on some OS factors 
>> >that we don't entirely know. I'd never recommend going above 100-150. With 
>> >no more than 2Gb ram, not above 100.
>>
>> My guess is that Windows is running out of handles. Each backend uses about
>> 150 handles, so 100 backends mean 15000 handles. Depending on how many other
>> programs are currently running, the no. of startable backends will vary with
>> the total handle limit Windows imposes.
>
>Those are kernel object handles; the ceiling does depend on available
>kernel memory, but they're cheap, and postgres is in no danger of
>running into that limit.  Most of the handle limits people talk about
>are on USER (window etc) objects, which come from a single shared
>pool.

You are right. I just did a quick test and, depending on the handle type,
these limits are quite high. I could create 5 million events, 4 million
semaphores, or 3.5 million mutexes before the system returned error 1816,
ERROR_NOT_ENOUGH_QUOTA: "Not enough quota is available to process this
command."

Rainer



Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-20 Thread Rainer Bauer
I wrote:

>You are right. I just did a quick test and, depending on the handle type,
>these limits are quite high. I could create 5 million events, 4 million
>semaphores, or 3.5 million mutexes before the system returned error 1816,
>ERROR_NOT_ENOUGH_QUOTA: "Not enough quota is available to process this
>command."

[Does some further testing] The limit is high, but nonetheless Postgres is
running out of handles. Setting max_connections to 10000 and starting
postgres _without_ any connection consumes 40000 handles. This corresponds
to the 4 Postgres processes running after the server was started. Every new
connection consumes another 10000 handles.

I don't know the Postgres code involved, but it seems that every backend
consumes at least max_connections handles. Hence increasing this value will
have the opposite effect once a certain threshold is met.
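The observation above can be turned into back-of-the-envelope arithmetic. This is a model of the reported behaviour, not of Postgres internals: it assumes each backend holds roughly one semaphore handle per allowed connection.

```python
def approx_handles(max_connections, backends):
    # Assumed model: every backend holds about max_connections handles,
    # so total handle usage grows with the product of the two.
    return max_connections * backends

# 150 connections with max_connections = 10000 comes to about 1.5 million
# handles, in line with the handle counts discussed in this thread.
total = approx_handles(10000, 150)
print(total)  # 1500000
```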

Rainer



Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-20 Thread Rainer Bauer
Shelby Cain wrote:

>--- Trevor Talbot <[EMAIL PROTECTED]> wrote:
>
>> On 10/20/07, Shelby Cain <[EMAIL PROTECTED]> wrote:
>> 
>> > I'd personally vote for a lower warning limit like 175 as I can
>> > consistently crash Postgresql on Windows system right around the
>> 200th
>> > connection.
>> 
>> What error gets logged for your crashes?
>> 
>
>It's been a while but IIRC there wasn't anything in the logs other than
>an entry noting that a backend had crashed unexpectedly so the
>postmaster was restarting all active backends.  I can trivially
>reproduce it at work on my workstation if you need the exact error
>text.

I could reproduce this here:

Server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing
the request

2007-10-20 23:33:42 LOG:  server process (PID 5240) exited with exit code
-1073741502


Shelby, are you using the /3GB switch by chance? This will halve the no. of
available handles on your system.

Rainer



Fwd: Re[2]: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-20 Thread Rainer Bauer
I wrote

> Anyway, the problem is the no. of semaphores created by Postgres:
> Every backend creates at least 4*max_connections semaphores.

Sorry, this must read max_connections semaphores, not 4 times that.

Rainer




Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-20 Thread Rainer Bauer
Hello Trevor,

Sunday, October 21, 2007, 12:15:25 AM, you wrote:

TT> On 10/20/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:

>> I could reproduce this here:
>>
>> Server closed the connection unexpectedly
>> This probably means the server terminated abnormally before or while
>> processing the request
>>
>> 2007-10-20 23:33:42 LOG:  server process (PID 5240) exited with exit code
>> -1073741502

TT> How?

Seems like the mailing list is not catching up fast enough (I am
posting through Usenet)...

Anyway, the problem is the no. of semaphores created by Postgres:
every backend creates at least 4*max_connections semaphores. Just
increase max_connections to an unusually high value (say 10000) and
start creating new connections while monitoring the handle count.
Rainer




Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-21 Thread Rainer Bauer
Magnus Hagander wrote:

>Trevor Talbot wrote:
>> On 10/20/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>> 
>>> Anyway, the problem is the no. of semaphores created by Postgres:
>>> Every backend creates at least 4*max_connections semaphores. Just
>>> increase max_connections to an unusually high value (say 10000) and
>>> start creating new connections while monitoring the handle count.
>> 
>> Hmm, they're actually the same semaphores, so the only cost is for
>> slots in each process's handle table, which comes from kernel paged
>> pool.  Testing shows I can easily create about 30 million handles to a
>> given object on this machine.  This is under win2003 with 1.25GB RAM,
>> which gives it a paged pool limit of 352MB.

On my system I can only create about 4 million semaphores.

>> I tried going up to 20000 max_connections, and still blew postmaster's
>> VM space long before paged pool was exhausted.  I couldn't test any
>> higher values, as there's some interaction between max_connections and
>> shared_buffers that prevents it from mapping the buffer contiguously.
>> 
>> Something's missing though, since I'm not hitting the same issue you
>> are.  How are you generating the connections?  I just have an app
>> calling PQconnectdb() in a loop, but I guess that's not good enough.

I am using the ASCII version of the psqlODBC driver version 8.2.4.2 to
establish the test connections.

>Yeah, something is obviously missing.. Are you guys on the exactly the
>same Windows versions? WRT both version and servivepack. Anybody on x64
>windows?

No, I am using WinXP SP2 32 bit with 2GB RAM.

These are my altered settings from the default 8.2.5 Postgres installation:
ssl = on
shared_buffers = 512MB
work_mem = 16MB
maintenance_work_mem = 256MB
wal_sync_method = fsync_writethrough
checkpoint_segments = 15
checkpoint_timeout = 30min
random_page_cost = 3.0
effective_cache_size = 1GB
autovacuum_vacuum_scale_factor = 0.10
autovacuum_analyze_scale_factor = 0.05

>Another thing worth testing - check if the amount of shared memory used
>makes a noticable difference. Try both very small and very large values.

Well, I tried different shared_buffers settings, but the result was consistent:
with max_connections set to 10000, I can create 150 database connections.

However, I checked the handle count at the moment the last connection fails
and it is only at about 1.5 million. So it seems the handles are not the
primary problem.

Let me know if you want any other tests performed on this machine. I also have
VS2005 installed, but until now I haven't compiled Postgres here (I was
waiting for 8.3 which fully supports building with VS).

Rainer



Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-22 Thread Rainer Bauer
Magnus Hagander schrieb:

>On Sun, Oct 21, 2007 at 09:43:27PM +0200, Rainer Bauer wrote:
>> Magnus Hagander wrote:
>> 
>> >Trevor Talbot wrote:
>> >> On 10/20/07, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>> >> 
>> >>> Anyway, the problem is the no. of semaphores created by Postgres:
>> >>> Every backend creates at least 4*max_connections semaphores. Just
>> >>> increase max_connections to an unusually high value (say 10000) and
>> >>> start creating new connections while monitoring the handle count.
>> >> 
>> >> Hmm, they're actually the same semaphores, so the only cost is for
>> >> slots in each process's handle table, which comes from kernel paged
>> >> pool.  Testing shows I can easily create about 30 million handles to a
>> >> given object on this machine.  This is under win2003 with 1.25GB RAM,
>> >> which gives it a paged pool limit of 352MB.
>> 
>> On my system I can only create about 4 million semaphores.
>
>Is that 4 million semaphores, or 4 million handles to a smaller number of
>semaphores?

No, 4 million distinct semaphores, by calling:
CreateSemaphore( NULL, 0, 1, NULL );

>> >> I tried going up to 20000 max_connections, and still blew postmaster's
>> >> VM space long before paged pool was exhausted.  I couldn't test any
>> >> higher values, as there's some interaction between max_connections and
>> >> shared_buffers that prevents it from mapping the buffer contiguously.
>> >> 
>> >> Something's missing though, since I'm not hitting the same issue you
>> >> are.  How are you generating the connections?  I just have an app
>> >> calling PQconnectdb() in a loop, but I guess that's not good enough.
>> 
>> I am using the ASCII version of the psqlODBC driver version 8.2.4.2 to
>> establish the test connections.
>
>Could you try the same tests with the client runnint on a different system?
>Since the client eats up a bunch of handles and such as well, and that
>would eliminate the difference due to different clients.
>
>Followup, when running these tests, could you check using Process Explorer
>if you're hitting close to the limit of either of the two pools? See
>http://blogs.technet.com/askperf/archive/2007/03/07/memory-management-understanding-pool-resources.aspx

Well, after installing Process Explorer and starting the system information
program, the kernel memory section shows me the current count, but not the
limits (it says "no symbols"). I am currently downloading the "Debugging Tools
for Windows"; maybe these limits are shown after that installation.

I just repeated the test with a local connection. After 150 connections, the
following values are displayed:
Paged physical: 113000
Paged virtual:  12
Nonpaged:       28000

Also, there are 1,583,182 handles open.

I will check the behaviour with a remote connection later (have to go now...).

>> These are my altered settings from the default 8.2.5 Postgres installation:
>> ssl = on
>
>Does it make a difference if you turn this off?
>
>> shared_buffers = 512MB
>
>As a general note, thsi is *way* too high. All evidence I've seen points to
>that you should have shared_buffers as *small* as possible on win32,
>because memory access there is slow. And leave more of the caching up to
>the OS.

I followed Josh's advice here:
<http://archives.postgresql.org/pgsql-performance/2007-06/msg00606.php>

What value would you recommend then? The default 32MB?

>> These are my altered settings from the default 8.2.5 Postgres installation:
>> ssl = on
>
>Does it make a difference if you turn this off?

No.

>> >Another thing worth testing - check if the amount of shared memory used
>> >makes a noticable difference. Try both very small and very large values.
>> 
>> Well, I tried different shared_buffers settings, but the result was
>> consistent: with max_connections set to 10000, I can create 150 database
>> connections.
>
>Ok. But if you decrease max_connections, you can have more connections? Or
>the other way around?

A few tests indicated that the maximum no. of connections is 150, regardless
of the max_connections setting. But I will have to check whether this is
somehow caused by the ODBC driver.

Rainer



Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-22 Thread Rainer Bauer
Dave Page wrote:

>So, we seem to be hitting two limits here - the desktop heap, and
>something else which is cluster-specific. Investigation continues...

I will run these tests tonight or tomorrow morning and will let you know.

Rainer



Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-23 Thread Rainer Bauer
"Trevor Talbot" wrote:

>I wrote:
>
>[ desktop heap usage ]
>
>> It could be that there's a significant difference between XP and 2003
>> in how that's handled though.  I do have an XP SP2 machine here with
>> 512MB RAM, and I'll try tests on it as soon as I can free up what it's
>> currently occupied with.
>
>...yep, under XP I'm using about 3.1KB of the service heap per
>connection, which tears through it quite a bit faster.  Now to figure
>out exactly where it's coming from...

I can confirm this here (WinXP SP2).

I have restored the original postgresql.conf file that was created when the
cluster was initialized with Postgres 8.2.4-1 (the installed version now is
8.2.5-1). The only other change to this installation is that I have moved the
WAL directory pg_xlog to another drive using a junction link.

Here are my numbers from SysInternals System Information program:
Paged Limit:    364544KB  [356MB]
Nonpaged Limit: 262144KB  [256MB]
These limits are never reached.

Using the Desktop Heap Monitor, I can see that every new connection consumes
3232 bytes of the total 512KB heap.
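As a rough cross-check (assuming the whole 512KB heap is usable by
connections), the arithmetic matches the observed limit:

```sql
-- Sketch: default non-interactive desktop heap divided by the measured
-- per-connection usage (integer division).
SELECT (512 * 1024) / 3232;  -- gives 162, in line with the ~150-connection limit
```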

>It could be that there's a significant difference between XP and 2003
>in how that's handled though.  I do have an XP SP2 machine here with
>512MB RAM, and I'll try tests on it as soon as I can free up what it's
>currently occupied with.

Yeah, Win2003 behaves differently according to this source:



Session paged pool allows session specific paged pool allocations.  Windows XP
uses regular paged pool, since the number of remote desktop connections is
limited.  On the other hand, Windows Server 2003 makes allocations from
session paged pool instead of regular paged pool if Terminal Services
(application server mode) is installed.


After increasing the session heap size in the registry from 512KB to 1024KB
the no. of connections was roughly doubled. So this might be a solution for
people running out of Desktop heap.

Alter the value of the following key


The numeric values following "SharedSection=" control the heap management:
On WinXP these are the default values: "SharedSection=1024,3072,512"
Altering this to "SharedSection=1024,3072,1024" will increase the heap for all
non-interactive window stations to 1024KB.

Rainer

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-26 Thread Rainer Bauer
Dave Page wrote:

>Magnus Hagander wrote:
>> Rainer Bauer wrote:
>>> After increasing the session heap size in the registry from 512KB to 1024KB
>>> the no. of connections was roughly doubled. So this might be a solution for
>>> people running out of Desktop heap.
>>>
>>> Alter the value of the following key
>>> 
>>>
>>> The numeric values following "SharedSection=" control the heap management:
>>> On WinXP these are the default values: "SharedSection=1024,3072,512"
>>> Altering this to "SharedSection=1024,3072,1024" will increase the heap for 
>>> all
>>> non-interactive window stations to 1024KB.
>> 
>> This part should go in the FAQ, I think. It's valid for 8.2 as well,
>> from what I can tell, and it's valid for 8.3 both before and after the
>> patch I just applied.
>> 
>> Dave, you're listed as maintainer :-P
>
>done.

Dave, could you add that it's the third parameter of the "SharedSection" string
that must be changed. I read that KB article, but still had to find the
correct one by trial and error, which required a reboot every time.

Rainer

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit

2007-10-26 Thread Rainer Bauer
"Dave Page" wrote:

>> --- Original Message ---
>> From: Rainer Bauer <[EMAIL PROTECTED]>
>> To: pgsql-general@postgresql.org
>> Sent: 26/10/07, 18:09:26
>> Subject: Re: [GENERAL] 8.2.3: Server crashes on Windows using Eclipse/Junit
>> 
>> Dave could you add that it's the third parameter of the "SharedSection" 
>> string
>> that must be changed. I read that KB article, but still had to find the
>> correct one by trial and error, which required a reboot every time.
>
>Err, it does say that:
>
>You can increase the non-interactive Desktop Heap by modifying the third 
>SharedSection value in the registry as described in this Microsoft 
>Knowledgebase article.

Must have overlooked that part. Sorry for the noise.

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Copy the database..

2007-11-03 Thread Rainer Bauer
Abandoned wrote:

>I tried pg_dump but it is very slow. Are there any faster ways to
>copy the database?

Actually, I was looking for something along the same line.

I often want to test some functionality in my program based on the same
dataset. However, dump/restore takes too long to be of any use.

Wouldn't it be possible to copy the database folder and somehow instruct the
postmaster to include the copied data after a restart?

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Copy the database..

2007-11-05 Thread Rainer Bauer
Tom Lane wrote:

>Rainer Bauer <[EMAIL PROTECTED]> writes:
>> Wouldn't it be possible to copy the database folder and somehow instruct the
>> postmaster to include the copied data after a restart?
>
>See CREATE DATABASE's TEMPLATE option.  It's a bit crude but I think
>it'll help.

Thanks, Tom. Works like a charm.
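For the archives, here is roughly what I ended up doing (a sketch; the
database names are placeholders, and the source database must have no active
connections while it is being copied):

```sql
-- Take a snapshot of the current state as a template copy.
CREATE DATABASE mydb_snapshot TEMPLATE mydb;

-- Later, to reset the test data: drop the working copy and recreate it
-- from the snapshot.
DROP DATABASE mydb;
CREATE DATABASE mydb TEMPLATE mydb_snapshot;
```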

Rainer

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Npsql is much faster than ODBC ?

2007-11-06 Thread Rainer Bauer
Alvaro Herrera wrote:

>FYI there's another Postgres ODBC driver that is said to have better
>performance.
>
>https://projects.commandprompt.com/public/odbcng
>
>(Yes, my company maintains it)

psqlodbc is licenced under LGPL, but ODBCng uses the GPL.

That means that commercial software cannot use the ODBCng driver. Are there
any plans to release it under LGPL licence?

Rainer

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Npsql is much faster than ODBC ?

2007-11-06 Thread Rainer Bauer
"Joshua D. Drake" wrote:

>> psqlodbc is licenced under LGPL, but ODBCng uses the GPL.
>> 
>> That means that commercial software cannot use the ODBCng driver. Are
>> there any plans to release it under LGPL licence?
>
>The only time this would be an issue is if you tried to embed the
>driver. Other than that, you are more than welcome to use closed source
>software against the ODBCng driver.

That's nice to hear. But I respect licences as they are and the ODBCng driver
is licenced under the GPL.

So is this an _official_ statement from CommandPrompt that the driver can be
used in commercial applications? If so, then either this should be stated on
the driver's page or the licence should be changed to LGPL.

Rainer

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [GENERAL] Npsql is much faster than ODBC ?

2007-11-07 Thread Rainer Bauer
Alvaro Herrera wrote:

>Rainer Bauer wrote:
>> "Andrej Ricnik-Bay" wrote:
>> 
>> >On Nov 7, 2007 2:40 PM, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>> >
>> >> That's nice to hear. But I respect licences as they are and the ODBCng 
>> >> driver
>> >> is licenced under the GPL.
>> >That doesn't mean that you're not allowed to use it with commercial
>> >applications;  it just means that you need to be happy to provide the
>> >source for it on request.
>> 
>> Which is exactly the reason why the LGPL licence was created: so that any
>> software can link against a library without the restrictions of the GPL.
>
>Keep in mind, though, that the ODBC driver is not linked to your app.
>It is only loaded on demand at run time, and can be replaced by any
>other ODBC driver.   So AFAIU your application is "shielded" from GPL.
>IANAL of course. 

Neither am I.

However, the GPL FAQ has an entry specially for this case:
<http://www.gnu.org/licenses/gpl-faq.html#NFUseGPLPlugins>

"If the program dynamically links plug-ins, and they make function calls to
each other and share data structures, we believe they form a single program,
which must be treated as an extension of both the main program and the
plug-ins. In order to use the GPL-covered plug-ins, the main program must be
released under the GPL or a GPL-compatible free software license, and that the
terms of the GPL must be followed when the main program is distributed for use
with these plug-ins."

The way I read this section is that linking to a GPL ODBC driver would imply
that I have to release my program under a GPL (compatible) licence.

This was one of the reasons why I added Postgres support to my program instead
of MySQL [1]. They altered the licence for their drivers from LGPL to GPL so
that you have to purchase a commercial licence.

Rainer

[1] In the meantime I am of course glad that I made this decision. I have not
only learned a lot more about databases, but especially that Postgres is
superior to MySQL ;-)

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [GENERAL] list of postgres related unexpected 'features'

2007-11-07 Thread Rainer Bauer
Sascha Bohnenkamp wrote:

>Is there a list postgres related unexpected 'features', like count(*) is
>expensive etc.?
>I do not ask to bash postgres, but to use it ... and it would be nice if
>I have not to try any pittfall by myself.

Something like ?

Rainer

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Npsql is much faster than ODBC ?

2007-11-07 Thread Rainer Bauer
"Andrej Ricnik-Bay" wrote:

>On Nov 7, 2007 2:40 PM, Rainer Bauer <[EMAIL PROTECTED]> wrote:
>
>> That's nice to hear. But I respect licences as they are and the ODBCng driver
>> is licenced under the GPL.
>That doesn't mean that you're not allowed to use it with commercial
>applications;  it just means that you need to be happy to provide the
>source for it on request.

Which is exactly the reason why the LGPL licence was created: so that any
software can link against a library without the restrictions of the GPL.

Rainer

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [GENERAL] Npsql is much faster than ODBC ?

2007-11-07 Thread Rainer Bauer
Greg Smith wrote:

>On Wed, 7 Nov 2007, Rainer Bauer wrote:
>
>> The way I read this section is that linking to a GPL ODBC driver would imply
>> that I have to release my program under a GPL (compatible) licence.
>
>What you actually link against is the ODBC implementation for your 
>platform.  If you're on something UNIX-ish, you're probably linking 
>against unixODBC, which is available under the LGPL presumably to avoid 
>this issue.

My program runs under MS Windows only. Which means the Microsoft ODBC "driver
manager" is closed source.

I have found the original announcement from CommandPrompt and it seems that
this topic was already discussed before:
<http://archives.postgresql.org/pgsql-odbc/2005-04/msg00084.php>

Rainer

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[GENERAL] Cluster using tablespaces?

2007-11-28 Thread Rainer Bauer
Hello,

is there a way to instruct CLUSTER to store the temporarily created data on a
different tablespace (i.e. drive)? If not, wouldn't that have a decent
performance impact, or is most of the time spent retrieving the data in index
order?

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Rainer Bauer
Erik Jones wrote:

>> is there a way to instruct CLUSTER to store the temporarily created
>> data on a
>> different tablespace (i.e. drive)? If not, wouldn't that have a decent
>> performance impact, or is most of the time spent retrieving the data
>> in index
>> order?
>
>What temporary created data are you referring to?

The one described in the manual:

"During the cluster operation, a temporary copy of the table is created that
contains the table data in the index order. Temporary copies of each index on
the table are created as well."

Rainer

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] Cluster using tablespaces?

2007-11-28 Thread Rainer Bauer
Tom Lane wrote:

>Rainer Bauer <[EMAIL PROTECTED]> writes:
>
>> "During the cluster operation, a temporary copy of the table is created that
>> contains the table data in the index order. Temporary copies of each index on
>> the table are created as well."
>
>That's probably a bit misleading.  There is no "temporary" copy of the
>table, just the new permanent copy.  The document is trying to point out
>to you that the transient disk space requirement will be 2X the table
>size, but maybe we could phrase it better.

Ok, I expected that. Does this work:
ALTER TABLE foo CLUSTER ON index_name SET TABLESPACE new_tablespace

I.e. is the table moved to the other tablespace and clustered at the same time
or are these independent operations?

What I am trying to achieve is cutting down the time the cluster command
takes. I thought the most promising way would be if the new data is written to
different drive.

>For btree indexes, there is a temporary copy of the index data, which
>will go wherever you have arranged for temp files to go.  (I think that
>easy user control of this may be new for 8.3, though.)

Could you give me a hint where that would be on Windows? I guess this might be
worth a try since there are a couple of btree indexes in the database.

Rainer

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [GENERAL] Cluster using tablespaces?

2007-12-01 Thread Rainer Bauer
Alvaro Herrera wrote:

>Rainer Bauer wrote:
>> Ok, I expected that. Does this work:
>> ALTER TABLE foo CLUSTER ON index_name SET TABLESPACE new_tablespace
>> 
>> I.e. is the table moved to the other tablespace and clustered at the same 
>> time
>> or are these independent operations?
>
>No, the ALTER TABLE/CLUSTER ON only defines what index will the table be
>clustered on in the future, but it doesn't cluster it at that time.
>Perhaps it could be improved so that if a table rewrite is going to be
>done anyway for some other reason, then make sure the rewrite uses the
>cluster order.  I think it's far from trivial though.

Yeah that is what I was originally looking for.
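So, if I understand correctly, the operations are independent (a sketch,
assuming an index named foo_idx):

```sql
ALTER TABLE foo SET TABLESPACE new_tablespace;  -- rewrites the table on the new tablespace
ALTER TABLE foo CLUSTER ON foo_idx;             -- only records the default CLUSTER index
CLUSTER foo;                                    -- performs the actual physical reordering
```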

>> What I am trying to achieve is cutting down the time the cluster command
>> takes. I thought the most promising way would be if the new data is written 
>> to
>> different drive.
>
>It has been theorized that cluster would be faster in general if instead
>of doing an indexscan we would instead use a seqscan + sort step.  It
>would be good to measure it.

Could a reindex on the clustered index speed up the clustering (when executed
immediately before the cluster command)? As I understand it, this index is
used to fetch the table data in the correct order. Or is most of the time
spent fetching the table data?

Also, would it make sense to increase  for the cluster
operation? This is set to 32MB here on my Windows box, as was recommended.

From my questions you can see that I don't know how the clustering is working
internally. I.e. I don't have a concrete idea how to make cluster any faster.

>> >For btree indexes, there is a temporary copy of the index data, which
>> >will go wherever you have arranged for temp files to go.  (I think that
>> >easy user control of this may be new for 8.3, though.)
>> 
>> Could you give me a hint where that would be on Windows? I guess this might 
>> be
>> worth a try since there are a couple of btree indexes in the database.
>
>I think Tom is referring to the new temp_tablespaces config variable.

I moved the pgsql_tmp directory to another disk, but that didn't speed up the
cluster command.

Rainer

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org/


Re: [GENERAL] Cluster using tablespaces?

2007-12-02 Thread Rainer Bauer
Alvaro Herrera wrote:

> Alvaro Herrera wrote:
>Probably most of the time is going into creating the new table then.
>
>If you are looking for a short-term solution to your problem, maybe the
>best is to follow the recommendation on CLUSTER ref page:

I've read that section before, but I have lots of foreign key relationships
between the tables.

Thanks Alvaro and Tom, but it seems that I will have to live with that
behaviour, until ...

>No, the ALTER TABLE/CLUSTER ON only defines what index will the table be
>clustered on in the future, but it doesn't cluster it at that time.
>Perhaps it could be improved so that if a table rewrite is going to be
>done anyway for some other reason, then make sure the rewrite uses the
>cluster order.  I think it's far from trivial though.

... this has been tried.

Rainer

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[GENERAL] Is news.postgresql.org down?

2008-01-28 Thread Rainer Bauer
Hello,

I cannot retrieve any list messages through the news server anymore
(since last Tuesday). Are there any known problems?

The reported error is: "503 NNTP server unavailable".

Rainer


---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [GENERAL] Is news.postgresql.org down?

2008-01-28 Thread Rainer Bauer
Hello Magnus,

MH> Yes, it's been down for quite a long time. AFAIK, Marc has a plan for
MH> fixing it, but I don't know the timeframe.

Thanks Magnus. I was not sure whether it was really the server.
Hopefully it won't be down for too long as I use a newsreader to read
the lists.

Rainer


---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
   subscribe-nomail command to [EMAIL PROTECTED] so that your
   message can get through to the mailing list cleanly


Re: [GENERAL] [pgsql-general] Daily digest v1.8030 (22 messages)

2008-03-31 Thread Rainer Bauer
Tom Lane wrote:

>"Brett Hoerner" <[EMAIL PROTECTED]> writes:
>> On Mon, Mar 31, 2008 at 10:54 AM, Colin Wetherbee <[EMAIL PROTECTED]> wrote:
>>> I received eight of these unwanted digests last night and early this
>>> morning, after never seeing them before.  Is this a new configuration
>>> change that I need to go personalize?
>
>> I was wondering the same thing, but don't see an option regarding digests...
>
>Did you look closely at the Received: history of the messages?

They turned up on usenet as well (see below). Seems like
<[EMAIL PROTECTED]> has a problem.

Rainer

Path: news.hub.org!postgresql.org!pgsql-general-owner+m130868
From: [EMAIL PROTECTED]
Newsgroups: pgsql.general
Subject: [pgsql-general] Daily digest v1.8030 (22 messages)
Date: Thu, 27 Mar 2008 05:56:10 -0300
Organization: Hub.Org Networking Services
Lines: 38
Sender: [EMAIL PROTECTED]
Message-ID: <[EMAIL PROTECTED]>
NNTP-Posting-Host: news.hub.org
Mime-Version: 1.0
Content-Type: text/plain
Content-Transfer-Encoding: 8bit
X-Trace: news.hub.org 1206832273 86042 200.46.204.72 (29 Mar 2008 23:11:13
GMT)
X-Complaints-To: [EMAIL PROTECTED]
NNTP-Posting-Date: Sat, 29 Mar 2008 23:11:13 + (UTC)
Content-Disposition: inline
X-Received: from hub.org (hub.org [200.46.204.220])
by news.hub.org (8.14.1/8.14.1) with ESMTP id m2TNBDuJ086034
for <[EMAIL PROTECTED]>; Sat, 29 Mar 2008 20:11:13 -0300
(ADT)
(envelope-from [EMAIL PROTECTED])
X-Received: from localhost (unknown [200.46.204.184])
by hub.org (Postfix) with ESMTP id 0121A1688661
for <[EMAIL PROTECTED]>; Sat, 29 Mar 2008 20:11:18 -0300
(ADT)
X-Received: from hub.org ([200.46.204.220])
 by localhost (mx1.hub.org [200.46.204.184]) (amavisd-maia, port 10024)
 with ESMTP id 61755-02 for <[EMAIL PROTECTED]>;
 Sat, 29 Mar 2008 20:11:10 -0300 (ADT)
X-Received: from postgresql.org (postgresql.org [200.46.204.71])
by hub.org (Postfix) with ESMTP id 8DB711688644
for <[EMAIL PROTECTED]>; Sat, 29 Mar 2008 20:11:17 -0300
(ADT)
X-Received: from localhost (unknown [200.46.204.183])
by postgresql.org (Postfix) with ESMTP id DD2202E0031
for <[EMAIL PROTECTED]>; Fri, 28 Mar 2008
15:43:10 -0300 (ADT)
X-Received: from postgresql.org ([200.46.204.71])
 by localhost (mx1.hub.org [200.46.204.183]) (amavisd-maia, port 10024)
 with ESMTP id 69425-02 for <[EMAIL PROTECTED]>;
 Fri, 28 Mar 2008 15:43:03 -0300 (ADT)
X-Greylist: from auto-whitelisted by SQLgrey-1.7.5
X-Received: from marte.gdl.cinvestav.mx (gdl.cinvestav.mx [148.247.21.3])
by postgresql.org (Postfix) with ESMTP id 77EE22E0030
for ; Fri, 28 Mar 2008 15:43:03 -0300
(ADT)
X-Received: by marte.gdl.cinvestav.mx (Postfix, from userid 0)
id 9F5A92080FE02; Fri, 28 Mar 2008 12:41:55 -0600 (CST)
X-Received: from maia-1.hub.org (maia-1.hub.org [200.46.204.191])
by marte.gdl.cinvestav.mx (Postfix) with ESMTP id AB7D02056B5CE
for <[EMAIL PROTECTED]>; Thu, 27 Mar 2008 03:13:48 -0600
(CST)
X-Received: from postgresql.org (postgresql.org [200.46.204.71])
by maia-1.hub.org (Postfix) with ESMTP id D0D25DB6A94;
Thu, 27 Mar 2008 06:13:44 -0300 (ADT)
X-Mailer: MIME-tools 5.420 (Entity 5.420)
X-To: pgsql-general@postgresql.org
X-Mailing-List: pgsql-general
X-Precedence: bulk
X-Cinvestav-CTS-MailScanner-Information: Please contact the ISP for more
information
X-MailScanner-ID: 9F5A92080FE02.C88E5
X-Cinvestav-CTS-MailScanner: Found to be clean
X-Cinvestav-CTS-MailScanner-From: [EMAIL PROTECTED]
X-Virus-Scanned: Maia Mailguard 1.0.1
X-Mailing-List: pgsql-general
X-List-Archive: 
X-List-Help: 
X-List-ID: 
X-List-Owner: 
X-List-Post: 
X-List-Subscribe: 
X-List-Unsubscribe:

X-Precedence: bulk
Xref: news.hub.org pgsql.general:58447

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] lc_messages 8.3.7

2009-06-03 Thread Rainer Bauer
Alvaro Herrera wrote:

>Hmm, it works fine for me (not Windows though):

The lc_messages setting is broken in 8.3 under Windows (not sure whether this
will be fixed in 8.4):


Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] c++ program to connect to postgre database

2009-07-09 Thread Rainer Bauer
John R Pierce wrote:

>ramon rhey serrano wrote:
>> Hi Sir John,
>>
>> Thanks for the links and reply.
>> I downloaded this "libpqxx-3.0" but i really don't know what to do 
>> with the file and where to put them. I'm still having hard time how to 
>> start the C++ program using Dev C++ as my IDE, what files do i need to 
>> install, what headers to use, how the basic flow of program should 
>> look like (maybe for a simple database connection). I already 
>> installed PostgreSQL version 1.8.4 in my PC (windows). I have basic 
>> understanding and background in C++ programming, but totally new to 
>> PostgreSQL database system.
>
>as I said earlier, I'd probably just use C API calls to libpq, and do my 
>own C++ wrappings. C++ libraries pretty much have to be compiled for 
>the specific compiler environment, so I'd have to assume you'd load the 
>libpqxx project source into your IDE, and build it.

The best C++ library I came across so far is SOCI:


Maybe this could be a good starting point for Ramon since he is familiar with
C++.

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Postgres memory question

2009-08-09 Thread Rainer Bauer
Kobus Wolvaardt wrote:

>We have software deployed on our network that need postgres, we have server
>that hosts the server and all worked fine until we crossed about 200 users.
>The application is written so that it makes a connection right at the start
>and keeps it alive for the duration of the app. The app is written in
>Delphi. The postgres server runs on a windows 2008 server with quad core cpu
>and 4 GB of ram.

Have you tried to increase Desktop Heap as described here:


Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] No download of Windows binaries without registering?

2009-08-26 Thread Rainer Bauer
Thomas Kellerer wrote:

>If one goes directly to http://www.enterprisedb.com/products/pgbindownload.do 
>this is not necessary (which is what I expect), but as far as I can tell, 
>there is no direct link to that page. 

You get there from the official PostgreSQL download page:


Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] How to change log file language?

2008-09-19 Thread Rainer Bauer
Hi all,

I installed 8.3.3 on an English WinXP. The database cluster was initialized
with server encoding UTF8 and the locale was set to 'German, Germany'.

Now all messages in the log and everywhere else are showing up in German (as
expected). However I want to see those messages in English. I tried to alter
lc_messages in the postgresql.conf file (to '', 'C' and 'English_United
States'), but this seems to have no effect (yes, I restarted the server).

Searching the archives, I found someone with a similar problem:


Could anybody tell me what I am doing wrong?

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] How to change log file language?

2008-09-19 Thread Rainer Bauer
"Leif B. Kristensen" wrote:

>I don't know how this is handled in Windows, but on a Linux computer you 
>can enter the directory /usr/local/share/locale/de/LC_MESSAGES/ and 
>just rename or delete the file psql.mo.

Thanks for the tip: after renaming the folder
C:\Program Files\PostgreSQL\8.3\share\locale\de
to "de_" I get the English texts. But I cannot believe that the language
cannot be altered after the cluster was initialized.

Any other suggestions?

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Tips on how to efficiently debugging PL/PGSQL

2008-10-23 Thread Rainer Bauer
Glyn Astill wrote:

>> From: [EMAIL PROTECTED] <[EMAIL PROTECTED]>
>> Just to seek some tips on how to efficiently debug PL/SQL.
>> 
>edb have a debugger that intigrates with pgadmin
>
>http://pgfoundry.org/projects/edb-debugger/

This debugger is integrated with pgAdminIII that is shipped with PostgreSQL
8.3.

Just right-click the desired function and choose an action from the "Debugging"
context menu.

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Cannot remember what to do to get the information_schema, pg_catalog etc showing.

2008-11-03 Thread Rainer Bauer
Johan Nel wrote:

>Did an installation of PostgreSQL on a new laptop and I just cannot 
>remember which .sql script to run to have the above schemas showing in 
>pgAdmin or is it something to do with a checkbox/setting during 
>installation?

File > Options... > Display > [x] Show System Objects in the treeview

After refreshing the treeview you find the information schema in 
[database] > Catalogs > ANSI (information_schema) > Catalog Objects

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Missing usenet messages. Was: A bit confused about Postgres Plus

2008-12-18 Thread Rainer Bauer
Hi all,

Thomas Kellerer wrote here


> [I'm reading this list through the newsserver at news.gmane.org,
> and several answers did not seem to make it to news.gmane.org
> but only to my private email]

I am reading the Postgres lists through the "official" newsserver
news.postgresql.org. But I have found the same problem: Some messages just
don't appear.

It seems like Alvaro's message was sent only via private mail, because it does
not show up in the mailing list archive.

I have configured the pgsql-performance mailing list so that I receive the
emails. And I see some messages not turning up on the news server (don't ask
me for exact numbers).

So is this a configuration problem or is there something else going wrong?

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Missing usenet messages. Was: A bit confused about Postgres Plus

2008-12-18 Thread Rainer Bauer
Alvaro Herrera wrote:

>Rainer Bauer wrote:
>> 
>> It seems like Alvaro's message was sent only via private mail, because it does
>> not show in the mailing list archive.
>
>It is on the archives, here:
>http://archives.postgresql.org/message-id/20081217122802.GA4453%40alvh.no-ip.org
>
>As Dave said, it was sent to pgsql-www, and I think the newsserver
>doesn't carry that group (which is a bug that we've asked Marc to fix,
>but as all things Marc, it takes quite a while).

Yeah, the www list is not available.

>> I have configured the pgsql-performance mailing list so that I receive the
>> emails. And I see some messages not turning up on the news server (don't ask
>> me for exact numbers).
>> 
>> So is this a configuration problem or is there something else going wrong?
>
>I don't think the news gateway is all that trustworthy nowadays.

That's a pity. For me it's not so critical if I miss a few messages, but I
could imagine Thomas and I are not the only ones using a newsreader program to
follow the discussions on the mailing lists.

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Usenet gateway not working

2009-01-16 Thread Rainer Bauer
Hello,

it seems like the Usenet gateway is down again (my last successful
contact with news.postgresql.org dates back 10 days).

Is this a known problem?

Rainer


-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


Re: [GENERAL] Space for pg_dump

2009-04-18 Thread Rainer Bauer
Greg Smith wrote:

>On Tue, 31 Mar 2009, Scott Marlowe wrote:
>
>> Sadly, there is no exact maths for such things.  If your database has
>> tons of indexes and such, it might be 20 or 100 times bigger on disk
>> than it will be during backup.  If it's all compressible text with few
>> indexes, it might be a 1:1 or so size.
>
>Since running an entire pgdump can take forever on a big database, what I 
>usually do here is start by running the disk usage query at 
>http://wiki.postgresql.org/wiki/Disk_Usage

Interesting. However, the query gives an error if the table name contains
upper case characters, like in my case "tblConnections":

ERROR:  relation "public.tblconnections" does not exist.

Replacing all occurrences of  by <'"' || relname || '"'> fixes the
error.
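Alternatively, here is a variant that sidesteps the quoting issue entirely (a
sketch; it assumes tables in the public schema and passes the pg_class OID
instead of a name to pg_total_relation_size, so mixed-case names need no
special handling):

```sql
-- List tables in the public schema by total on-disk size (table + indexes).
SELECT c.relname,
       pg_size_pretty(pg_total_relation_size(c.oid)) AS total_size
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public' AND c.relkind = 'r'
ORDER BY pg_total_relation_size(c.oid) DESC;
```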

Rainer

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general


[GENERAL] Community account

2009-04-18 Thread Rainer Bauer
Hello,

I signed up a couple of months ago but never received the confirmation email.

If I try to create a new account with the same email address but a different
handle, it is denied with this error message: "A user with that email already
exists".

When I click on the 'lost password' link and enter my email address, nothing
happens, i.e. no email arrives. In fact, I never received *any* email from the
community registration.

How should I proceed? Is there someone I can contact directly regarding this
matter?

Rainer



Re: [GENERAL] Community account

2009-04-18 Thread Rainer Bauer
Hello Scott,

Saturday, April 18, 2009, 4:21:29 PM, you wrote:

SM> On Sat, Apr 18, 2009 at 3:49 AM, Rainer Bauer  wrote:
>> Hello,
>>
>> I signed up a couple of months ago but never received the confirmation email.
>>
>> If I try to create a new account with the same email address, but a different
>> handle it is denied with this error message: "A user with that email already
>> exists".

SM> The mailing lists don't support user names, just email addresses.

Yes, I know. I was talking about a community account.

SM> Where exactly are you signing up?

Here: <http://www.postgresql.org/community/signup>

Rainer




Re: [GENERAL] Community account

2009-04-19 Thread Rainer Bauer
Hello Magnus,

Sunday, April 19, 2009, 11:05:49 AM, you wrote:

MH> I assume you are trying to register as inqu...@munnin.com?

Yep.

MH> So we are trying to deliver this to your domain, but appear to be
MH> repeatedly graylisted by the different MX machines. And when we try
MH> again, we're put back on the graylist again.

MH> We are trying to send from @wwwmaster.postgresql.org, which doesn't have
MH> an MX record, that's true. This is because it's an A record, which means
MH> it doesn't *need* an MX. So it appears your mailserver is misconfigured
MH> in this regard.

MH> You need to have the email system give us at least a chance to get off
MH> the graylist :-) Or you need to whitelist the wwwmaster.postgresql.org
MH> domain.

Thanks for the info. I have contacted my provider regarding this issue.

Rainer




Re: [GENERAL] Community account

2009-04-19 Thread Rainer Bauer
Hello Magnus,

Sunday, April 19, 2009, 11:05:49 AM, you wrote:

MH> You need to have the email system give us at least a chance to get off
MH> the graylist :-) Or you need to whitelist the wwwmaster.postgresql.org
MH> domain.

It was whitelisted and now works like a charm. Thanks, Magnus!

I suppose there is no way to display an error message to the user if
the email cannot be delivered?

Rainer

