Re: [GENERAL] Database takes up MUCH more disk space than it should

2012-01-22 Thread Dan Charrois
>> 
>> SELECT relname, pg_size_pretty(relpages::bigint * 8 *1024) AS size, CASE
>> WHEN relkind = 't' THEN (SELECT pgd.relname FROM pg_class pgd WHERE
>> pgd.relfilenode::text = SUBSTRING(pg.relname FROM 10)) ELSE (SELECT
>> pgc.relname FROM pg_class pgc WHERE pg.reltoastrelid = pgc.relfilenode)
>> END AS refrelname, relfilenode, relkind, reltuples::bigint, relpages FROM
>> pg_class pg ORDER BY relpages DESC;
> 
> If I follow the query above correctly, it is not getting the information you 
> think it is. In particular this part:
> 
> ...SELECT pgc.relname FROM pg_class pgc WHERE pg.reltoastrelid = 
> pgc.relfilenode) 
> 
> Per the docs:
> http://www.postgresql.org/docs/8.4/interactive/catalog-pg-class.html
> reltoastrelid = the OID of the TOAST table, not the relfilenode.
> When a table is created those numbers are the same, but they can diverge over 
> time.
> 
> I would do something like 
> select oid, relfilenode, relname from pg_class where relname = 
> 'pg_toast_101748';
> 
> This will get you the OID and also show if it differs from the relfilenode.
> 
> Then something like:
> select * from pg_class where relkind='r' and reltoastrelid=[oid from above]
> 
> This should show you if the TOAST table has been orphaned and, if not, what 
> table it is associated with.

Thank you Adrian.  I think you've found the trouble.  For most of the TOAST 
tables I have, oid = relfilenode, but not for that one.  I found the table 
whose reltoastrelid links to that huge TOAST table, and it makes some sense, 
since it is also the largest "regular" table (79 GB).

So perhaps there are no orphaned TOAST tables after all, now that I know who 
its parent is.  The database still takes up a lot more physical storage than 
I'd anticipated, but at least it appears that space can be accounted for.
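
For anyone else chasing this, the two lookups Adrian suggested can be
combined into one join on the OID (a sketch, using the TOAST table name from
his example):

SELECT c.oid, c.relname AS parent_table, t.relname AS toast_table,
       pg_size_pretty(pg_relation_size(t.oid)) AS toast_size
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE t.relname = 'pg_toast_101748';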

It's too bad \dt+ doesn't take into account the related TOAST table too - if 
it did, I would have expected that much disk space right from the get-go and 
never thought twice about it.  I suppose that's the danger of not learning 
enough about PostgreSQL administration before trying to troubleshoot a 
perceived problem that may not even have been a problem in the first place.  
Until a few days ago, I hadn't even heard of TOAST tables, and just presumed 
all the data was stuffed directly into the tables I created.  From what I've 
read about them since, they sound like a great idea - but I never anticipated 
them, or their effect on trying to sort out exactly where my data went.

Thanks a lot for shedding the light on this subject that I needed!

Dan
--
Syzygy Research & Technology
Box 83, Legal, AB  T0G 1L0 Canada
Phone: 780-961-2213




Re: [GENERAL] Database takes up MUCH more disk space than it should

2012-01-22 Thread Dan Charrois

On 2012-Jan-21, at 6:39 PM, Scott Marlowe wrote:

> On Sat, Jan 21, 2012 at 1:37 AM, Dan Charrois  wrote:
>> Hi everyone.  I'm currently in the situation of administering a rather large 
>> PostgreSQL database which for some reason seems to be even much larger than 
>> it should be.
>> 
>> I'm currently running version 8.4.5 - not the latest and greatest, I know - 
>> but this is a live database that would be problematic to take down to 
>> upgrade unless all else fails - especially considering its size if it does 
>> need to be rebuilt somehow.
>> 
>> Anyway, I'm no stranger to SQL, but new to PostgreSQL - all my SQL 
>> administration in the past has been with MySQL.  So I'm somewhat bumbling my 
>> way through administrative commands trying to solve this - please bear with 
>> me.
>> 
>> The size of the tables reported by \dt+ add up to around 120 GB.  The size 
>> of the indexes reported with \di+ adds up to around 15 GB.  This is pretty 
>> consistent with what I would expect the data to require.
>> 
>> The problem is, the disk usage of the pgsql directory where the data is kept 
>> (as reported by 'du') comes to 647 GB - significantly more than it should.  
>> select pg_database_size('mydatabase') confirms this, returning 690830939920.
>> 
>> Vacuuming the tables (full and otherwise) hasn't helped, but then 
>> considering how the database is used, I didn't really expect it to.  It's 
>> strictly a read-only database, with the exception of once a month when it is 
>> refreshed by loading new data into newly created tables, and once that is 
>> done, vacuum analyzing the new tables, dropping the old tables, then 
>> renaming the new ones to have the name of the old ones.  Vacuums never claim 
>> to recover any space, and the disk usage stays the same.
>> 
>> So how do I find out what's eating up all this extra space?
> 
> Real quick, if you run pg_database_size(name) for each db, including
> template1 and postgres, what do you get back?


Thanks for your reply, Scott.

My database: 697490323216
postgres: 5537796
template0: 5537796
template1: 5537796

pg_database_size('mydatabase') did return a value consistent with disk usage.  
But that value didn't jibe with what I expected from summing up the sizes I 
got from \dt+.
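
For anyone repeating this check, a single query gives the same per-database
breakdown (a sketch):

SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;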

It looks like a large TOAST table, not reported by \dt+, was the biggest 
culprit, but I thought it was orphaned.  Thanks to some help from Adrian 
Klaver, it looks like I was mistaken - it was in fact used by one of my 
tables.  So it looks like there wasn't really a problem at all - other than 
my data still taking up a lot more physical storage than I thought it did.  
I don't think there's much I can do about it, but that's much better than 
assuming it was related to a problem that needed fixing.

Thanks again!

Dan
--
Syzygy Research & Technology
Box 83, Legal, AB  T0G 1L0 Canada
Phone: 780-961-2213




Re: [GENERAL] Database takes up MUCH more disk space than it should

2012-01-22 Thread John R Pierce

On 01/22/12 12:32 AM, Dan Charrois wrote:

It looks like a large TOAST table, not reported by \dt+, was the biggest 
culprit, but I thought it was orphaned.  Thanks to some help from Adrian 
Klaver, it looks like I was mistaken - it was in fact used by one of my 
tables.  So it looks like there wasn't really a problem at all - other than 
my data still taking up a lot more physical storage than I thought it did.  
I don't think there's much I can do about it, but that's much better than 
assuming it was related to a problem that needed fixing.


well, there's a good possibility that clustering that table will cause a 
big chunk of that space to be freed up.   first, make sure you have no 
old 'idle in transaction' processes, then run CLUSTER tablename USING 
indexname; (probably the primary key, unless there's another index 
that's used more frequently).   this is a high-overhead operation that 
copies all the data in that table to new disk space ordered by the 
specified key, deletes the old files, and then rebuilds all the indexes.  
naturally, there's an exclusive lock on the table for the duration, so 
this should be done during a maintenance window with your application(s) 
shut down.



--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




[GENERAL] hash options

2012-01-22 Thread Little, Douglas
Hello,

I'm working on a data warehouse dimensionalization process where I need to 
hash a text string to use as the key.
I've implemented it with MD5.  It works fine; the problem I have is that the 
size of the MD5 output (32 hex characters) is often longer than the original 
string - thus not accomplishing what I want - space savings.

Does anybody have alternative hash function recommendations?
I looked at the options I knew of:
select length(encode('ar=514','hex')); -- 12
select length(decode('ar=514','base64')); -- 24
select length(DIGEST('ar=514', 'md5')) -- 16bytes
select length(DIGEST('ar=514', 'sha1')) -- 20bytes

The function is currently written in PL/pgSQL, but I'm considering switching 
to Python for broader library choice.



Source data is a delimited list of name/value pairs.  Lengths range from 0 to 
2500 bytes:
ar=514,cc=CA,ci=Montreal,cn=North+America,co=Sympatico,cs=Canada,nt=Xdsl,rc=QC,rs=Quebec,tp=High,tz=GMT%2D5

Thanks in advance
Doug Little

Sr. Data Warehouse Architect | Business Intelligence Architecture | Orbitz 
Worldwide
douglas.lit...@orbitz.com
orbitz.com | ebookers.com | hotelclub.com | cheaptickets.com | ratestogo.com | 
asiahotels.com


Re: [GENERAL] hash options

2012-01-22 Thread Chris Angelico
On Mon, Jan 23, 2012 at 2:59 AM, Little, Douglas
 wrote:
>
> I've implemented it with MD5.  It works fine; the problem I have is that the 
> size of the MD5 output (32 hex characters) is often longer than the original 
> string - thus not accomplishing what I want - space savings.

You can always use a truncated hash - for instance, take the first 6-8
hex digits of the MD5 or SHA1 hash. For human readability, that's
likely to be all you need (for instance, git references commits by
their SHA1 hashes, but you can work with just the first six digits
quite happily). Otherwise, can you provide more details on why you
need a hash, and why it wants to be shorter than the original?
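
In SQL that might look like this (a sketch, reusing the pgcrypto digest()
call from Doug's message):

-- first 8 hex digits (4 bytes) of the SHA1
SELECT substring(encode(digest('ar=514', 'sha1'), 'hex') for 8);

-- or keep it as 4 raw bytes
SELECT substring(digest('ar=514', 'sha1') from 1 for 4);

Just remember that a truncated hash collides much more readily, so verify
uniqueness before treating it as a key.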

Chris Angelico



Re: [GENERAL] Database takes up MUCH more disk space than it should

2012-01-22 Thread Tom Lane
Dan Charrois  writes:
> It's too bad \dt+ doesn't take into account the related TOAST table
> too - if it did, I would have expected that much disk space right from
> the get-go, and never thought twice about it.

FWIW, that's been changed as of 9.1.

regards, tom lane



Re: [GENERAL] Database takes up MUCH more disk space than it should

2012-01-22 Thread Adrian Klaver
On Sunday, January 22, 2012 12:26:22 am Dan Charrois wrote:

> 
> Thank you Adrian.  I think you've found the trouble.  For most of the TOAST
> tables I have, oid = relfilenode, but not for that one.  I found the table
> whose reltoastrelid links to that huge TOAST table, and it makes some sense,
> since it is also the largest "regular" table (79 GB).

The reason for that is found here:

http://www.postgresql.org/docs/9.0/interactive/storage-file-layout.html

"
Caution
Note that while a table's filenode often matches its OID, this is not 
necessarily 
the case; some operations, like TRUNCATE, REINDEX, CLUSTER and some forms of 
ALTER TABLE, can change the filenode while preserving the OID. Avoid assuming 
that filenode and table OID are the same. Also, for certain system catalogs 
including pg_class itself, pg_class.relfilenode contains zero. The actual 
filenode 
number of these catalogs is stored in a lower-level data structure, and can be 
obtained using the pg_relation_filenode() function.
"

> 
> So perhaps there are no orphaned TOAST tables after all, now that I know
> who its parent is.  The database still takes up a lot more physical storage
> than I'd anticipated, but at least it appears that space can be accounted
> for.
> 
> It's too bad \dt+ doesn't take into account the related TOAST table too -
> if it did, I would have expected that much disk space right from the
> get-go, and never thought twice about it. 

In pre-9.1 databases you can use:

"
pg_total_relation_size accepts the OID or name of a table or toast table, and 
returns the total on-disk space used for that table, including all associated 
indexes. This function is equivalent to pg_table_size + pg_indexes_size.

pg_table_size accepts the OID or name of a table and returns the disk space 
needed for that table, exclusive of indexes. (TOAST space, free space map, and 
visibility map are included.)
"

See details here:
http://www.postgresql.org/docs/9.0/interactive/functions-admin.html
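
For example, a quick per-table report (a sketch; this should also work on
8.4):

SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind = 'r'
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;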

> 
> Dan
> --
> Syzygy Research & Technology
> Box 83, Legal, AB  T0G 1L0 Canada
> Phone: 780-961-2213

-- 
Adrian Klaver
adrian.kla...@gmail.com



Re: [GENERAL] indexes no longer used after shutdown during reindexing

2012-01-22 Thread Tomas Vondra
On 20.1.2012 19:47, Matt Dew wrote:
> On 01/13/2012 02:49 PM, Tomas Vondra wrote:
>> On 13.1.2012 22:20, Tom Lane wrote:
>>> Matt Dew  writes:
 An interesting sidenote we realized.  the nice system shutdown script
 /etc/init.d/postgres doesn't actually wait for the db to be down, it
 just waits for pg_ctl to return.
>>>
>>> By default, "pg_ctl stop" does wait for the server to shut down ...
>>
>> Not really. It waits for up to 60 seconds and if the shutdown was not
>> successful (as there was a connected client), it prints a message to
>> the log
>>
>> pg_ctl: server does not shut down
>> HINT: The "-m fast" option immediately disconnects sessions
>> rather than
>>
>> and returns 1.
>>
>> If you really need to wait for shutdown, you need to add "-w" to the
>> command line, and possibly "-m fast" or "-m immediate".
>>
>> But even ignoring the return value should not cause corruption IMHO.
> 
> Thanks Tom and Tomas,
> I remember -w now, but I'd long forgotten about it.
> 
> If pg_ctl returns a 1 but the machine physically powers off, there is
> a chance of corruption though, right?  Postgres is trying to write
> stuff to disk and clean up and BAM, the power goes out.

There are various types of corruption. If you power off the system
before the database properly shuts down, the data files will be
inconsistent. But this should be fixed on the next database startup - the
database should detect that it was not shut down properly and perform a
recovery (replay the WAL logs).

So yes, the data would be corrupted but fixed on the next startup, and
thus should not cause issues like the one you describe.
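
For reference, a stop invocation that waits for completion (and gives up
after a timeout) might look like this - the data directory path is a
placeholder:

$ pg_ctl stop -D /path/to/data -m fast -w -t 60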

Tomas



[GENERAL] Question about (probably wrong) index scan cost for conditional indexes

2012-01-22 Thread Maxim Boguk
I'm not sure whether this is a bug or just the planner working that way.
PostgreSQL 9.1.2 on Linux.

But it seems that the index scan cost for very narrow/selective conditional
indexes is greatly overestimated, at least in some cases.

In my case I have a special conditional index like:

"news_dtime_in208section_active_key2" btree (dtime) WHERE status = 1 AND
class::text = 'Sports::News'::text AND sections && '{208}'::integer[]

And query:

db=# EXPLAIN ANALYZE select * from news  where  (status = 1)  and  (class =
'Sports::News')  and  (sections && '{208}')  order by dtime  limit 10;

QUERY PLAN
-
 Limit  (cost=0.00..26.38 rows=10 width=1262) (actual time=0.040..0.082
rows=10 loops=1)
   ->  Index Scan using news_dtime_in208section_active_key2 on news
(cost=0.00..1429.55 rows=542 width=1262) (actual time=0.038..0.073 rows=10
loops=1)
 Total runtime: 0.142 ms
(3 rows)

I see no reason why the cost of that query is that high... I think it should
be very close in cost to the equivalent query:

"news_pkey" PRIMARY KEY, btree (id)

db=# EXPLAIN ANALYZE select * from news  order by id limit 10;
QUERY PLAN
---
 Limit  (cost=0.00..0.33 rows=10 width=1262) (actual time=0.043..0.085
rows=10 loops=1)
   ->  Index Scan using news_pkey on news  (cost=0.00..25944.34 rows=775090
width=1262) (actual time=0.041..0.077 rows=10 loops=1)
 Total runtime: 0.147 ms
(3 rows)

(And if you compare the real execution times, they are the same, but the
costs differ by two orders of magnitude.)
No change to the cost settings has any effect on that difference.


That problem leads to switching to a very slow plan for medium limits:

db=# EXPLAIN ANALYZE select * from news  where  (status = 1)  and  (class =
'Sports::News')  and  (sections && '{208}')  order by dtime  limit 40;

QUERY PLAN

 Limit  (cost=91.97..92.07 rows=40 width=1262) (actual
time=630.865..630.889 rows=40 loops=1)
   ->  Sort  (cost=91.97..93.32 rows=542 width=1262) (actual
time=630.862..630.872 rows=40 loops=1)
 Sort Key: dtime
 Sort Method: top-N heapsort  Memory: 89kB
 ->  Bitmap Heap Scan on news  (cost=6.18..74.83 rows=542
width=1262) (actual time=145.816..412.254 rows=262432 loops=1)
   Recheck Cond: ((sections && '{208}'::integer[]) AND (status
= 1) AND ((class)::text = 'Sports::News'::text))
   ->  Bitmap Index Scan on news_sections_gin2_special
(cost=0.00..6.05 rows=542 width=0) (actual time=98.954..98.954 rows=262754
loops=1)
 Index Cond: (sections && '{208}'::integer[])
 Total runtime: 632.049 ms
(9 rows)

Kind regards,
Maksym

-- 
Maxim Boguk
Senior Postgresql DBA.

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com

LinkedIn profile: http://nz.linkedin.com/in/maximboguk
"If they can send one man to the moon... why can't they send them all?"

МойКруг: http://mboguk.moikrug.ru/
"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."


[GENERAL] Best way to create unique primary keys across schemas?

2012-01-22 Thread panam
Hi,

If I'd like to have generated primary keys ("numeric" style, no UUIDs) that
are unique across schemas, is the best option to allocate a fixed sequence
range (min, max) to the sequences of each schema?

Thanks
panam



Re: [GENERAL] Question about (probably wrong) index scan cost for conditional indexes

2012-01-22 Thread Tom Lane
Maxim Boguk  writes:
> But it seems that the index scan cost for very narrow/selective conditional
> indexes is greatly overestimated, at least in some cases.

I realized in connection with
http://archives.postgresql.org/pgsql-general/2012-01/msg00459.php
that btcostestimate is not correctly estimating numIndexTuples for
partial indexes.  But it's impossible to tell from this amount of
information whether you're seeing an effect of that, or something else.
Can you provide a self-contained test case?

regards, tom lane



Re: [GENERAL] hash options

2012-01-22 Thread David W Noon
On Sun, 22 Jan 2012 09:59:55 -0600, Little, Douglas wrote about
[GENERAL] hash options:

>I'm working on a data warehouse dimensionalization process where I
>need to hash a text string to use as the key. I've implemented it with
>MD5.  It works fine; the problem I have is that the size of the MD5
>output (32 hex characters) is often longer than the original string -
>thus not accomplishing what I want - space savings.
>
>Does anybody have alternative hash function recommendations?

Try CRC32, possibly augmented by a CRC16 in a separate attribute.

I have CRC functions for PostgreSQL, written in C, and will make them
available to anybody who wants them.
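
PostgreSQL itself does not ship a CRC32 SQL function, but as a rough
stand-in for a 4-byte integer hash key there is the internal hashtext()
function (a sketch - hashtext() is undocumented and its output may change
between major versions, so weigh that before persisting it as a key):

SELECT hashtext('ar=514');  -- returns a 32-bit integer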
-- 
Regards,

Dave  [RLU #314465]
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
dwn...@ntlworld.com (David W Noon)
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*






Re: [GENERAL] Question about (probably wrong) index scan cost for conditional indexes

2012-01-22 Thread Maxim Boguk
On Mon, Jan 23, 2012 at 11:28 AM, Tom Lane  wrote:

> Maxim Boguk  writes:
> > But it seems that the index scan cost for very narrow/selective conditional
> > indexes is greatly overestimated, at least in some cases.
>
> I realized in connection with
> http://archives.postgresql.org/pgsql-general/2012-01/msg00459.php
> that btcostestimate is not correctly estimating numIndexTuples for
> partial indexes.  But it's impossible to tell from this amount of
> information whether you're seeing an effect of that, or something else.
> Can you provide a self-contained test case?
>
>regards, tom lane
>

Probably the simplest test case:

set random_page_cost to 4;
set seq_page_cost to 1;
drop table  if exists test;
CREATE TABLE test (id integer primary key, value1 float, value2 float,
value3 float, value4 float);
INSERT into test select id, random() as value1, random() as value2, random()
as value3, random() as value4 from generate_series(1,1000000) as g(id);
CREATE INDEX test_special_key on test(value1) where value2*2<0.01 and
value3*2<0.01 and value4*2<0.01;
ANALYZE test;

postgres=# EXPLAIN ANALYZE select * from test order by id limit 100;
QUERY PLAN
---
 Limit  (cost=0.00..3.43 rows=100 width=36) (actual time=0.042..0.170
rows=100 loops=1)
   ->  Index Scan using test_pkey on test  (cost=0.00..34317.36
rows=1000000 width=36) (actual time=0.040..0.108 rows=100 loops=1)
 Total runtime: 0.243 ms
(3 rows)

vs

postgres=# EXPLAIN ANALYZE select * from test where value2*2<0.01 and
value3*2<0.01 and value4*2<0.01 order by value1 limit 100;
  QUERY PLAN
--
 Limit  (cost=0.00..92.52 rows=100 width=36) (actual time=0.072..0.072
rows=0 loops=1)
   ->  Index Scan using test_special_key on test  (cost=0.00..34264.97
rows=37037 width=36) (actual time=0.070..0.070 rows=0 loops=1)
 Total runtime: 0.113 ms
(3 rows)

cost difference:
(cost=0.00..3.43 rows=100 width=36)
vs
(cost=0.00..92.52 rows=100 width=36)

The actual speed (and theoretical performance) is almost the same.

The more selective the conditions added to the conditional index, the worse
the situation gets with the wrong costing.

Kind Regards,
Maksym

-- 
Maxim Boguk
Senior Postgresql DBA.

Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678

Skype: maxim.boguk
Jabber: maxim.bo...@gmail.com

LinkedIn profile: http://nz.linkedin.com/in/maximboguk
"If they can send one man to the moon... why can't they send them all?"

МойКруг: http://mboguk.moikrug.ru/
"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."


Re: [GENERAL] PGbouncer for Windows 2008

2012-01-22 Thread Edison So
Thank you for the reply.

I downloaded the 64-bit version and extracted it.  Unfortunately, it seems
that the pgbouncer.exe file needs to be rebuilt (i.e. recompiled) according
to the following statements from the README file:


-
Building for WIN32
--

At the moment only build env tested is MINGW32 / MSYS.  Cygwin
and Visual $ANYTHING are untested.  Libevent 2.x is required
for DNS hostname lookup.

Then do the usual:

$ ./configure ...
$ make

If cross-compiling from Unix:

$ ./configure --host=i586-mingw32msvc ...

Running on WIN32


Running from command-line goes as usual, except -d (daemonize),
-R (reboot) and -u (switch user) switches will not work.

To run pgbouncer as a Windows service, you need to configure
`service_name` parameter to set name for service.  Then:

$ pgbouncer -regservice config.ini

To uninstall service:

$ pgbouncer -unregservice config.ini

To use Windows Event Log, set "syslog = 1" in config file.
But before you need to register pgbevent.dll:

$ regsvr32 pgbevent.dll

To unregister it, do `regsvr32 /u pgbevent.dll`.
-

I do not have an SDE environment to rebuild the .exe file.  Any other ideas?

Cheers,



On Sat, Jan 21, 2012 at 8:40 AM, tuanhoanganh  wrote:

> http://winpg.jp/~saito/pgbouncer/
>
>
> On Fri, Jan 20, 2012 at 11:01 PM, Edison So  wrote:
>
>> Hello,
>>
>> Can anyone please tell me where I can find the PGbouncer executable and
>> tutorial for Windows 2008? I found one but need to recompile on Windows. I
>> also need intuitive instructions to configure it too.
>>
>> Thanks in advance.
>> --
>> Edison
>>
>
>


-- 
Edison


Re: [GENERAL] PGbouncer for Windows 2008

2012-01-22 Thread John R Pierce

On 01/22/12 4:55 PM, Edison So wrote:


I downloaded the 64-bit version and extracted it.  Unfortunately, it seems 
that the pgbouncer.exe file needs to be rebuilt (i.e. recompiled) according 
to the following statements from the README file:


those statements you reference may well be from the generic pgbouncer 
source README.  if someone has already built and tested a 64-bit binary, 
what's the problem?




--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast




Re: [GENERAL] Best way to create unique primary keys across schemas?

2012-01-22 Thread Scott Marlowe
On Sun, Jan 22, 2012 at 5:19 PM, panam  wrote:
> Hi,
>
> If I'd like to have generated primary keys ("numeric" style, no UUIDs) that
> are unique across schemas, is the best option to allocate a fixed sequence
> range (min, max) to the sequences of each schema?

That's how I do it.  If you use a bigserial / bigint for it, then the
space is usually plenty big for each schema.
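
A minimal sketch of that approach (the schema, table, and range boundaries
here are all hypothetical):

CREATE SCHEMA tenant_a;
CREATE SCHEMA tenant_b;

-- tenant_a owns ids 1 .. 999999999999
CREATE SEQUENCE tenant_a.item_id_seq
  MINVALUE 1 MAXVALUE 999999999999 NO CYCLE;

-- tenant_b owns ids 1000000000000 .. 1999999999999
CREATE SEQUENCE tenant_b.item_id_seq
  START 1000000000000
  MINVALUE 1000000000000 MAXVALUE 1999999999999 NO CYCLE;

CREATE TABLE tenant_a.item (
  id bigint PRIMARY KEY DEFAULT nextval('tenant_a.item_id_seq'),
  payload text
);

With NO CYCLE, a sequence that exhausts its range raises an error instead of
silently wrapping into another schema's range.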



Re: [GENERAL] "could not accept SSPI security context"

2012-01-22 Thread Ahmed Shinwari
Hi,

In my email, I mistakenly assumed that the next version would be 2.0.12.0,
which was not. My bad.

I checked the source and confirmed that the 2.0.11.0 has the bug, and the
immediate next version (2.0.11.91) has the fix. You can use the version
2.0.11.92 ,
which is the latest stable release.


-Ahmed


On Sat, Jan 21, 2012 at 7:31 AM, G_Hosa_Phat [via PostgreSQL] <
ml-node+s1045698n5162113...@n5.nabble.com> wrote:

>  Brar Piening wrote
> Just a guess: You don't have something like "host all all 127.0.0.1/32
> sspi" in your pg_hba.conf do you?
>
> Regards,
>
> Brar
>
> I don't have that set.  Here's a sample from my pg_hba (slightly redacted
> to obfuscate our internal network address scheme).
>
> # TYPE DATABASE USER CIDR-ADDRESS METHOD
> # -
> #IPv4 LOCAL CONNECTIONS
> # -
> # Connections made from the server computer itself are only allowed if
> # the user is a member of the Developers group, or a Super User.
> # -
> host all pgsuper 127.0.0.1/32 md5
> host all +"ITDept" 127.0.0.1/32 sspi
> host all all 127.0.0.1/32 reject
> # -
> #   IPv4 INTRANET CONNECTIONS
> # -
> # If the IP address from which the request comes indicates that the
> # user is on the Courtesy network and is physically located in one of
> # our offices (Oklahoma City or Tulsa), use sspi authentication to
> # validate the users credentials against Courtesy’s Active Directory.
> # -
> # Internal Network
> # -
> host    all    pgsuper    172.16.10.0/24   md5
> host    all    +"Laptop"  172.16.10.50/32  ldap
> ldapserver=ADSERVERNAME ldapprefix="MYDOMAIN\"
> host    all    +"ITDept"  172.16.10.0/24   sspi
> host    appdb  +"Users"   172.16.10.0/24   sspi
> # -
> # Deny connection attempts from any source not explicitly identified
> # in the rules above.
> # -
> host all all 0.0.0.0/0 reject
>
>
> # IPv6 local connections:
> host all pgsuper ::1/128 md5
> host all +"ITDept" ::1/128 sspi
> host all all ::1/128 reject
>
> There are some specific requirements addressed in the configuration file,
> and I'd love to ask some more questions about how to implement some of
> them, but those aren't in the scope of this thread.  On this topic,
> however, this configuration correctly uses SSPI authentication when I
> connect to the database through pgAdmin (I'm a member of the "ITDept"
> group), but not when I'm testing my VB.NET application; it fails during
> authentication with the error from the thread title.
>