Re: [GENERAL] How large can a PostgreSQL database get?

2013-04-17 Thread Aleksey Tsalolikhin
or even break things into silos. Oracle isn't magic pixie dust that suddenly gets hardware with 250MB/s seq read arrays to read at 1GB/s, etc. With Oracle, partitioning is easier, and everything else on the freaking planet is harder. On Wed, Apr 17, 2013 at 8:1

[GENERAL] How large can a PostgreSQL database get?

2013-04-17 Thread Aleksey Tsalolikhin
Hi. I was promoting PostgreSQL to an AIX/Oracle shop yesterday; they are looking to switch to open source to cut their licensing costs, and I was asked how large a database PostgreSQL supports. Is there an upper bound on database size and, if so, what is it? Aleksey Tsalolikhin

Re: [GENERAL] Need advice on best system to choose

2013-03-24 Thread Aleksey Tsalolikhin
Hi, Kenroy. Can you make a test suite so that you could run a performance test on each platform? I see you will have different hardware and operating systems. Best, Aleksey On Sun, Mar 24, 2013 at 11:40 AM, Kenroy Bennett wrote: > > > On an hourly basis 13 tables with number of columns between

Re: [GENERAL] How to mix psql commands and SQL commands on the shell command line?

2012-12-07 Thread Aleksey Tsalolikhin
Thank you very much, Adrian and Pavel. And I really appreciate the documentation reference, Adrian! Yours fondly, Aleksey -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general

[GENERAL] How to mix psql commands and SQL commands on the shell command line?

2012-12-07 Thread Aleksey Tsalolikhin
How can I mix psql commands and SQL commands on the shell command line, please?
$ psql
psql (9.2.1)
Type "help" for help.
ddcKeyGen=> \timing \\ select count(*) from auth_users;
Timing is on.
 count
---
 276
(1 row)
Time: 1.730 ms
$ psql -c "\timing \\ select count(*) from auth_users;" Ti
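One workaround for older psql versions (a sketch; `auth_users` is the table from the session above): put the meta-command and the SQL into a script and run it with `psql -f`, since `\timing` is interpreted by psql itself, not by the server, and may not combine with SQL inside a single `-c` string.

```sql
-- timed_count.sql: run with  psql -f timed_count.sql
-- \timing is a client-side psql meta-command, so it belongs in a script
-- or on stdin rather than inside a server-side SQL string.
\timing
SELECT count(*) FROM auth_users;
```

Piping works the same way: `printf '\\timing\nSELECT count(*) FROM auth_users;\n' | psql`.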

Re: [GENERAL] C locale versus en_US.UTF8. (Was: String comparision in PostgreSQL)

2012-09-21 Thread Aleksey Tsalolikhin
On Sun, Sep 2, 2012 at 10:08 PM, Peter Eisentraut wrote: > On Wed, 2012-08-29 at 10:31 -0700, Aleksey Tsalolikhin wrote: >> What is the difference between C and en_US.UTF8, please? > > There are many differences, but here is a simple one: > > $ (echo a; echo A; echo b; ec

Re: [GENERAL] Why do I have holes in my pages?

2012-09-21 Thread Aleksey Tsalolikhin
On Fri, Sep 21, 2012 at 12:45 PM, Jeff Janes wrote: > On Fri, Sep 21, 2012 at 11:41 AM, Victor Yegorov wrote: >> >> It seems that this also matches your explanation, correct me if I'm wrong. > > In general, doing "select ctid..." is a poor way of figuring out > where the space in your database i

Re: [GENERAL] Why do I have holes in my pages?

2012-09-20 Thread Aleksey Tsalolikhin
On Thu, Sep 20, 2012 at 1:53 PM, John R Pierce wrote: > On 09/20/12 1:34 PM, Aleksey Tsalolikhin wrote: >> >> Right on. I got that out of my pgstatspack report. >> >> \l+ in psql tells me the same thing - 400 GB > > > it might be interesting to see the output

Re: [GENERAL] Why do I have holes in my pages?

2012-09-20 Thread Aleksey Tsalolikhin
On Thu, Sep 20, 2012 at 12:34 PM, Bill Moran wrote: > In response to Aleksey Tsalolikhin : >> >> Current DB size is 400 GB and it takes up 2.7 TB (on a 6.6 TB filesystem). > > I expect that the first thing that others are going to ask > is "what is telling you that

[GENERAL] Why do I have holes in my pages?

2012-09-20 Thread Aleksey Tsalolikhin
Why do I have holes in my pages? Postgres 8.4.12 "select ctid from big_table" on my master shows that pages have "holes" in them. Here is an example for page 431665: (431665,2) (431665,5) (431665,8) (431665,11) (431665,14) (431665,17) (431665,20) (431665,23) Current DB size is 400 GB and
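Assuming the pgstattuple contrib module is available, it reports dead-tuple and free-space percentages directly, which is a more direct measure of such holes than scanning ctids (a sketch; `big_table` is the table from the post):

```sql
-- On 8.4, load the contrib script first (e.g. share/contrib/pgstattuple.sql);
-- on 9.1+ this would be CREATE EXTENSION pgstattuple.
SELECT table_len,           -- physical size in bytes
       dead_tuple_percent,  -- space held by dead rows (bloat)
       free_percent         -- empty space inside pages (the "holes")
FROM pgstattuple('big_table');
```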

Re: [GENERAL] postgres process got stuck in "notify interrupt waiting" status

2012-09-05 Thread Aleksey Tsalolikhin
On Wed, Sep 5, 2012 at 10:03 AM, Tom Lane wrote: > > That sounded a bit fishy to me too. But unless you can reproduce it in > something newer than 8.4.x, nobody's likely to take much of an interest. > The LISTEN/NOTIFY infrastructure got completely rewritten in 9.0, so > any bugs in the legacy ve

Re: [GENERAL] postgres process got stuck in "notify interrupt waiting" status

2012-09-05 Thread Aleksey Tsalolikhin
Got it, thanks, Kevin, Tom. So what about this process that was in "notify interrupt waiting waiting" status after I SIGTERM'ed it. Is the double "waiting" expected? Aleksey

Re: [GENERAL] postgres process got stuck in "notify interrupt waiting" status

2012-09-05 Thread Aleksey Tsalolikhin
On Wed, Sep 5, 2012 at 7:38 AM, Tom Lane wrote: > Having said that, a kill -9 on an individual backend (*not* the > postmaster) should be safe enough, if you don't mind the fact that > it'll kill all your other sessions too. > Got it, thanks. Why will it kill all your other sessions too? Isn't

Re: [GENERAL] Are there any options to parallelize queries?

2012-09-04 Thread Aleksey Tsalolikhin
Hi, Seref. You might want to take a look at Stado: http://www.slideshare.net/jim_mlodgenski/scaling-postresql-with-stado Best, -at

Re: [GENERAL] postgres process got stuck in "notify interrupt waiting" status

2012-09-04 Thread Aleksey Tsalolikhin
On Tue, Sep 4, 2012 at 7:21 PM, John R Pierce wrote: > On 09/04/12 7:09 PM, Aleksey Tsalolikhin wrote: >> >> BTW, after I signalled TERM, the process status changed from >> >> notify interrupt waiting >> >> to >> >> notify interrupt waiti

Re: [GENERAL] postgres process got stuck in "notify interrupt waiting" status

2012-09-04 Thread Aleksey Tsalolikhin
BTW, after I signalled TERM, the process status changed from notify interrupt waiting to notify interrupt waiting waiting which I thought looked kind of odd. Then I signalled KILL. Aleksey On Tue, Sep 4, 2012 at 6:21 PM, Aleksey Tsalolikhin wrote: > Hi. > > We use LISTEN/NOTIFY qu

[GENERAL] postgres process got stuck in "notify interrupt waiting" status

2012-09-04 Thread Aleksey Tsalolikhin
Hi. We use LISTEN/NOTIFY quite a bit but today something unusual (bad) happened. Number of processes waiting for a lock just started to go up up up. I finally found the object being locked was pg_listener which RhodiumToad on IRC kindly informed happens during LISTEN/NOTIFY. The process that ha

[GENERAL] C locale versus en_US.UTF8. (Was: String comparision in PostgreSQL)

2012-08-29 Thread Aleksey Tsalolikhin
On Wed, Aug 29, 2012 at 9:45 AM, Merlin Moncure wrote: > citext unfortunately doesn't allow for index optimization of LIKE > queries, which IMNSHO defeats the whole purpose. to the best way > remains to use lower() ... > this will be index optimized and fast as long as you specified C > locale fo

Re: [GENERAL] how to amend SQL standard to add comments?

2012-05-09 Thread Aleksey Tsalolikhin
On Sun, May 6, 2012 at 10:49 AM, Thomas Kellerer wrote: > You can use multi-line comments with /* .. */ to send this information to the server: > SELECT /* Query Author: Bob Programmer. Purpose: Pull the student ID number, we'll need it to enroll the student for classes */

[GENERAL] how to amend SQL standard to add comments?

2012-05-06 Thread Aleksey Tsalolikhin
g code is located. I'd like to propose such capability be added to the SQL standard... is anybody on this list involved with the SQL standard? What do you think about the idea? Best regards, Aleksey Tsalolikhin

[GENERAL] nice'ing the postgres COPY backend process to make pg_dumps run more "softly"

2012-03-18 Thread Aleksey Tsalolikhin
Hi. When pg_dump runs, our application becomes inoperative (too slow). I was going to ask if nice'ing the postgres backend process that handles the COPY would help but I just realized probably the pg_dump takes out locks when it runs and nice'ing it would just make it run longer... However the m

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-16 Thread Aleksey Tsalolikhin
On Fri, Mar 16, 2012 at 2:03 PM, Steve Crawford wrote: > > > I *think* you can get away with only sufficient free space to store the > *new* table and indexes Yeah; I don't have that much free space. Just 30 GB short. :( > Depending on your schema and which tables are using space, you might b

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-16 Thread Aleksey Tsalolikhin
On Thu, Mar 15, 2012 at 6:43 AM, Aleksey Tsalolikhin wrote: > Is there any way to consolidate the pages on the slave without taking > replication offline? Filip Rembiałkowski suggested: maybe CLUSTER? Greg Williamson suggested: pg_reorg Thank you, Filip and Greg. They would both wor

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-14 Thread Aleksey Tsalolikhin
On Wed, Mar 14, 2012 at 9:57 PM, Scott Marlowe wrote: > Are you sure you're checking the toast table that goes with whatever > parent table? Yep. I find out the relation id of the TOAST table: "select reltoastrelid from pg_class where relname = 'parent_table_name';" Find out the relation name
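The two lookups described above can be collapsed into one query (a sketch using the placeholder table name from the post):

```sql
-- Resolve the TOAST relation for a table and report its on-disk size.
SELECT reltoastrelid::regclass AS toast_table,
       pg_size_pretty(pg_relation_size(reltoastrelid)) AS toast_size
FROM pg_class
WHERE relname = 'parent_table_name';
```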

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-14 Thread Aleksey Tsalolikhin
On Wed, Mar 14, 2012 at 8:06 PM, Aleksey Tsalolikhin wrote: > Executive summary:  Why would the TOAST table on the slave have 4x the > page count of the master? Hypothesis: the pages of the TOAST table on the slave have more unused space in them than the pages of the TOAST table on the

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-14 Thread Aleksey Tsalolikhin
Executive summary: Why would the TOAST table on the slave have 4x the page count of the master? Is there a way to compact it if I don't have enough disk space to duplicate the table? How do I prevent this situation from recurring? On Wed, Mar 14, 2012 at 7:38 PM, Scott Marlowe wrote: > > The g

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-14 Thread Aleksey Tsalolikhin
On Tue, Mar 13, 2012 at 5:14 AM, Vick Khera wrote: > I'll bet what happened was postgres re-wrote your table for you, > effectively doing a compaction.  You can get a similar effect by doing > an alter table and "changing" an INTEGER field to be INTEGER. > Postgres does not optimize that to a no-op,

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-11 Thread Aleksey Tsalolikhin
the size of the bigtable from production... if anybody has any other suggestions, I am all ears. Yours very truly, Aleksey On 3/8/12, Scott Marlowe wrote: > On Thu, Mar 8, 2012 at 1:10 PM, Aleksey Tsalolikhin > wrote: >>> On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-08 Thread Aleksey Tsalolikhin
> On Thu, Mar 8, 2012 at 12:10 PM, Aleksey Tsalolikhin > wrote: >> On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin >> wrote: >>>  We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x >>> >>> >>>  My biggest table

Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-08 Thread Aleksey Tsalolikhin
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin wrote: >  We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x > > >  My biggest table measures 154 GB on the origin, and 533 GB on >  the slave. > >  Why is my slave bigger than my master?  How can I co

Re: [GENERAL] Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Aleksey Tsalolikhin
On Wed, Mar 7, 2012 at 8:12 PM, Joshua D. Drake wrote: > >> "check_postgres --action=bloat" returns OK [after VACUUM FULL].  So it's not >> bloat. >> What else could it be? > > I would have to double check but I think > check_postgres --action=bloat only checks for dead space, not usable space, >

[GENERAL] Re: [Slony1-general] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Aleksey Tsalolikhin
Dear Joshua, You wrote: > Try disabling replication on that table and clustering the table and then > re-enabling > replication. ... > Alternatively you could disable replication on that table, truncate the > table, and then > re-enable replication for that table. A concern is that i

[GENERAL] Re: A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-07 Thread Aleksey Tsalolikhin
On Tue, Mar 6, 2012 at 7:05 PM, Aleksey Tsalolikhin wrote: >  We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x > >  The origin database "data/base" directory is 197 GB in size. > >  The slave database "data/base" directory is 562

Re: [GENERAL] phantom long-running query (check_postgres.pl reports a long-running query, but the database logs don't show it)

2012-03-07 Thread Aleksey Tsalolikhin
On Thu, Feb 9, 2012 at 12:57 AM, Achilleas Mantzios wrote: > On Πεμ 09 Φεβ 2012 02:32:37 Aleksey Tsalolikhin wrote: >> I got an alert from check_postgres.pl today on a long-running query on >> our production database, but our PostgreSQL 8.4.9 server log, which is >> configure

[GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?

2012-03-06 Thread Aleksey Tsalolikhin
We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x The origin database "data/base" directory is 197 GB in size. The slave database "data/base" directory is 562 GB in size and is over 75% filesystem utilization which has set off the "disk free" siren. My biggest table* measures

[GENERAL] phantom long-running query (check_postgres.pl reports a long-running query, but the database logs don't show it)

2012-02-08 Thread Aleksey Tsalolikhin
I got an alert from check_postgres.pl today on a long-running query on our production database, but our PostgreSQL 8.4.9 server log, which is configured to log queries over 5 seconds long ("log_min_duration_statement = 5000") does not show the query. check_postgres.pl showed: Date/Time: Wed Feb 8

[GENERAL] pg_dump does not include triggers - should it?

2011-11-16 Thread Aleksey Tsalolikhin
Hi, We are using Postgres 8.4.8 and it is a very fine database. We just noticed our pg_dumps (generated with "pg_dump -Fc -n public $our_db_name") do not include triggers on tables in database $our_db_name. Should they? (We'd like them to; we are counting on pg_dump to backup all our data, in

[GENERAL] multi-master replication (Was: Has Pg 9.1.0 been released today?)

2011-09-15 Thread Aleksey Tsalolikhin
On Tue, Sep 13, 2011 at 6:55 PM, Joshua D. Drake wrote: > >> On 09/13/2011 04:01 AM, Aleksey Tsalolikhin wrote: >>> >>> Now that you have streaming replication both async and sync, are you >>> working on multi-master replication? *excited* Or what's th

Re: [GENERAL] Has Pg 9.1.0 been released today?

2011-09-12 Thread Aleksey Tsalolikhin
Congratulations on the release of 9.1.0! Lots of great features, I for one can't wait to try out unlogged tables, that should help a lot in our environment. Now that you have streaming replication both async and sync, are you working on multi-master replication? *excited* Or what's the roadmap?

[GENERAL] How did I get 8 Exclusive locks on the same table? And how many locks is too many?

2011-06-29 Thread Aleksey Tsalolikhin
Hi, We use the fine Bucardo check_postgres Nagios plugin, and it reported a "CRITICAL" level spike in database locks (171 locks). I looked closely at my database logs and found an even bigger spike just a few minutes earlier (208 locks). I saw 8 EXCLUSIVE locks on table X. All of these queri

[GENERAL] How can you have an Exclusive Lock on nothing? please help me understand this lock monitoring query output

2011-04-14 Thread Aleksey Tsalolikhin
Hi. I use the following query (from http://wiki.postgresql.org/wiki/Lock_Monitoring) to monitor locks; and I've got an ExclusiveLock that does not have a relation name associated with it. What is locked with the Exclusive Lock in this case, please? (it's between "d" and "e" tables below) psql -
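A lock listing of the kind the wiki page describes can be sketched as follows (8.4-era column names `procpid` and `current_query`). An ExclusiveLock with no relation is typically a `transactionid` lock: every transaction holds an exclusive lock on its own XID, and a process waiting on another transaction's XID shows up the same way.

```sql
-- Join pg_locks to pg_stat_activity; rows with relation IS NULL and
-- locktype = 'transactionid' are locks on transaction IDs, not on tables.
SELECT l.locktype, l.relation::regclass AS relation, l.transactionid,
       l.granted, a.procpid, a.current_query
FROM pg_locks l
LEFT JOIN pg_stat_activity a ON a.procpid = l.pid
ORDER BY l.granted, l.pid;
```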

Re: [GENERAL] autovacuum issue after upgrade to 9.0.1

2011-03-16 Thread Aleksey Tsalolikhin
Dear George, Do you see this issue on 9.0.3, the current and the recommended 9.x version? Best, Aleksey On Tue, Mar 15, 2011 at 11:38 AM, George Woodring wrote: > We recently upgraded  from 8.3.something to 9.0.1.  With 9.0.1, we have a > huge spike in vacuums every 8 days only on one of our

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-07 Thread Aleksey Tsalolikhin
On Fri, Mar 4, 2011 at 7:19 PM, Adrian Klaver wrote: > On Friday, March 04, 2011 5:11:04 pm Aleksey Tsalolikhin wrote: >> On Fri, Mar 4, 2011 at 4:45 PM, Adrian Klaver >> wrote: >> > On Friday, March 04, 2011 2:03:23 pm Aleksey Tsalolikhin wrote: >> >> On

Re: [GENERAL] Web Hosting

2011-03-07 Thread Aleksey Tsalolikhin
On Sat, Mar 5, 2011 at 5:12 PM, Ogden wrote: > > On Mar 5, 2011, at 7:07 PM, Bret Fledderjohn wrote: > >  I am using A2 Hosting (www.a2hosting.com ) which offers 8.4...  They are > inexpensive and so far reliable. > > > Wow, that's super cheap. Is there some catch - $5.57 / month for unlimited > e

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-04 Thread Aleksey Tsalolikhin
On Fri, Mar 4, 2011 at 4:45 PM, Adrian Klaver wrote: > On Friday, March 04, 2011 2:03:23 pm Aleksey Tsalolikhin wrote: >> On Fri, Mar 4, 2011 at 7:53 AM, Adrian Klaver >> wrote: >> >What is the data being stored in the table? >> >> For the main part, it

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-04 Thread Aleksey Tsalolikhin
On Fri, Mar 4, 2011 at 7:53 AM, Adrian Klaver wrote: > > Weird. The pgstattuple data shows that the tables are essentially the same, > the > only difference being the dead tuples, as expected, on the production table. > The > TOAST size information shows approximately a doubling in size of the T

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-03-03 Thread Aleksey Tsalolikhin
On Tue, Mar 1, 2011 at 7:24 AM, Tom Lane wrote: > Adrian Klaver writes: >> Looks like the TOAST compression is not working on the second machine. Not >> sure >> how that could come to be. Further investigation underway:) > > Somebody carelessly messed with the per-column SET STORAGE settings, >

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-28 Thread Aleksey Tsalolikhin
On Sun, Feb 27, 2011 at 2:52 AM, Alban Hertroys wrote: Thank you for your kind replies. > I noticed in your table definition that you seem to store timestamps in > text-fields. Restoring those from text-fields shouldn't make any difference, > but perhaps your locales are set up differently betw

Re: [GENERAL] pg_dump makes our system unusable - any way to pg_dump in the middle of the day? (postgres 8.4.4)

2011-02-27 Thread Aleksey Tsalolikhin
Our disk service times and % utilization (according to sar -d) while running pg_dump are low. For example:
01:23:08 AM  DEV  tps      rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz  await  svctm  %util
01:23:09 AM  sda  1473.00  0.00      98128.00  66.62     0.41      0.28   0.03

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-27 Thread Aleksey Tsalolikhin
OK, just to recap: database A has a table that is 50 GB in size (according to: SELECT relname as "Table", pg_size_pretty(pg_total_relation_size(relid)) As "Size" from pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC; ) I pg_dump -Fc this table, which gives m
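To see which component of that 50 GB changed across the dump/restore, the total can be split into heap versus TOAST-plus-indexes (a sketch; the table name is a placeholder):

```sql
SELECT pg_size_pretty(pg_relation_size('big_table'))   AS heap_only,
       pg_size_pretty(pg_total_relation_size('big_table')
                      - pg_relation_size('big_table')) AS toast_and_indexes;
```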

[GENERAL] pg_dump makes our system unusable - any way to pg_dump in the middle of the day? (postgres 8.4.4)

2011-02-24 Thread Aleksey Tsalolikhin
I read in the pg_dump man page that pg_dump does not block other users accessing the database (readers or writers). In practice, if I pg_dump our 100 GB database, our application (half Web front end, half OLTP) slows to a crawl at a certain point and the Web interface becomes unrespon

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Aleksey Tsalolikhin
Hi. We're running Postgres 8.4.4 everywhere. I already have a pg_dump -Fc of the big table from the source, now I am running a pg_dump -Fc on the recipient, to see if the size is different. Then I will run a pg_dump as text, so I can diff the two files if they are different in size. Thanks!! Al

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Aleksey Tsalolikhin
On Thu, Feb 24, 2011 at 11:46 AM, John R Pierce wrote: > On 02/24/11 11:02 AM, Aleksey Tsalolikhin wrote: > >> How do I check the fillfactor on the table, please? > > its in the field reloptions in pg_class.   so... > >    select reloptions from pg_class where relname='
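Completing that lookup (the table name is a placeholder): a NULL `reloptions` means the table still uses the default fillfactor of 100.

```sql
SELECT relname, reloptions
FROM pg_class
WHERE relname = 'big_table';
```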

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-24 Thread Aleksey Tsalolikhin
Hi. Thanks for your replies. How do I check the fillfactor on the table, please? (http://www.postgresql.org/docs/8.4/static/sql-createtable.html tells me how to set it, but I haven't found yet how to read it.) Same CPU, same filesystem, same blocksize - identical systems. Same model of server.

Re: [GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-23 Thread Aleksey Tsalolikhin
a new database. Would like to understand what is going on. And would like to not have such a swell of data upon transfer. Is there anything I can do, please? Best, Aleksey On Tue, Feb 22, 2011 at 12:44 PM, Aleksey Tsalolikhin wrote: > Hi.  Last week our 60 GB database (per psql \l+)

[GENERAL] database is bigger after dump/restore - why? (60 GB to 109 GB)

2011-02-22 Thread Aleksey Tsalolikhin
Hi. Last week our 60 GB database (per psql \l+) was (re-)replicated to the DR site using SlonyI, and arrived 109 GB in size which caused a problem as it filled up the filesystem on the DR server - we expected the DR database to be the same size. Mystery. Now just past weekend we upgraded our pro

Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-02 Thread Aleksey Tsalolikhin
Thank you for the discussion. I'm on Postgres 8.4, and the hardware between Slony master and slave is identical, as is the autovacuum config. We do have transactions that fail to commit, transactions that roll back. I'm glad to have some idea of the cause of the difference in table size between

Re: [GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-02-01 Thread Aleksey Tsalolikhin
On Mon, Jan 31, 2011 at 7:52 PM, Scott Marlowe wrote: > On Mon, Jan 31, 2011 at 5:54 PM, Aleksey Tsalolikhin > wrote: >> Situation:  Disk usage on production server root filesystem is at 68% >> utilization (80 GB used), on DR is at 51% (56 GB used).   We use >> SlonyII-1.

[GENERAL] Why does my DB size differ between Production and DR? (Postgres 8.4)

2011-01-31 Thread Aleksey Tsalolikhin
Situation: Disk usage on production server root filesystem is at 68% utilization (80 GB used), on DR is at 51% (56 GB used). We use SlonyII-1.2.x to keep the DR up to date. I would like to account for the 24 GB difference. Symptoms: 1. Database size reported by psql -c '\l+' Produc
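The per-database numbers behind `\l+` can also be pulled with a plain query, which is easier to diff between production and DR:

```sql
SELECT datname, pg_size_pretty(pg_database_size(datname)) AS size
FROM pg_database
ORDER BY pg_database_size(datname) DESC;
```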

[GENERAL] feature request: log bytes out for each request

2010-11-17 Thread Aleksey Tsalolikhin
Hi. Could you please consider logging the size of each reply, like Apache httpd does in its logfile? We've started having intermittent spikes in network usage (the network interface on the DB server is getting maxed out) and I suspect there is a single very large query (just a working hypothesis at this

[GENERAL] How to see what SQL queries are associated with pg_locks?

2010-10-01 Thread Aleksey Tsalolikhin
How can I see what SQL queries are associated with pg_locks, please? Could somebody help with the query? I can then add it to http://wiki.postgresql.org/wiki/Lock_Monitoring Best, -at

Re: [GENERAL] ERROR: cache lookup failed for type 14237017

2010-09-21 Thread Aleksey Tsalolikhin
On Tue, Sep 21, 2010 at 3:10 PM, Jens Wilke wrote: > On Dienstag, 21. September 2010, Aleksey Tsalolikhin wrote: > >> So PostgreSQL was trying to lookup a row in a system table and >> did not find it in a cache. > > yes, > select * from pg_type where oid =14237017 Th

Re: [GENERAL] ERROR: cache lookup failed for type 14237017

2010-09-21 Thread Aleksey Tsalolikhin
On Tue, Sep 21, 2010 at 2:06 PM, Jens Wilke wrote: > On Dienstag, 21. September 2010, Aleksey Tsalolikhin wrote: >> Yesterday, I had twelve thousand  "cache lookup failed for type N" >> messages, like this: > >> What does "type 14237017" mean? &

[GENERAL] ERROR: cache lookup failed for type 14237017

2010-09-21 Thread Aleksey Tsalolikhin
Yesterday, I had twelve thousand "cache lookup failed for type N" messages, like this: 2010-09-20 00:00:00 PDT ERROR: cache lookup failed for type 14237017 2010-09-20 00:00:00 PDT CONTEXT: SQL statement "INSERT INTO mycluster.sl_log_2 (log_origin, log_xid, log_tableid, log_actionseq, log_cmdtyp

[GENERAL] Re: check_postgres reports most of my tables are unanalyzed for 8 weeks; why isn't autovacuum analyzing them?

2010-07-20 Thread Aleksey Tsalolikhin
I am sorry for the multiple posts; I just noticed there are two tests for analyze: last_analyze and last_autoanalyze. last_autoanalyze matches last_autovacuum - 7 weeks ago. Aleksey

[GENERAL] Re: check_postgres reports most of my tables are unanalyzed for 8 weeks; why isn't autovacuum analyzing them?

2010-07-20 Thread Aleksey Tsalolikhin
I just ran the "last_autovacuum" test of check_postgres, and it reported 7 (of my 100) tables have been autovacuumed more than 1 day ago; the oldest autovacuum time was 7 weeks ago. 8 more tables in pg_catalog were autovacuumed more than 1 day ago. Thanks, -at

[GENERAL] check_postgres reports most of my tables are unanalyzed for 8 weeks; why isn't autovacuum analyzing them?

2010-07-20 Thread Aleksey Tsalolikhin
Hi. I've just discovered the check_postgres utility and am running all the tests against my database. The "last_analyze" test comes out critical - many tables unanalyzed for 8 weeks. I am running PostgreSQL 8.4.4 with default autovacuum settings. I thought autovacuum was supposed to take care o

Re: [GENERAL] database response slows while pg_dump is running (8.4.2)

2010-06-11 Thread Aleksey Tsalolikhin
On Fri, Jun 11, 2010 at 08:43:53AM +0200, Adrian von Bidder wrote: > > Just speculation, I've not tried this. Perhaps pipe the output of pg_dump > through a software that bandwidth-limits the throughput? Perhaps. However, moving the pg_dump to a Slony slave has solved my problem. Thanks!! Al

[GENERAL] database response slows while pg_dump is running (8.4.2)

2010-06-10 Thread Aleksey Tsalolikhin
PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database. Specifically, I am monitoring how long it takes to login to our Web app - usually under 2 seconds, but when pg_dump is running, that changes to 2 - 18 seconds. Is there any way I can keep my response fast but still get my pg

Re: [GENERAL] database response slows while pg_dump is running (8.4.2)

2010-06-10 Thread Aleksey Tsalolikhin
On Thu, Jun 10, 2010 at 3:41 PM, Steve Crawford wrote: > So are you writing the backup to the same disk(s) that support the db? What > happens if you do the dump from a different machine? You'll increase network > traffic, of course, but reduce your disk load. And you won't be keeping your > dump

Re: [GENERAL] database response slows while pg_dump is running (8.4.2)

2010-06-10 Thread Aleksey Tsalolikhin
On Thu, Jun 10, 2010 at 1:25 PM, Aleksey Tsalolikhin wrote: > PostgreSQL 8.4.2 response time slows during pg_dump of a 14 GB database. > ... > > Is there any way I can keep my response fast but still get my pg_dump? > Or should I move my pg_dump to a Slony-I slave? Thank yo

[GENERAL] What does PGDG mean, in the Postgres YUM repositories?

2010-06-08 Thread Aleksey Tsalolikhin
Hi. What does "PGDG" mean, as in PGDG84, the PostgreSQL repository containing PostgreSQL 8.4.x, etc? Thanks, Aleksey

Re: [GENERAL] Postgres 8.4 segfaults on CentOS 5.5 (using EnterpriseDB installers)

2010-06-08 Thread Aleksey Tsalolikhin
Dear Steve, Thanks for your suggestions! I ended up switching from the Enterprise DB installer to the Postgres PGDG84 YUM repository, since (a) it provides PostgreSQL 8.4.4 AND Slony 1.x (unlike Enterprise DB installer which only has Slony 2.0.2 which has a potential data loss problem), and n

[GENERAL] Postgres 8.4 segfaults on CentOS 5.5 (using EnterpriseDB installers)

2010-06-04 Thread Aleksey Tsalolikhin
Hi. We've been running PostgreSQL 8.4.2 on CentOS 5.4 (64-bit), installed with the EnterpriseDB installer. This has been excellent. Now we have our first CentOS 5.5 server (64-bit) and I installed PostgreSQL 8.4.4 using the EnterpriseDB installer, and it is unable to start the database ins

[GENERAL] WAL archiving is stuck on an old file that was deleted -- how to get it going again? (8.4.2)

2010-01-06 Thread Aleksey Tsalolikhin
Hi. Need some help getting WAL log archiving going, please. PostgreSQL 8.4.2 archive_command = '/usr/local/bin/rsync -e /usr/bin/ssh %p postg...@remoteserver:directory/%f

Re: [GENERAL] Why are there pg_dump errors in my Apache httpd error_log ? (Postgres 8.3.7 on CentOS 5)

2009-11-03 Thread Aleksey Tsalolikhin
Thank you for your replies. We don't have a CMS and none of our PHP scripts call pg_dump so I am concerned about the security of our system. On Tue, Nov 3, 2009 at 1:00 PM, Russell Smith wrote: > Aleksey Tsalolikhin wrote: >> >> # grep pg_dump /var/log/httpd/error_log &

[GENERAL] Why are there pg_dump errors in my Apache httpd error_log ? (Postgres 8.3.7 on CentOS 5)

2009-11-03 Thread Aleksey Tsalolikhin
Hi. I just found pg_dump errors in my Apache httpd log and am really confused. Has anybody seen this before? My syslog.conf does not mention the httpd error_log. How did the errors get in there? # grep pg_dump /var/log/httpd/error_log pg_dump: [archiver] could not open output file: Permission

Re: [GENERAL] trouble building pgbench on CentOS 5.3: /usr/bin/ld: cannot find -lpgport

2009-08-12 Thread Aleksey Tsalolikhin
On Wed, Aug 12, 2009 at 4:19 PM, Tom Lane wrote: > Aleksey Tsalolikhin writes: >> Hi.  I am trying to build pgbench on CentOS 5.3 x86_64. >> make complains that it cannot find -lpgport > >> # cd contrib >> # make all > > You need to "make" the rest

[GENERAL] trouble building pgbench on CentOS 5.3: /usr/bin/ld: cannot find -lpgport

2009-08-12 Thread Aleksey Tsalolikhin
Hi. I am trying to build pgbench on CentOS 5.3 x86_64. make complains that it cannot find -lpgport # cd contrib # make all ... make[1]: Entering directory `/home/tsalolia/postgresql-8.3.7/contrib/pgbench' gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith -Winline -Wdeclaration-after-statement -

Re: [GENERAL] possible small contribution to the PostgreSQL manual? Example for two-phase commit section.

2009-04-02 Thread Aleksey Tsalolikhin
On Thu, Apr 2, 2009 at 1:23 AM, Craig Ringer wrote: > > You might want to add this as a comment on the interactive version of the > online documentation, too, so it's not lost when revising the docs for 8.4 / > 8.5 . Done, sir. Thanks! Aleksey

[GENERAL] possible small contribution to the PostgreSQL manual? Example for two-phase commit section.

2009-04-01 Thread Aleksey Tsalolikhin
Hi. We're trying to implement two-phase commit and did not find a complete working example in the manual. We found examples of the separate pieces, but not the sequence in which to put them together. Then we found this text, "PREPARE TRANSACTION is used in place of regular COMMIT to perform
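For reference, the separate pieces assemble into this sequence (a sketch; the table is hypothetical, and the server must have `max_prepared_transactions` set above zero):

```sql
BEGIN;
INSERT INTO accounts (id, balance) VALUES (1, 100);
PREPARE TRANSACTION 'txn_demo_1';  -- used in place of COMMIT; the transaction
                                   -- is now dissociated from this session
-- Later, possibly from a different session:
COMMIT PREPARED 'txn_demo_1';
-- or, to abandon it:
-- ROLLBACK PREPARED 'txn_demo_1';
```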

Re: [GENERAL] SELinux problem rsync'ing WAL logs

2009-04-01 Thread Aleksey Tsalolikhin
Dear Tom, Thanks for your reply and insight! I much appreciate it. I certainly look forward to getting off FC6! In the meantime, I did get it to work - I remembered SELinux protects /home directories especially. So I moved "postgres" user's home directory from /home/postgres to /data/postgres

Re: [GENERAL] SELinux problem rsync'ing WAL logs

2009-03-31 Thread Aleksey Tsalolikhin
On Tue, Mar 31, 2009 at 6:35 PM, David Wilson wrote: > On Tue, Mar 31, 2009 at 9:18 PM, Aleksey Tsalolikhin > wrote: > >> Could not create directory '/home/postgres/.ssh'. >> Host key verification failed. > > Have you tested "ssh node2" as the postg

[GENERAL] SELinux problem rsync'ing WAL logs

2009-03-31 Thread Aleksey Tsalolikhin
Ok, this is not strictly a PostgreSQL issue, but I am trying to enable WAL log shipping on our PostgreSQL 8.1.10 (upgrade to 8.3.7 is in the works). My archive_command is 'rsync %p postg...@node2:/file/to/$f