Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Gregory Haase
Tom is correct: Vacuum verbose shows that there is an associated toast table. Neither the check_postgres.pl script nor the query on http://wiki.postgresql.org/wiki/Show_database_bloat appears to take this into consideration. Both rely on null_frac and avg_width from pg_stats to estimate how big the t
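As a rough sketch of the gap being described (table_schema.table_name is the placeholder used elsewhere in the thread), the built-in size functions show how much of the on-disk footprint a heap-only estimate misses: pg_relation_size() counts just the main heap, pg_table_size() adds TOAST and the auxiliary forks, and pg_total_relation_size() adds indexes as well.

-- Compare heap-only size with totals that include TOAST and indexes.
SELECT pg_size_pretty(pg_relation_size('table_schema.table_name'))       AS heap_only,
       pg_size_pretty(pg_table_size('table_schema.table_name'))          AS heap_plus_toast,
       pg_size_pretty(pg_total_relation_size('table_schema.table_name')) AS with_indexes_too;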

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Tom Lane
Gregory Haase writes: > I've isolated the problem to the json field not showing up in pg_stats, > which affects the calculation of the avg row size in the bloat query. > I'm not sure if this is a json issue or some other kind of issue. Possibly your "bloat query" is failing to consider the toast
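A minimal sketch of what "considering the toast" can look like: look up the TOAST relation behind the table and measure it separately (the table name is a placeholder).

-- Find the TOAST relation for a table and its size.
SELECT c.reltoastrelid::regclass AS toast_table,
       pg_size_pretty(pg_relation_size(c.reltoastrelid)) AS toast_size
FROM pg_class c
WHERE c.oid = 'table_schema.table_name'::regclass
  AND c.reltoastrelid <> 0;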

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Gregory Haase
I've isolated the problem to the json field not showing up in pg_stats, which affects the calculation of the avg row size in the bloat query. I'm not sure if this is a json issue or some other kind of issue. db_name=# select c.column_name, c.data_type from information_schema.columns c where table
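The query above is cut off; as a hedged sketch of the same idea, one way to list columns that have no row in pg_stats at all (schema and table names are placeholders):

SELECT c.column_name, c.data_type
FROM information_schema.columns c
LEFT JOIN pg_stats s
  ON s.schemaname = c.table_schema
 AND s.tablename  = c.table_name
 AND s.attname    = c.column_name
WHERE c.table_schema = 'table_schema'
  AND c.table_name   = 'table_name'
  AND s.attname IS NULL;   -- columns ANALYZE produced no statistics for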

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Tom Lane
John R Pierce writes: > On 10/29/2013 12:41 PM, Gregory Haase wrote: >> db_name=# VACUUM FULL VERBOSE table_schema.table_name; >> INFO: vacuuming "table_schema.table_name" >> INFO: "table_name": found 2 removable, 29663 nonremovable row >> versions in 1754 pages >> DETAIL: 0 dead row versions

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Gregory Haase
One more thing I just tried: create table table_schema.table_name_new (like table_schema.table_name); insert into table_schema.table_name_new select * from table_schema.table_name; The new table shows the same amount of wasted bytes and pages as the old. So I think based on that I'm going to t
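One way to confirm that finding, as a hedged sketch (the size-comparison query is assumed, not quoted from the thread):

-- Compare the freshly rebuilt copy against the original, TOAST included.
SELECT oid::regclass AS relation,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE oid IN ('table_schema.table_name'::regclass,
              'table_schema.table_name_new'::regclass);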

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Gregory Haase
So, between yesterday and today we actually failed over to our hot-standby instance and the issue hasn't changed. I don't think you can have a pending transaction across streaming replication. On Tue, Oct 29, 2013 at 12:49 PM, John R Pierce wrote: > On 10/29/2013 12:41 PM, Gregory Haase wrote:

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread John R Pierce
On 10/29/2013 12:41 PM, Gregory Haase wrote: db_name=# VACUUM FULL VERBOSE table_schema.table_name; INFO: vacuuming "table_schema.table_name" INFO: "table_name": found 2 removable, 29663 nonremovable row versions in 1754 pages DETAIL: 0 dead row versions cannot be removed yet. CPU 0.07s/0.10

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Gregory Haase
Following up. I don't see any rolled-back transactions in the logs. The part that really troubles me is that vacuum full doesn't actually fix the problem. If there were bad data that had been corrected via mass updates, I'd expect the bloat issue to be fixed by a vacuum full. When I run the vacu

Re: [GENERAL] Cursor Example Needed

2013-10-29 Thread Perry Smith
On Oct 28, 2013, at 6:50 PM, Adrian Klaver wrote: > On 10/28/2013 04:36 PM, Perry Smith wrote: >> >> On Oct 28, 2013, at 6:13 PM, John R Pierce wrote: >> >>> On 10/28/2013 3:58 PM, Adrian Klaver wrote: The docs do a good job of illustrating: http://www.postgresql.org/docs/9.3/
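Since the subject line asks for one, here is a minimal SQL-level cursor sketch (the table and column names are invented for illustration; the docs page referenced above covers the full syntax):

-- Cursors at the SQL level live inside a transaction.
BEGIN;

DECLARE emp_cur CURSOR FOR
    SELECT id, name FROM employees ORDER BY id;   -- hypothetical table

FETCH 10 FROM emp_cur;   -- first ten rows
FETCH 10 FROM emp_cur;   -- next ten

CLOSE emp_cur;
COMMIT;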

Re: [GENERAL] Replication and fsync

2013-10-29 Thread Kevin Grittner
Tomas Vondra wrote: > I think it's safe as long as you don't try to reuse the cluster > after a crash (be it due to OS error, power outage, ...). If the > primary crashes for any reason, you have to start from scratch, > otherwise there might be silent corruption as you've described. I agree. 

Re: [GENERAL] INSERT/UPDATE statements sometimes choosing not to use PK index?

2013-10-29 Thread Matt
Since this is an upsert, the staging table simply accepts a copy of pipe-delimited data via a COPY FROM STDIN, and a subsequent SQL script performs the UPDATE/INSERT. The staging table is then truncated (or deleted without a WHERE clause) for the next run: Truncate staging, COPY into staging, update then ins
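A hedged sketch of one such load cycle (table and column names are placeholders; the real script presumably differs). INSERT ... ON CONFLICT did not exist yet in 2013, hence the separate UPDATE and INSERT steps:

TRUNCATE staging;

COPY staging FROM STDIN WITH (FORMAT text, DELIMITER '|');

-- Update rows that already exist in the destination...
UPDATE destination d
   SET payload = s.payload
  FROM staging s
 WHERE d.id = s.id;

-- ...then insert the ones that do not.
INSERT INTO destination (id, payload)
SELECT s.id, s.payload
  FROM staging s
 WHERE NOT EXISTS (SELECT 1 FROM destination d WHERE d.id = s.id);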

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Merlin Moncure
On Tue, Oct 29, 2013 at 5:38 AM, Chris Travers wrote: > > > > On Mon, Oct 28, 2013 at 4:17 PM, Gregory Haase > wrote: >> >> I have a table that is triggering my nagios database bloat alert >> regularly. Usually, I have to give it the vacuum full more than once to get >> it under the threshold. To

[GENERAL] Increasing CPU usage of PostgreSQL

2013-10-29 Thread Rahila Syed
> iostat -xd 10 > > have to say about it? Yes, I was indeed maxing out my I/O subsystem. % CPU utilization for I/O is around 90 percent. I could not notice this earlier because I was firing CPU-load queries from a remote machine, so network overhead was coming into the picture. Thanks a lot for your

[GENERAL] Increasing CPU usage of PostgreSQL

2013-10-29 Thread Rahila Syed
>Rahila, if you want to saturate the CPU and don't care about the > particular benchmark, try to use read-only transactions. Either just add > "-S" at the pgbench command line, or write something SELECT-only on your > own. Anyway, use '-j' in such cases. Thanks a lot for your suggestion. Using se

Re: [GENERAL] INSERT/UPDATE statements sometimes choosing not to use PK index?

2013-10-29 Thread Tom Lane
Matt writes: > In most cases, EXPLAIN and runtime tell me the index is utilized. However, > sometimes on back-to-back runs (5 minute intervals) the runtime explodes and > EXPLAIN shows me that the PK index is not used, as both the staging table > and destination table are sequentially scanned. You h
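Tom's reply is truncated here, so the following is only the usual remedy rather than necessarily his: right after the staging table is truncated and reloaded it has no fresh statistics, and an explicit ANALYZE before the UPDATE/INSERT normally keeps the planner on the index plan (names are the same placeholders as above).

-- After the COPY into the freshly truncated staging table:
ANALYZE staging;

-- Check what plan the upsert will actually get.
-- Note: EXPLAIN ANALYZE executes the statement, so wrap it in a
-- transaction and roll back if you only want the plan.
BEGIN;
EXPLAIN (ANALYZE, BUFFERS)
UPDATE destination d
   SET payload = s.payload
  FROM staging s
 WHERE d.id = s.id;
ROLLBACK;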

Re: [GENERAL] json datatype and table bloat?

2013-10-29 Thread Chris Travers
On Mon, Oct 28, 2013 at 4:17 PM, Gregory Haase wrote: > I have a table that is triggering my nagios database bloat alert > regularly. Usually, I have to give it the vacuum full more than once to get > it under the threshold. Today I tried repeatedly and cannot get the alert > to resolve. > > I had

Re: [GENERAL] copy 5.3000000000e+01 to bigint column error while Insert ok

2013-10-29 Thread Jov
Thanks, that makes sense. Changing the column type from bigint to double precision solves the problem. Jov blog: http://amutu.com/blog 2013/10/29 John R Pierce > On 10/28/2013 11:34 PM, Jov wrote: > > when we use the insert command, we can insert 5.30e+01 into the table, but when >
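A condensed sketch of the behaviour and the fix (placeholder names): INSERT parses 5.30e+01 as a numeric constant and assignment-casts it to bigint, whereas COPY hands the raw text to bigint's input routine, which does not accept exponent notation.

CREATE TABLE t (val bigint);              -- placeholder table
INSERT INTO t VALUES (5.3000000000e+01);  -- works: numeric constant cast to bigint

-- The fix reported above: widen the column so COPY can parse the value directly.
ALTER TABLE t ALTER COLUMN val TYPE double precision;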