Hello,
PostgreSQL 9.6.1: after a pg_dump/restore procedure it scans all pages
(at least for some of the tables; the --analyze-only switch is specified).
I would expect that only the sample rows are scanned.
"log_details": scanned 2133350 of 2133350 pages
vacuumdb --analyze-only --all --verbose
IN
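For reference, ANALYZE targets a sample of roughly 300 rows per unit of the
per-column statistics target, so the default target of 100 aims at ~30000
sample rows. A sketch to inspect this (table name taken from the log line
above):
SHOW default_statistics_target;  -- 100 by default; sample size ~ 300 * target
ANALYZE VERBOSE log_details;     -- reports "scanned N of M pages" per table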
On 29.12.2016 16:10, Tom Lane wrote:
Adrian Klaver writes:
On 12/28/2016 11:54 PM, Gerhard Wiesinger wrote:
vacuumdb --analyze-only --all --verbose
INFO: analyzing "public.log"
INFO: "log": scanned 3 of 30851 pages, containing 3599899 live rows
and 0 dead rows;
Hello,
We are running PostgreSQL 9.4.5 on FreeBSD 10.1 and have multiple worker
processes connected to PostgreSQL via persistent connections; they
perform just simple queries with SELECT on primary keys and simple
INSERTs/UPDATEs. Normally nearly all the workers are idle but they still
consum
maintenance_work_mem = 512MB
effective_cache_size = 10GB
work_mem = 892MB
wal_buffers = 8MB
checkpoint_segments = 16
shared_buffers = 7080MB
max_connections = 80
autovacuum_max_workers = 3
Thnx.
Ciao,
Gerhard
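A back-of-envelope check of the settings above (a sketch; it assumes each of
the 80 backends allocates work_mem just once, which is a lower bound since
every sort/hash node may take that much):
SELECT pg_size_pretty(892::bigint * 1024 * 1024 * 80) AS worst_case_work_mem;
-- ~70 GB across 80 connections, far beyond the ~31GB visible in the memory
-- snippet below, which fits the swap exhaustion reported there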
On 13.12.2015 08:49, Gerhard Wiesinger wrote:
Hello,
We are running PostgreSQL 9.4.5 on FreeBSD 10.1
Hello Bill,
Thank you for your response, comments inline:
On 13.12.2015 16:05, Bill Moran wrote:
On Sun, 13 Dec 2015 09:57:21 +0100
Gerhard Wiesinger wrote:
some further details from the original FreeBSD 10.1 machine:
Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M
On 13.12.2015 18:17, Tom Lane wrote:
Gerhard Wiesinger writes:
Mem: 7814M Active, 20G Inact, 2982M Wired, 232M Cache, 1661M Buf, 30M Free
Swap: 512M Total, 506M Used, 6620K Free, 98% Inuse
OK, but why do we then get: kernel: swap_pager_getswapspace(4): failed?
Just judging from the name of
On 13.12.2015 21:14, Bill Moran wrote:
Wait ... this is a combined HTTP/Postgres server? You didn't mention that
earlier, and it's kind of important.
What evidence do you have that Postgres is actually the part of
this system running out of memory?
For me the complete picture doesn't look cons
Hello,
I'm trying to convert a SELECT after a CTE into a function for generic
use. The CTE is normally a complex query, but I want to encapsulate the
calculation of the Gini coefficient into a function:
Based on:
http://www.heckler.com.br/blog/2010/06/15/gini-coeficient-having-fun-in-both
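A minimal sketch of such a function (assumptions: PostgreSQL 8.4+, input as
an array of non-negative values, and the discrete Gini formula used directly
instead of the query from the blog post):
CREATE OR REPLACE FUNCTION gini(vals double precision[])
RETURNS double precision LANGUAGE sql IMMUTABLE AS $$
  SELECT 2 * sum(rn * v) / (count(*) * sum(v)) - (count(*) + 1.0) / count(*)
  FROM (SELECT v, row_number() OVER (ORDER BY v) AS rn
        FROM unnest(vals) AS v) s;
$$;
-- usage: SELECT gini(ARRAY[4, 7, 12]);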
Hello!
I think I found a proper utility for that: pg_filedump
http://sources.redhat.com/rhdb/utilities.html
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Fri, 18 May 2007, Gerhard Wiesinger wrote:
Hello!
Are there any tools available to dump the files of the pg_xlog, pg_clog
Thnx.
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Sat, 26 May 2007, Gerhard Wiesinger wrote:
Hello!
I think I found a proper utility for that: pg_filedump
http://sources.redhat.com/rhdb/utilities.html
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Fri, 18 May 2007, Gerhard Wiesinger
Hello!
I want to upgrade from 8.2 to 8.3.1 but I've problems:
I did a pg_dumpall but this doesn't work. I found the migration guide with
a trick to load the new contrib/tsearch2 module. But how is this done
exactly?
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Thu, 15 May 2008, Gerhard Wiesinger wrote:
Hello!
I want to upgrade from 8.2 to 8.3.1 but I've problems:
I did a pg_dumpall but this doesn't work. I found the migration guide with a
trick to load the new contrib/tsearch2 module. But
Hello!
I have the following data:
datetime | val1 | val2
time1    |    4 | 40%
time2    |    7 | 30%
time3    |   12 | 20%
...
I'd like to sum up the following:
(7-4)*30% + (12-7)*20% + ...
datetime is ordered (and unique; there is also an id).
Rows are in the area of millions.
How is it done be
Any hints for an 8.3 environment (currently)?
Thnx.
Ciao,
Gerhard
--
http://www.wiesinger.com/
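On 8.4+ the whole sum is a short window-function query. A sketch, with a
hypothetical table measurements holding the data above (val2 assumed stored
as a fraction, e.g. 0.30 for 30%); the 8.3 box asked about would need a
self-join on id or the plpgsql cursor loop that appears later in this digest:
SELECT sum(diff * val2) AS total
FROM (SELECT val1 - lag(val1) OVER (ORDER BY datetime) AS diff,
             val2
      FROM measurements) s;
-- row time2 contributes (7-4)*30%, row time3 (12-7)*20%, and so on;
-- the first row's NULL diff is ignored by sum()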
On Mon, 8 Jun 2009, David Fetter wrote:
On Mon, Jun 08, 2009 at 08:35:20AM +0200, Gerhard Wiesinger wrote:
Hello!
I have the following data:
datetime | val1 | val2
time1    |    4 | 40%
Hello,
It is unclear to me how the implicit conversion/comparison of timestamps
with and without time zone works.
--
-- datetime TIMESTAMP WITH TIME ZONE
-- datetime entries are with UTC+01 and UTC+02 done
-- 2009-03-09: UTC+01
-- 2009-06-12: UTC+02
-
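What the implicit comparison does, in a sketch (assumption: session TimeZone
is Europe/Vienna, matching the UTC+01/UTC+02 comments above): a timestamp
WITHOUT time zone is interpreted in the session time zone and cast to
timestamptz before comparing.
SET timezone = 'Europe/Vienna';
SELECT TIMESTAMP '2009-06-12 00:00' = TIMESTAMPTZ '2009-06-12 00:00+02' AS summer, -- true, DST: UTC+02
       TIMESTAMP '2009-03-09 00:00' = TIMESTAMPTZ '2009-03-09 00:00+01' AS winter; -- true, UTC+01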
On Sun, 21 Jun 2009, Richard Huxton wrote:
Gerhard Wiesinger wrote:
Hello,
It is unclear to me how the implicit conversion/comparison of timestamps
with and without time zone works.
It's not entirely clear where the problem is. You don't say the results
you're getting or what y
Hello,
I'd like to understand the PostgreSQL internals in "backup mode".
If I understand it correctly, pg_start_backup() makes a checkpoint and
stops writing to the data/ directory except for the WAL.
All new transactions go into the WAL, which is also logical. But how is data
consistency ensured when th
ock in the backup is corrupt => and then even the WAL can't be applied.
Why does it work correctly? Or is there some design problem?
Thnx.
Ciao,
Gerhard
--
http://www.wiesinger.com/
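The mechanism that resolves this (confirmed in the follow-up below as "whole
block overwrite and then the deltas") is full_page_writes: the first change
to any page after a checkpoint logs the complete 8k page image, so a torn
page in the copied data/ directory gets overwritten wholesale during WAL
replay before any per-tuple deltas are applied. A sketch:
SHOW full_page_writes;                  -- on by default
SELECT pg_start_backup('base-backup');  -- checkpoint, then copy data/ with tar/rsync
SELECT pg_stop_backup();                -- finish; the WAL needed for recovery is archived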
On Thu, 25 Jun 2009, Richard Huxton wrote:
Gerhard Wiesinger wrote:
Hello,
I'd like to understand t
Hello Richard,
OK, understood. It looks like a good design to me: whole-block overwrite
first, and then the deltas.
Thnx.
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Fri, 26 Jun 2009, Richard Huxton wrote:
Gerhard Wiesinger wrote:
Hello,
OK, what's then the difference do
Hello,
Research on pg_buffercache showed that a lot of pages don't have any
assigned relfilenodes in pg_class, even when they are dirty (in this case
inserts are done).
SELECT
    relname IS NULL AS relname_is_null,
    isdirty,
    COUNT(*) AS count
FROM
    pg_buffercache b
    LEFT OUTER JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY 1, 2;
On Sat, 26 Sep 2009, Tom Lane wrote:
Gerhard Wiesinger writes:
Research on pg_buffercache showed that a lot of pages don't have any
assigned relfilenodes in pg_class, even when they are dirty (in this case
inserts are done).
There should never be any buffered pages for nonexistent t
; FROM row.datetime - datetime_old) * row.col3;
      END IF;
      i = i + 1;
      old = row.old;
      datetime_old = row.datetime;
    END IF;
  END LOOP;
  CLOSE curs;
  psum.sum_m1 = sum_m1;
  psum.sum_m2 = sum_m2;
  RETURN psum;
END;
$$ LANGUAGE plpgsql;
On Mon, 8 Jun 2009, Gerhard Wiesinger wrote:
flushed the OS caches:
echo 3 > /proc/sys/vm/drop_caches
and rpm -V was correct. => RAM issue.
A memtest86+ showed very fast a defect RAM.
So PostgreSQL didn't have any issue :-)
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Wed, 9 May 2007, Gerhard Wiesinger wrote:
Hello Tom!
I
Hello,
I think the limit of bgwriter_lru_maxpages at max. 1000 doesn't make any
sense, because in fact it limits the performance of the database (version 8.3).
On heavy write operations, buffer cache pages must be freed. With the default
config this is practically limited to:
bgwriter_delay=200ms
bgwri
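The arithmetic behind that cap (assumption: 8kB pages; rate =
bgwriter_lru_maxpages * rounds per second * 8kB):
SELECT pg_size_pretty(1000::bigint * 8192 * 5)   AS at_default_200ms_delay,  -- ~39 MB/s
       pg_size_pretty(1000::bigint * 8192 * 100) AS at_minimum_10ms_delay;   -- ~781 MB/s, the "800MB/s" cited below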
Hello,
As block sizes, random I/O and linear I/O are critical I/O performance
parameters, I had a look at PostgreSQL and a commercial database vendor.
Therefore I enhanced the SystemTap script:
http://www.wiesinger.com/opensource/systemtap/disktop_gw.stp
Output per 5 seconds on a sequence sc
On Sun, 27 Sep 2009, Sam Mason wrote:
On Sun, Sep 27, 2009 at 06:05:51PM +0200, Gerhard Wiesinger wrote:
A Google search showed that Gregory Stark already worked on that issue
(see references below) but as far as I saw, only on bitmap heap scans.
Greg Stark's patches are about g
On Fri, 2 Oct 2009, Greg Smith wrote:
On Sun, 27 Sep 2009, Gerhard Wiesinger wrote:
I think this is one of the most critical performance showstoppers of
PostgreSQL on the I/O side.
I wish, this is an easy problem compared to the real important ones that need
to be resolved. Situations
On Fri, 2 Oct 2009, Greg Smith wrote:
On Sun, 27 Sep 2009, Gerhard Wiesinger wrote:
Lowering bgwriter_delay is possible, but I think overhead is too much and
still there is a limit of 800MB/s involved:
Stuff written by the background writer turns into largely random I/O. 800MB/s
of random
On Fri, 2 Oct 2009, Greg Smith wrote:
On Fri, 2 Oct 2009, Gerhard Wiesinger wrote:
Larger blocksizes also reduce IOPS (I/Os per second) which might be a
critical threshold on storage systems (e.g. Fibre Channel systems).
True to some extent, but don't forget that IOPS is always relative
On Fri, 2 Oct 2009, Simon Riggs wrote:
On Sun, 2009-09-27 at 18:05 +0200, Gerhard Wiesinger wrote:
So I saw that even on sequential reads (and also on bitmap heap scan accesses)
PostgreSQL uses only 8k blocks. I think that's a major I/O bottleneck.
A commercial software database vendor s
On Fri, 2 Oct 2009, Greg Smith wrote:
On Fri, 2 Oct 2009, Scott Marlowe wrote:
I found that lowering checkpoint completion target was what helped.
Does that seem counter-intuitive to you?
I set it to 0.0 now.
Generally, but there are plenty of ways you can get into a state where a
short
On Mon, 5 Oct 2009, Greg Smith wrote:
On Sun, 4 Oct 2009, Gerhard Wiesinger wrote:
On Fri, 2 Oct 2009, Scott Marlowe wrote:
I found that lowering checkpoint completion target was what helped.
Does that seem counter-intuitive to you?
I set it to 0.0 now.
If you set that to 0.0, the
On Fri, 9 Oct 2009, Greg Smith wrote:
On Sat, 3 Oct 2009, Gerhard Wiesinger wrote:
I wouldn't read 128k blocks all the time. I would do the following:
When e.g. B0, B127, B256 should be read, I would read them as 8k random
block I/O.
When B1, B2, B3, B4, B5, B7, B8, B9, B10 are needed I
On Fri, 16 Oct 2009, Christophe Pettus wrote:
The video archive for the 10/13 SFPUG meeting, "The Mighty GUCS: A guide to
the essential PostgreSQL settings you need to know," is now available:
http://thebuild.com/blog/2009/10/16/the-mighty-gucs/
It's also available on Vimeo:
Hello,
I'm having a problem with the following:
CREATE TYPE Sums AS (sum_m1 double precision, sum_m2 double precision);
CREATE TYPE date_m1_m2 AS (cur_date date, sum_m1 double precision, sum_m2
double precision);
CREATE OR REPLACE FUNCTION getSums(IN start_ts timestamp with time
zone, IN stop
On Sun, 18 Oct 2009, Tom Lane wrote:
Gerhard Wiesinger writes:
Since getSums() uses a cursor, is complex, and takes a long time, getSums
should only be evaluated once. Is there a better solution available to
get both columns from the function in the select?
You need a sub-select, along the
On Mon, 19 Oct 2009, Tom Lane wrote:
Gerhard Wiesinger writes:
On Sun, 18 Oct 2009, Tom Lane wrote:
The OFFSET bit is a kluge, but is needed to keep the planner from
flattening the subquery and undoing your work.
Thnx Tom. It also works without the OFFSET kludge. Any ideas why?
Probably
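The pattern under discussion, as a sketch (argument values are made up;
getSums and its Sums result type are from the message above):
SELECT (s).sum_m1, (s).sum_m2
FROM (SELECT getSums('2009-01-01', '2009-12-31') AS s
      OFFSET 0) sub;
-- OFFSET 0 fences the subquery so the planner cannot flatten it and expand
-- getSums() once per referenced column; as noted above, it also happened to
-- work here without the kludge.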
On Tue, 20 Oct 2009, Christophe Pettus wrote:
On Oct 20, 2009, at 8:32 AM, Viktor Rosenfeld wrote:
@Christophe, I enjoyed your talk very much, particularly because I
learned about pgfouine, which from the looks of it, will make my current
project vastly simpler. So, thank you.
You should re
Hello,
Did someone fix the double iteration in the buffer cache code in the
meantime, as discussed at:
http://www.mail-archive.com/pgsql-general@postgresql.org/msg137230.html
Thnx.
Ciao,
Gerhard
--
http://www.wiesinger.com/
Hello,
With a database admin of a commercial database system I've discussed
that they have to provide, and also achieve, 2^31 transactions per
SECOND!
As PostgreSQL uses transaction IDs (XIDs) in the range of 2^31, they
would wrap around in about one second.
How can one achieve this with P
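A quick scale check: at the claimed rate the ~2^31 usable XIDs last about one
second, and even at a (hypothetical) 16000 write transactions per second they
last only about 1.5 days, matching the autovacuum window mentioned later in
this thread:
SELECT (2^31) / 16000 / 86400 AS days_until_wraparound_at_16k_tps;  -- ~1.55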
On 23.03.2012 11:16, Jan Kesten wrote:
On 23.03.2012 06:45, Gerhard Wiesinger wrote:
With a database admin of a commercial database system I've discussed
that they have to provide, and also achieve, 2^31 transactions per
SECOND!
As PostgreSQL uses transaction IDs (XIDs) in the range of
On Fri, 23 Mar 2012, John R Pierce wrote:
On 03/23/12 9:17 AM, Gerhard Wiesinger wrote:
So in that time autovacuum is triggered.
autovacuum runs pretty much continuously in the background, it's not an on/off
thing.
Yes, I know. I meant that it runs at least once in 1.5 days.
Ciao,
Gerhard
On Fri, 23 Mar 2012, Tom Lane wrote:
Merlin Moncure writes:
On Fri, Mar 23, 2012 at 11:35 AM, Gerhard Wiesinger wrote:
So I think XID overflow should be planned for one of the next PostgreSQL
releases.
two mitigating factors:
1. read only transactions do not increment xid counter
Yes
Hello!
Is there some information available in the meta tables about the number of
currently unused pages, and about unused row versions in tables and indexes?
I'm asking because I want to measure how efficiently HOT is working and
whether vacuum should be run or not to save disk space (I know this i
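Part of this is visible in the statistics views (a sketch; n_tup_hot_upd
counts heap-only-tuple updates since 8.3, and the dead-tuple counter hints at
vacuum need; per-page free-space detail would need contrib/pg_freespacemap):
SELECT relname, n_tup_upd, n_tup_hot_upd, n_live_tup, n_dead_tup
FROM pg_stat_user_tables
ORDER BY n_tup_upd DESC;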
Hello!
I want to translate the following Oracle PL/SQL script into plpgsql.
Especially I'm having problems with the transaction handling. I tried START
TRANSACTION and COMMIT without success.
Any ideas?
Thanx.
Ciao,
Gerhard
CREATE OR REPLACE PROCEDURE insert_1Mio
IS
  maxcommit NUMBER;
BEGIN
deas?
Ciao,
Gerhard
--
http://www.wiesinger.com/
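Note that a plpgsql function always runs inside the caller's transaction, so
COMMIT is not allowed there (transaction control only arrived with procedures
and CALL in PostgreSQL 11). A sketch without intermediate commits (the
function name comes from the fragment above; the target table is an
assumption):
CREATE OR REPLACE FUNCTION insert_1mio() RETURNS void AS $$
BEGIN
  FOR i IN 1..1000000 LOOP
    INSERT INTO t (id) VALUES (i);
  END LOOP;
END;
$$ LANGUAGE plpgsql;
-- SELECT insert_1mio();  -- one transaction for all rows is fine in PostgreSQL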
On Thu, 25 Dec 2008, Gerhard Wiesinger wrote:
Hello!
I want to translate the following Oracle PL/SQL script into plpgsql.
Especially I'm having problems with the transaction handling. I tried START
TRANSACTION and COMMIT without success.
    COALESCE((SELECT SUM(pg_relation_size(indexrelid))::bigint
              FROM pg_index WHERE cl.oid = indrelid), 0)
    ELSE pg_relation_size(cl.oid)
  END AS indexsize,
  CASE
    WHEN reltoastrelid = 0 THEN 0
    ELSE pg_relation_size(reltoastrelid)
Hello,
Aren't there any drawbacks in Postgres with such large transactions (like in
Oracle), e.g. if I used 500,000,000 or even more?
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Fri, 26 Dec 2008, Pavel Stehule wrote:
Hello
why do you need commit?
pavel
2008/12/26 Gerhard Wies
Hello!
The transaction model is discussed in several areas:
http://www.packtpub.com/article/transaction-model-of-postgresql
The POSTGRES Data Model (1987)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.8578
The Design of POSTGRES (1986)
http://citeseerx.ist.psu.edu/viewdoc/summary?doi
Hello!
I recently read some mail on the mailing list where some parts of
PostgreSQL were not dumped with pg_dumpall and additionally some pg_dump
was necessary (it was something like internals, catalogs, etc.).
Any ideas what additionally has to be dumped besides pg_dumpall for a full
backup?
Thnx
O'Donnell wrote:
On 29/01/2009 16:31, Gerhard Wiesinger wrote:
I recently read some mail on the mailing list where some parts of
PostgreSQL were not dumped with pg_dumpall and additionally some pg_dump
was necessary (it was something like internals, catalogs, etc.)
It's the other
Hello!
Is HOT in 8.3 used in the following scenario:
INSERT ...
DELETE ...
at the same ratio.
So for example current data is added every minute and all old data
older than 2 years is deleted.
Would the heap become bloated, or is this avoided by the HOT feature of 8.3,
with the size kept nearly constant?
Are there any plans to support this kind of scenario?
Ciao,
Gerhard
--
http://www.wiesinger.com/
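HOT only helps UPDATE chains; for an INSERT/DELETE rolling window it is
(auto)vacuum that marks the deleted rows' space reusable, so the heap
plateaus instead of growing. A sketch (table and column names are
assumptions):
DELETE FROM log WHERE datetime < now() - interval '2 years';
VACUUM log;  -- freed space becomes reusable in place; the file does not shrink
SELECT pg_size_pretty(pg_relation_size('log'));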
On Wed, 20 May 2009, Merlin Moncure wrote:
On Wed, May 20, 2009 at 3:11 PM, Gerhard Wiesinger wrote:
Hello!
Is HOT in 8.3 used in the following scenario:
INSERT ...
DELETE ...
at the same
On Thu, 21 May 2009, Simon Riggs wrote:
On Wed, 2009-05-20 at 16:01 -0400, Merlin Moncure wrote:
On Wed, May 20, 2009 at 3:11 PM, Gerhard Wiesinger wrote:
Hello!
Is HOT in 8.3 used in the following scenario:
INSERT ...
DELETE ...
at the same ratio.
So for example current data is added every
Hello!
I would do the following (in that order):
1.) Check for performant application logic and application design (e.g. the
degree of granularity of the Java Hibernate mapping, whether there are some
object iterators with hundreds of objects, etc.)
2.) Check the Hibernate-generated queries and whether t
Hello!
I'm new to PostgreSQL and I did an import of about 2.8
million rows with normal INSERT commands.
Config was (difference from default config):
listen_addresses = '*'
temp_buffers = 20MB  # min 800kB
work_mem = 20MB  # min 64kB
maintenance_w
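For loads of this size, COPY inside a single transaction is usually an order
of magnitude faster than row-by-row INSERTs. A sketch (table name and file
path are placeholders):
BEGIN;
COPY mytable FROM '/tmp/data.csv' WITH CSV;
COMMIT;
-- dropping indexes before the load and recreating them afterwards helps too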
in FC6 and maybe there is
a slight glibc library conflict or any other incompatibility.
Ciao,
Gerhard
--
http://www.wiesinger.com/
On Wed, 9 May 2007, Tom Lane wrote:
Gerhard Wiesinger writes:
LOG: could not fsync segment 0 of relation 1663/16386/42726: In
Hello!
Are there some presentations or documents about the internals of PostgreSQL
available?
Especially I'm looking for the concepts and detailed internals of general
transaction handling, the internals of the commit log, transaction logs,
pg_multixact, pg_subtrans, pg_tblspc and pg_twophase.
Also s
Hello!
Are there any tools available to dump the files of the pg_xlog, pg_clog,
... directories in human readable format to understand how transaction
handling is done?
Thanx.
Ciao,
Gerhard
--
http://www.wiesinger.com/