Hello list,
I am creating a plpgsql procedure in Postgres 9.4 (also testing in 9.3.6)
to move all of the tables that are not in a default tablespace (pg_default,
pg_global, or 0) into the tablespace pg_default. However, when it executes I
get the error 'ERROR: invalid input syntax for type oid:' wh
Thanks Adrian! Changing the declaration row_data to be of type RECORD
(rather than pg_catalog.pg_class%ROWTYPE) resolved the error :)
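For the archives, a minimal sketch of the working shape (the original
procedure body isn't quoted here, so the loop below is illustrative):

    DO $$
    DECLARE
        row_data RECORD;  -- RECORD rather than pg_catalog.pg_class%ROWTYPE
    BEGIN
        FOR row_data IN
            SELECT c.oid::regclass AS tbl
            FROM pg_class c
            JOIN pg_tablespace t ON t.oid = c.reltablespace
            WHERE c.relkind = 'r'
              AND t.spcname NOT IN ('pg_default', 'pg_global')
        LOOP
            EXECUTE format('ALTER TABLE %s SET TABLESPACE pg_default',
                           row_data.tbl);
        END LOOP;
    END $$;

Tables with reltablespace = 0 (the database default) simply don't join,
which covers the "or 0" case.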
- Will
*Will J Dunn*
*willjdunn.com <http://willjdunn.com/>*
On Thu, Apr 16, 2015 at 4:36 PM, Adrian Klaver
wrote:
> On 04/16/2015 07:52 AM, Wil
Additional things to consider for decreasing pressure on the cheap drives:
- Another configuration parameter to look into
is effective_io_concurrency. For SSDs we typically set it to 1 I/O per
channel of the controller card, not including the RAID parity drives. If you
decrease this value Po
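For example, on a hypothetical card with 8 channels of which 2 hold RAID
parity, that would be:

    effective_io_concurrency = 6   # in postgresql.conf; channels minus parity drives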
The streaming replication built into PostgreSQL would work fine for your
use case, assuming that you are OK with having only one primary supporting
writes and any slaves being read-only, as it currently (9.0-9.4) only
supports a single master. This will put minimal load on your primary server
and in
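A minimal sketch of the settings involved in the 9.x era (host and user
names are hypothetical):

    # postgresql.conf on the primary
    wal_level = hot_standby
    max_wal_senders = 3

    # postgresql.conf on each standby, to allow read-only queries
    hot_standby = on

    # recovery.conf on each standby
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=replicator'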
Alex,
Note that you should be wary of suggestions to make your replication
synchronous. Synchronous replication is rarely used for this kind of use
case (Cisco Jabber) where the most complete durability of the standby is
not of the utmost concern (as it would be in a banking application). Not
only
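For reference, replication stays asynchronous unless you list standbys in
synchronous_standby_names on the primary (the default is empty):

    synchronous_standby_names = ''   # empty = asynchronous replication (the default)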
PgFoundry.org went down some months ago. I contacted webmaster Marc
Fournier and he was able to get it back up, but a lot of it no longer works,
and I don't think he responded to my follow-up. For the most part the top pages
are broken but the sub-pages are still there (just very hard to navigate to and
find
Hello Steve,
Great monitoring query (https://gist.github.com/skehlet/36aad599171b25826e82).
I suggest modifying the value "autovacuum_freeze_table_age" to
"LEAST(autovacuum_freeze_table_age,(0.95*autovacuum_freeze_max_age))
AS autovacuum_freeze_table_age" since PostgreSQL implicitly
limits vacuum_freeze_table_age to 95% of autovacuum_freeze_max_age.
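A standalone illustration of the same capping logic, using the
corresponding global GUCs:

    SELECT LEAST(current_setting('vacuum_freeze_table_age')::bigint,
                 0.95 * current_setting('autovacuum_freeze_max_age')::bigint)
           AS effective_freeze_table_age;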
Hello Francesco,
You should probably set timing on, run an explain analyze, and use pgbadger
to diagnose your performance issue.
While the comparison in the index might be slightly
faster because of the modulo arithmetic, those in-memory operations are
extremely fast and it is
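For example, in psql (the table and predicate here are made up):

    \timing on
    EXPLAIN (ANALYZE, BUFFERS)
        SELECT * FROM events WHERE id % 16 = 3;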
Hello Maks,
As I think Sam suspects, the issue might be that you do not have enough
RAM, or not enough RAM is allocated to shared_buffers, or this
table's data is being evicted from shared_buffers by other
queries, so while you are identifying all the rows in your fast index
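One hedged way to check this from the cumulative statistics views (the
table name is hypothetical):

    SELECT heap_blks_read,
           heap_blks_hit,
           round(100.0 * heap_blks_hit
                 / nullif(heap_blks_hit + heap_blks_read, 0), 2) AS cache_hit_pct
    FROM pg_statio_user_tables
    WHERE relname = 'your_table';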
Hello Sachin,
I hate to respond by suggesting an alternative but it may be good to try
using pg_basebackup (Doc:
http://www.postgresql.org/docs/9.1/static/app-pgbasebackup.html) to back-up
your database. It takes a copy of the file system files rather than
querying the data as an ordinary connecti
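For example (connection options illustrative):

    pg_basebackup -h dbhost -U replicator -D /backups/base -F t -z -P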
Hello François,
- With read-only workloads you can make shared_buffers very large, like
40% of the RAM available to the database. Usually you would keep it lower
because in a write-heavy workload large shared_buffers causes checkpoints
to have huge IO, but since you are not making changes
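For instance, on a hypothetical machine with 64GB of RAM dedicated to a
read-only database:

    shared_buffers = 25GB   # ~40% of RAM, higher than the usual write-workload guidance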
Hello Ben,
Looks like you need to tune autovacuum to be more aggressive. Make sure
autovacuum=on (the default), increase autovacuum_max_workers (at least 1
per database, more if autovacuum is falling
behind), and decrease autovacuum_vacuum_scale_factor to ~half of the
default; it can also be set per table to be
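For example (values illustrative, not tuned to your workload):

    # postgresql.conf
    autovacuum = on
    autovacuum_max_workers = 6             # at least 1 per database
    autovacuum_vacuum_scale_factor = 0.1   # ~half the 0.2 default

    -- or per table:
    ALTER TABLE busy_table SET (autovacuum_vacuum_scale_factor = 0.05);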
On Mon, May 18, 2015 at 10:54 AM, François Battail <
francois.batt...@sipibox.fr> wrote:
> Le 18/05/2015 16:38, William Dunn a écrit :
>
> * You can also run a CLUSTER command on one of your indexes to group
>> data that is frequently accessed together into the same s
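For example (table and index names hypothetical):

    -- rewrites the table in index order; takes an exclusive lock while running
    CLUSTER orders USING orders_created_at_idx;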
Hello,
Just had an idea and could use some feedback. If we start a transaction,
leave it idle, and use pg_export_snapshot() to get its snapshot_id MVCC
will hold all the tuples as of that transaction's start and any other
transaction can see the state of the database as of that time using SET
TRANSACTION SNAPSHOT.
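A minimal sketch of the mechanics (the snapshot id shown is illustrative):

    -- session 1: export the snapshot and keep the transaction open
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SELECT pg_export_snapshot();   -- returns e.g. '000003A1-1'

    -- session 2: see the database as of session 1's snapshot
    BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
    SET TRANSACTION SNAPSHOT '000003A1-1';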
Melvin - thanks for sharing.
Here is the query I use, which lists the percent of queries against the
table that use the index, ordered by least used first.
The 'pg_relation_size(relid)>(5*8192)' filter removes any tables that
would be so small the optimizer would just choose a table scan.
SELE
)>(5*8192)
AND NOT ((pg_stat_user_indexes.idx_scan=0
OR pg_stat_user_indexes.idx_scan IS NULL)
AND pg_stat_user_tables.seq_scan=0)
ORDER BY perc_idx_used;
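A self-contained version of the kind of query described above would look
something like this (a sketch along the same lines, not verbatim):

    SELECT s.schemaname,
           s.relname,
           s.indexrelname,
           round(100.0 * s.idx_scan
                 / nullif(t.seq_scan + t.idx_scan, 0), 2) AS perc_idx_used
    FROM pg_stat_user_indexes s
    JOIN pg_stat_user_tables t USING (relid)
    WHERE pg_relation_size(s.relid) > (5 * 8192)
      AND NOT ((s.idx_scan = 0 OR s.idx_scan IS NULL)
               AND t.seq_scan = 0)
    ORDER BY perc_idx_used;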
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, May 26, 2015 at 10:31 AM, William Dunn wrote:
On Fri, May 22, 2015 at 4:50 AM, Albe Laurenz
wrote:
> William Dunn wrote:
> > Just had an idea and could use some feedback. If we start a transaction,
> leave it idle, and use
> > pg_export_snapshot() to get its snapshot_id MVCC will hold all the
> tuples as
In 9.1+ you can monitor the state of your slave easily with
the state field of pg_stat_replication:
SELECT pid,        -- procpid in 9.1
       usesysid,
       usename,
       client_addr,
       client_port,
       state
FROM pg_stat_replication;
If the standby is
Hello,
PostgreSQL has a fully standards-compliant ODBC driver (see:
https://odbc.postgresql.org/). Any application designed to communicate with
a DBMS over an ODBC connection should be able to use that driver to
communicate with PostgreSQL. Most applications that interact with databases
come with ODBC
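For example, a psqlODBC connection string typically looks something like
this (all values hypothetical):

    Driver={PostgreSQL Unicode};Server=db.example.com;Port=5432;Database=mydb;Uid=appuser;Pwd=secret;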
Hello Adrian,
May I ask why you need a non-standard model? By standard models I mean the
following:
1) When you don't need to have subclass-specific database constraints: all
subclasses go in the same table, and subclasses that do not have an
attribute have that column null. This has the best performanc
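A minimal sketch of model 1, with hypothetical names:

    CREATE TABLE party (
        id          serial PRIMARY KEY,
        party_type  text NOT NULL,   -- discriminator
        name        text NOT NULL,
        -- subclass-specific attributes, null where not applicable:
        tax_id      text,            -- organizations only
        birth_date  date             -- persons only
    );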
>
> ORM is a given in my case. This is not a high performance application.
>
> Freundliche Grüsse
>
> Adrian Stern
> unchained - web solutions
>
> adrian.st...@unchained.ch
> +41 79 292 83 47
>
> On Tue, Jun 2, 2015 at 5:35 PM, William Dunn wrote:
>
>> H
Though I'm sure you've already looked into it, for your specific issue of
getting row counts:
- In PostgreSQL 9.2 and above this operation can be made much faster with
index-only scans, so ensure you are on a recent version and do your count on
a column of a candidate key with an index (for example, the primary key).
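For example (table and column names hypothetical):

    -- an index on a candidate-key column lets 9.2+ answer the count
    -- from the index alone (an index-only scan)
    CREATE INDEX orders_id_idx ON orders (id);
    SELECT count(id) FROM orders;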
Hello,
Does anyone know which is a more accurate estimate of a table's live
rows: pg_class.reltuples (
http://www.postgresql.org/docs/current/static/catalog-pg-class.html)
OR pg_stat_all_tables.n_live_tup (
http://www.postgresql.org/docs/current/static/monitoring-stats.html#PG-STAT-ALL-TABLES-VIEW)?
In
Thanks so much Tom!
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Wed, Jun 17, 2015 at 3:48 PM, Tom Lane wrote:
> William Dunn writes:
> > Does anyone know which is a more accurate estimate of a table's live
> > rows: pg_class.reltuples (
> > http:/
Hello Rick,
As I understand it you are correct. Oracle/DB2/Postgres, and I think the SQL
standard, do not implement constraints against tables on foreign servers.
Although it would be possible to develop the DBMS to handle such
constraints in a heterogeneous distributed environment, it would be unwi
Hello Lukasz,
On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel <
lukasz.wro...@motorolasolutions.com> wrote:
>
> There doesn't seem to be any issues with disk space, memory or CPU, as
> neither of those is even 50% used (as per df and top).
>
Are you using the default PostgreSQL configuration sett
prevent vacuum from removing old tuples (because they are
still in scope for that transaction)
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jun 30, 2015 at 4:27 PM, William Dunn wrote:
> Hello Lukasz,
>
> On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wrobel <
>
Sorry I meant to say, "To track transactions that *have been* left idle but
not committed or rolled back you would..."
Typo
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jun 30, 2015 at 4:33 PM, William Dunn wrote:
> On Tue, Jun 30, 2015 at 8:57 AM, Lukasz Wr
the view's fields and their
datatypes, but also their meaning, what they will be in each specific
Postgres version, and any additional notes the community deemed useful
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jun 30, 2015 at 5:31 PM, Jerry Sievers
wrote:
>
On Wed, Jul 8, 2015 at 5:20 PM, Joshua D. Drake
wrote:
>
> On 07/08/2015 12:47 PM, John McKown wrote:
>
>
>> Why are they converting?
>>
>> Would EnterpriseDB (a commercial version of PostgreSQL which has
>> extensions to make it a "drop in" replacement for Oracle) be a
>> possibility?
>> http:
On Tue, Jul 14, 2015 at 2:19 PM, Marc Mamin wrote:
>
> Any rule of thumb for the number of pages per relation at which it is
> worth starting to index?
The code for the monitoring tool check_postgres uses table size larger than
5.12kb as a rule of thumb, expecting that for tables smaller than 5.
Hello Aviel,
On Tue, Jul 21, 2015 at 3:56 AM, Aviel Buskila wrote:
>
> How can I set a highly available postgresql in a share-nothing
> architecture?
>
I suggest you review the official documentation on high-availability
configurations linked below:
http://www.postgresql.org/docs/current/static/
J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jul 21, 2015 at 1:55 PM, Aviel Buskila wrote:
> Can you link me up to a good tutorial using pgpool-II?
>
> 2015-07-21 20:02 GMT+03:00 Joshua D. Drake :
>
>>
>> On 07/21/2015 08:34 AM, William Dunn wrote:
>
source tool instead of developing the fail-over logic by
> myself?
>
> 2015-07-21 18:34 GMT+03:00 William Dunn :
>
>> Hello Aviel,
>>
>> On Tue, Jul 21, 2015 at 3:56 AM, Aviel Buskila wrote:
>>>
>>> How can I set a highly available postgresql in a share
As I am aware, you would have two options depending on your configuration:
1. Change the primary_conninfo value in the second standby's
recovery.conf to point to the standby that has been promoted to master.
However, I think this would require that instance to be rebooted for the
configuration change to take effect.
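The surviving standby's recovery.conf would end up looking something like
this (host name hypothetical):

    standby_mode = 'on'
    primary_conninfo = 'host=promoted-standby.example.com port=5432 user=replicator'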
goes down but pgpool-II is fine? The watchdog appears
to be monitoring the pgpool-II process, not the postgres/postmaster process.
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jul 21, 2015 at 3:31 PM, Joshua D. Drake
wrote:
>
> On 07/21/2015 11:04 AM, William Dunn wrote
.com <http://willjdunn.com>*
On Tue, Jul 21, 2015 at 4:27 PM, Joshua D. Drake
wrote:
>
> On 07/21/2015 01:21 PM, William Dunn wrote:
>
>> That's pretty cool! But the intended use of watchdog is so you can have
>> multiple pgpool-II instances and failover among
the pgpool-II instances with each other. It is for
maintaining availability of pgpool-II and monitoring for failure of
pgpool-II backends, not Postgres/postmaster.
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jul 21, 2015 at 5:12 PM, Joshua D. Drake
wrote:
>
> On 07/2
ux-HA well enough to know of any limitations or whether it should
be recommend
http://linux-ha.org/doc/man-pages/re-ra-pgsql.html
*Will J. Dunn*
*willjdunn.com <http://willjdunn.com>*
On Tue, Jul 21, 2015 at 5:35 PM, William Dunn wrote:
> On Tue, Jul 21, 2015 at 5:12 PM, Joshua
oss.co.jp/index_en.php
> Japanese:http://www.sraoss.co.jp
>
> > *Will J. Dunn*
> > *willjdunn.com <http://willjdunn.com>*
> >
> > On Tue, Jul 21, 2015 at 4:27 PM, Joshua D. Drake
> > wrote:
> >
> >>
> >> On 07/21/2015 01:21 PM, Will
Hello Jan,
I think your calculation is slightly off because, per the docs, when
PostgreSQL comes within 1 million of the age at which an actual wraparound
occurs it will go into the safety shutdown mode. Thus the calculation
should be ((2^32)-1)/2-1000000 rather than just ((2^32)-1)/2 as I think you
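You can watch the distance to wraparound with something like:

    -- highest transaction age per database; compare against
    -- ((2^32)-1)/2 - 1000000 = 2146483647
    SELECT datname, age(datfrozenxid) AS xid_age
    FROM pg_database
    ORDER BY age(datfrozenxid) DESC;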