On Thu, Oct 12, 2017 at 5:13 AM, ROS Didier wrote:
> Hi
>
> I would like your advice and recommendations about the
> following infrastructure problem:
>
> What is the best way to optimize synchronization between a PostgreSQL
> instance on a Windows 7 workstation and an Oracle 11gR2
I use materialized views to cache results from a foreign data wrapper to a
high-latency, fairly large (cloud) Hadoop instance. To boost refresh times,
I split the FDW and materialized views up into partitions.
Note: I can't use pg_partman or native partitioning because those don't
really support materialized views.
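A minimal sketch of the manual partitioning, with all names illustrative
(one foreign table per remote slice, one matview per foreign table):

CREATE MATERIALIZED VIEW mv_part_2017q1 AS
    SELECT * FROM fdw_part_2017q1;
CREATE MATERIALIZED VIEW mv_part_2017q2 AS
    SELECT * FROM fdw_part_2017q2;
-- a plain view stitches the pieces back together for readers
CREATE VIEW mv_all AS
    SELECT * FROM mv_part_2017q1
    UNION ALL
    SELECT * FROM mv_part_2017q2;
-- each piece refreshes independently, so one slow remote slice
-- doesn't hold up the others
REFRESH MATERIALIZED VIEW mv_part_2017q1;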
Moving that many gigs of data across your network could also take a long
time, depending on your network configuration. Before spending a
huge amount of energy tuning PostgreSQL, I'd probably look at how long it
takes to simply copy 20 or 30 GB of data between the two machines.
> On 14 Aug
be much higher than the
actual number of spindles. It is worth experimenting with. If you can,
try several values. You can use pgbench to put consistent workloads on
your database for measurement purposes.
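For example, effective_io_concurrency can be set per session (assuming that
is the setting under discussion here), which makes it easy to try several
values against the same query; the table and values are illustrative:

SET effective_io_concurrency = 64;   -- try 2, 16, 64, 256, ...
EXPLAIN (ANALYZE, BUFFERS)
    SELECT count(*) FROM big_table WHERE indexed_col BETWEEN 1000 AND 50000;
-- the setting mainly affects bitmap heap scans, so compare those timings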
Charles
>
> On Mon, Jul 10, 2017 at 5:25 PM, Rick Otten
> wrote:
>
>> Alt
Although probably not the root cause, at the least I would set up hugepages
(
https://www.postgresql.org/docs/9.6/static/kernel-resources.html#LINUX-HUGE-PAGES
), and bump effective_io_concurrency up quite a bit as well (256?).
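For reference, a sketch of the two pieces involved; the page count below is
illustrative and should be derived from shared_buffers as the linked docs
describe:

# postgresql.conf
huge_pages = try          # 'try' falls back to normal pages if none are free
# /etc/sysctl.conf (Linux side)
vm.nr_hugepages = 5000    # illustrative; size from shared_buffers / huge page size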
On Mon, Jul 10, 2017 at 10:03 AM, Charles Nadeau
wrote:
> I’m run
>
>
> If you _can't_ do
>> that due to cloud restrictions, you'd actually be better off doing an
>> atomic swap.
>>
>> CREATE MATERIALIZED VIEW y AS ...;
>>
>> BEGIN;
>> ALTER MATERIALIZED VIEW x RENAME TO x_old;
>> ALTER MATERIALIZED VIEW y RENAME TO x;
>> DROP MATERIALIZED VIEW x_old;
>> COMMIT;
On Thu, Jul 6, 2017 at 11:25 AM, Shaun Thomas
wrote:
> > I'm curious if I'm overlooking other possible architectures or tools
> that might make this simpler to manage.
>
> One of the issues with materialized views is that they are based on
> views... For a concurrent update, it essentially perfor
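For reference, the concurrent form builds a fresh copy, diffs it against the
existing view, and applies the changes so readers aren't blocked - which is
also why it requires a unique index. A minimal sketch, reusing the x from
the swap example above:

CREATE UNIQUE INDEX ON x (id);              -- required by the concurrent form
REFRESH MATERIALIZED VIEW CONCURRENTLY x;   -- readers keep working during the refresh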
I'm pondering approaches to partitioning large materialized views and was
hoping for some feedback and thoughts on it from the [perform] minds.
PostgreSQL 9.6.3 on Ubuntu 16.04 in the Google Cloud.
I have a foreign table with 250M or so rows and 50 or so columns, with a
UUID as the primary key.
ient and the server. If I use connection pooling, surely this
> introduces latency – getting a connection from the pool and establishing
> the connection?
>
> Am I missing something?
>
> *From:* Rick Otten [mailto:rottenwindf...@gmail.com]
> *Sent:* 27 May 2017 1
You should have a layer such as pgbouncer between your pg instance and your
application. It is designed to mitigate the access latency issues you
describe.
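A minimal pgbouncer.ini sketch (hosts, sizes, and paths are illustrative,
not a tuned configuration):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20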
On May 26, 2017 10:03 AM, "kevin.hug...@uk.fujitsu.com" <
kevin.hug...@uk.fujitsu.com> wrote:
> Hi,
>
> This is a genera
On Thu, May 25, 2017 at 3:48 PM, Ravi Tammineni <
rtammin...@partner.aligntech.com> wrote:
> Hi,
>
> What is the best monitoring tool for a Postgres database? Something like
> Oracle Enterprise Manager.
>
> Specifically I am interested in tools to help:
>
> Alert DBAs to problems with bo
You need to include "%u" in the log_line_prefix setting in your
postgresql.conf. Like this:
log_line_prefix = '%m %d %h %u '
>
> #log_line_prefix = '' # special values:
>
> # %a = application name
>
> # %u = user name
>
> #
Would turning on logging of temp files help? That often reports the query
that is using the temp files:
log_temp_files = 0
It probably wouldn't help if the cursor query never pulls from a temp file,
but if it does ...
On Fri, May 19, 2017 at 7:04 PM, Jeff Janes wrote:
> I'm spoiled by using pg
I always bump it up, but usually just to 4096, because I often have queries
that are longer than 1024 and I'd like to be able to see the full query.
I've never seen any significant memory impact. I suppose if you had
thousands of concurrent queries it would add up, but if you only have a few
dozen
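Assuming the setting under discussion is track_activity_query_size (it is
what truncates query text shown in pg_stat_activity at 1024 bytes by
default), the change would be:

# postgresql.conf -- this one needs a full server restart, not just a reload
track_activity_query_size = 4096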
On Fri, Apr 7, 2017 at 11:29 AM, David G. Johnston <
david.g.johns...@gmail.com> wrote:
> On Fri, Apr 7, 2017 at 8:18 AM, Dinesh Chandra 12108 <
> dinesh.chan...@cyient.com> wrote:
>
>> Dear Vinny,
>>
>> Thanks for your valuable reply.
>>
>> but I need a select query which selects only that record
On Wed, Apr 5, 2017 at 6:40 AM, Johann Spies wrote:
> On 4 April 2017 at 14:07, Johann Spies wrote:
>
> > Why would that be?
>
> To answer my own question: after experimenting a lot, we found that
> 9.6 uses a parallel seq scan that is actually a lot faster than using
> the index on these large tables.
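For anyone who wants to check this on their own 9.6 install, a sketch with
an illustrative table name:

SET max_parallel_workers_per_gather = 4;   -- 9.6 ships with this at 0 (off)
EXPLAIN ANALYZE SELECT count(*) FROM big_table;
-- a parallel plan shows a Gather node with a Parallel Seq Scan beneath it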
COPY
> database.
> What exactly is it doing?
>
> Regards,
> Dinesh Chandra
>
> -----Original Message-----
> From: vinny [mailto:vi...@xs4all.nl]
> Sent: 27 February, 2017 7:31 PM
> To: John Gorman
> Cc: Rick Otten ; Dinesh Chandra 12108 <
> dinesh.chan...@cyient.c
Although it doesn't really tell you whether the pg_dump was successful (you'll need
to do a full restore to be sure), I generate an archive list. If that
fails, the backup clearly wasn't successful, and if it succeeds, odds are
pretty good that it worked:
-- bash code snippet --
archiveList=$(pg_restore -l "$dumpFile")   # $dumpFile (assumed) holds the dump path; a non-zero exit means a bad archive
Actually, I think this may be the way Oracle hot backups work. It was my
impression that the feature temporarily suspends writes into a specific
tablespace so you can take a snapshot of it. It has been a few years since
I've had to do Oracle work, though, and I could be misremembering. People
may be
I suggest SymmetricDS (http://symmetricds.org).
I've had good luck using it to aggregate data from a heterogeneous suite
of database systems and versions back to a single back-end data mart for
exactly this purpose.
On Fri, Jan 6, 2017 at 2:24 PM, Ivan Voras wrote:
> Hello,
>
> I'm inves
If I construct the materialized view with an 'order by', I can use a BRIN
index to sometimes significant performance advantage, at least for the
primary sort field. I have observed that even though the first pass is a
little lossy and I get index rechecks, it is still much faster than a
regular
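The pattern is roughly this (names illustrative):

CREATE MATERIALIZED VIEW mv_events AS
    SELECT * FROM events ORDER BY created_at;   -- physical order keeps BRIN block ranges tight
CREATE INDEX ON mv_events USING brin (created_at);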
> Rick, what did you mean by kernel configuration? The OS is a standard
Ubuntu 16.04:
>
> - Linux 4.4.0-45-generic #66-Ubuntu SMP Wed Oct 19 14:12:37 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
>
> Do you think losing half the number of cores can explain my performance
issue? (AMD 8 cores down to Hasw
How did you migrate from one system to the other?
[ I recently moved a large time series table from 9.5.4 to 9.6.1 using dump
and restore. Although the restore put the BRIN index back on the time column,
it was borked. Reindexing didn't help. I had to switch it to a regular btree
index. I think the
Are the tables constantly being written to, or is this a mostly read
scenario? One architecture possibility, if the writes are not so
frequent, is to create just a handful of very big tables for writing, and
then make smaller tables as materialized views for reading. The vacuum and
bloat management
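A rough sketch of that shape, with an illustrative schema:

-- one big table absorbs all the writes
CREATE TABLE readings (
    id        bigserial PRIMARY KEY,
    sensor_id int,
    ts        timestamptz,
    val       numeric
);
-- small, read-optimized slices are rebuilt on a schedule
CREATE MATERIALIZED VIEW recent_readings AS
    SELECT sensor_id, ts, val
    FROM readings
    WHERE ts > now() - interval '1 day';
REFRESH MATERIALIZED VIEW recent_readings;  -- on whatever cadence reads tolerate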
In Unix/Linux with many of the common file system types, if you delete a
file, but a process still has it open, it will continue to "own" the disk
space until that process closes the file descriptor or dies. If you try
"ls" or other file system commands, you won't actually see the file there,
yet
Would a bit string column work? --
http://www.postgresql.org/docs/9.5/static/datatype-bit.html
You might need to use a lot of bitwise OR statements in the query though if
you are looking at very sparse sets of specific values...
Something like the get_bit() function might allow you to select a sp
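For example, a sketch with a hypothetical flags column; a bitwise AND
against a mask avoids enumerating every bit:

CREATE TABLE items (id int, flags bit(8));
SELECT id FROM items
WHERE (flags & B'00100000') <> B'00000000';   -- true when the third bit is set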
I have another suggestion. How about putting the images in RethinkDB?
RethinkDB is easy to set up and manage, and is scalable and easy (almost
trivial) to cluster. Many of the filesystem disadvantages you mention
would be much more easily managed by RethinkDB.
A while back I wrote a Foreign Data Wrapper
There is parallel sequential scan coming in 9.6 --
http://rhaas.blogspot.com/2015/11/parallel-sequential-scan-is-committed.html
And there is the GPU extension - https://wiki.postgresql.org/wiki/PGStrom
If those aren't options, you'll want your table as much in memory as
possible so your scan d
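pg_prewarm (a contrib module since 9.4) is one way to pull the table into
cache deliberately; the table name is illustrative:

CREATE EXTENSION IF NOT EXISTS pg_prewarm;
SELECT pg_prewarm('big_table');   -- loads the relation into shared_buffers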
eating an 8-disk RAID10.
> Would LVM make this type of addition easier?
>
>
> On Wed, Feb 24, 2016 at 6:08 AM, Rick Otten
> wrote:
>
>>
>> 1) I'd go with xfs. zfs might be a good alternative, but the last time I
>> tried it, it was really unstable (on Linux)
1) I'd go with xfs. zfs might be a good alternative, but the last time I
tried it, it was really unstable (on Linux). It may have gotten a lot
better, but xfs is a safe bet and well understood.
2) An LVM is just an extra couple of commands. These days that is not a
lot of complexity given what y
Having gotten used to cloud servers over the past few years, after being a
server hugger for more than 20 years before that, I have to say the cloud
offers a number of huge advantages that would make me seriously question
whether there are very many good reasons to go back to using local iron at
all.
You can definitely overload most systems by trying to start too many
connections at once. (This is actually true for most relational
databases.) We used to see this scenario when we'd start a bunch web
servers that used preforked apache at the same time (where each fork had
its own connection).
Why does it index scan when I use where, but not when I do a join?
On Fri, Dec 11, 2015 at 7:20 PM, Andreas Kretschmer wrote:
>
> > Rick Otten hat am 11. Dezember 2015 um 23:09
> > geschrieben:
>
> >
> > The query performance hit for sequence scanning isn
ought to be able to build a reproducible test case to
share - at that time I'll see if I can open it up as a real bug. For now
I'd rather focus on understanding why my select uses an index but a join
doesn't.
On Fri, Dec 11, 2015 at 4:44 PM, Andreas Kretschmer wrote:
>
>
e couple
with only a few rows), and doesn't sequence scan for the mypk column at all.
On Fri, Dec 11, 2015 at 2:44 PM, Andreas Kretschmer <
akretsch...@spamfence.net> wrote:
> Rick Otten wrote:
>
> > I'm using PostgreSQL 9.5 Beta 2.
> >
> > I am working
I'm using PostgreSQL 9.5 Beta 2.
I am working with a partitioned table set.
The first thing I noticed, when creating indexes on the 20 or so
partitions, was that if I create them too fast they don't all succeed. I
have to do a few at a time, let them breathe for a few seconds, and then do
a few
In our use case, the default autovacuum settings did not work; I guess we are
in the 5% group of users. The default settings were too aggressive when
autovacuum ran against some of our larger tables (example: 100M rows by 250
columns) in our front-end OLTP database, causing severe performance degradation
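Per-table storage parameters are one way out of that; the values below are
illustrative, not our production settings:

ALTER TABLE big_wide_table SET (
    autovacuum_vacuum_cost_delay   = 20,    -- throttle autovacuum I/O on this table
    autovacuum_vacuum_scale_factor = 0.01   -- ...while visiting it in smaller bites
);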
>> I'm not convinced about the need for a BBU with SSDs - you *can* use them
>> without one, just need to make sure about suitable longevity and also
>> the presence of (proven) power off protection (as discussed
>> previously). It is worth noting that using unproven SSDs, or SSDs known
>> to be lacking power-off protection,
I recommend SymmetricDS - http://www.symmetricds.org
-----Original Message-----
From: pgsql-performance-ow...@postgresql.org
[mailto:pgsql-performance-ow...@postgresql.org] On Behalf Of suhas.basavaraj12
Sent: Wednesday, November 28, 2012 1:12 PM
To: pgsql-performance@postgresql.org
Subject: [PE
5:08 PM
To: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] average query performance measuring
On 21.8.2012 20:35, Rick Otten wrote:
> I have a PostgreSQL 9.1 cluster. Each node is serving around 1,000
> queries per second when we are at a 'steady state'.
>
> W
still have to parse the logs to get the
data.)
It seems like we almost have everything we need to track this in the stats
tables, but not quite. I was hoping the folks on this list would have some
tips on how to get query performance trends over time out of each node in my
cluster.
Thanks
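One avenue is pg_stat_statements - it exists for 9.1, though the column
names have shifted across versions; a sketch:

-- postgresql.conf: shared_preload_libraries = 'pg_stat_statements' (restart)
CREATE EXTENSION pg_stat_statements;
SELECT query, calls, total_time / calls AS avg_time
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;
-- snapshot these counters on a schedule and diff them to get trends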