Hello, we have a PostgreSQL database server, and around it there are 8
clients that get data from it.
All servers are in the same datacenter, I think (traceroute shows only 4-5 hops).
Now we plan to move the server to another hoster/datacenter.
I have done simple tests with ping and traceroute.
Is there a wa
Hello,
is there a recommendation for setting max_worker_processes and
max_parallel_workers_per_gather per CPU core or thread?
Best regards
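A rough rule of thumb often given (not an official recommendation; the right
values depend on workload and concurrency) is to size max_worker_processes
around the number of cores/threads and keep max_parallel_workers_per_gather
small, around 2-4 per query. A minimal sketch, assuming a hypothetical 8-core
machine and PostgreSQL 9.6 or later:

    ALTER SYSTEM SET max_worker_processes = 8;             -- roughly one per core/thread
    ALTER SYSTEM SET max_parallel_workers_per_gather = 2;  -- 2-4 per query is a common choice
    SELECT pg_reload_conf();  -- per-gather setting is picked up on reload;
                              -- max_worker_processes only takes effect after a restart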
Dear sirs.
I have an issue which I am at a loss to solve. Because of the behaviour I am
not certain that this is a “slow query” problem, so I am sending my question here.
My question regards a query using deep subqueries. EXPLAIN gives a maximum cost
of about 2.000 (details below), but the query runs
Try changing the from_collapse_limit value first: for instance, run SET
from_collapse_limit = 1; and then your query in the same session.
Have you run ANALYZE lately?
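For example, a minimal sketch of that sequence (the query below is a
hypothetical stand-in, since the original query is not shown):

    SET from_collapse_limit = 1;   -- keep sub-SELECTs from being pulled up into one big join search
    EXPLAIN ANALYZE
    SELECT *
    FROM (SELECT oid, relname, relnamespace FROM pg_class) AS sub
    JOIN pg_namespace n ON n.oid = sub.relnamespace;
    RESET from_collapse_limit;
    ANALYZE;                       -- refresh planner statistics for all tables in the database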
On Thu, Apr 27, 2017 at 6:18 AM, basti wrote:
> Hello, we have a PostgreSQL database server, and around it there are 8
> clients that get data from it.
>
> All servers are in the same datacenter, I think (traceroute shows only 4-5 hops).
>
> Now we plan to move the server to another hoster/datacenter.
>
> I h
Hello,
and thanks for your reply.
SET from_collapse_limit=1;
made no difference, I'm afraid.
I ran both VACUUM and ANALYZE before I posted, and there was no noticeable
difference from either of them.
Regards,
/Viktor
> On 27 apr 2017, at 15:54, pinker wrote:
>
> Try to change from_collaps
Hi all,
currently I am preparing my session (PostgreSQL upgrade best practices) for the
Swiss PGDay 2017 (http://www.pgday.ch/2017/). What I plan to come up with is:
- Support policy
- Release notes
- Why it is important to upgrade (or at least to patch to the latest minor
release)
- Minor
Hi,
On 2017-04-24 21:17:43 -0700, Andres Freund wrote:
> I've lately seen more and more installations where the generation of
> write-ahead-log (WAL) is one of the primary bottlenecks. I'm curious
> whether that's primarily a "sampling error" of mine, or whether that's
> indeed more common.
>
>
On 04/27/2017 08:41 AM, Daniel Westermann wrote:
Hi all,
currently I am preparing my session (PostgreSQL upgrade best practices)
for the Swiss PGDay 2017 (http://www.pgday.ch/2017/). What I plan to
come up with is:
- Support policy
- Release notes
- Why it is important to upgrade (or at least t
On 04/27/2017 08:59 AM, Andres Freund wrote:
Ok, based on the (few) answers I've got so far, my experience is indeed
skewed. A number of the PG users I interacted with over the last couple of
years had WAL write rates somewhere in the range of 500MB/s to 2.2GB/s
(max I've seen). At that point WA
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
> On 04/27/2017 08:59 AM, Andres Freund wrote:
>
> >
> > Ok, based on the (few) answers I've got so far, my experience is indeed
> > skewed. A number of the PG users I interacted with over the last couple
> > years had WAL write rates somewhe
On 04/27/2017 09:34 AM, Andres Freund wrote:
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
On 04/27/2017 08:59 AM, Andres Freund wrote:
I would agree it isn't yet a widespread issue.
I'm not yet sure about that actually. I suspect a large percentage of
people with such workloads ar
On 2017-04-27 10:29:48 -0700, Joshua D. Drake wrote:
> On 04/27/2017 09:34 AM, Andres Freund wrote:
> > On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
> > > On 04/27/2017 08:59 AM, Andres Freund wrote:
> > >
>
> > > I would agree it isn't yet a widespread issue.
> >
> > I'm not yet sure ab
I have three very similar functions, two of which are fast and one is
slow, and I cannot explain the slowness of the third. All of them, with
the correct arguments, should return the same "optimal set". I present
one of the two fast versions and the slow one, in whole and in part. I'm
using postg
It may interest some to see the Perl Developer 2017 Survey results. Of most
interest to this crowd is question 20:
Q20 What do you use for databases?
MySQL is the most used tool for databases with 46% (367) followed by
PostgreSQL with 30% (241) and SQLite 15% (119). Oracle was the choic
Citus is also now just an extension.
Sounds pretty awesome, I'll certainly consider your system if/when we
decide to make changes.
On 22 April 2017 at 08:41, Andrew Staller wrote:
> Samuel,
>
> Short answer to your questions: (1) TimescaleDB and CitusDB are focusing on
> solving different probl
On 04/27/2017 06:34 PM, Andres Freund wrote:
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
On 04/27/2017 08:59 AM, Andres Freund wrote:
Ok, based on the (few) answers I've got so far, my experience is
indeed skewed. A number of the PG users I interacted with over
the last couple years
On 04/27/2017 07:35 PM, Andres Freund wrote:
On 2017-04-27 10:29:48 -0700, Joshua D. Drake wrote:
On 04/27/2017 09:34 AM, Andres Freund wrote:
On 2017-04-27 09:31:34 -0700, Joshua D. Drake wrote:
On 04/27/2017 08:59 AM, Andres Freund wrote:
I would agree it isn't yet a widespread issue.
On 2017-04-28 01:29:14 +0200, Tomas Vondra wrote:
> I can confirm this observation. I bought the Intel 750 NVMe SSD last year;
> the device has 1GB of DDR3 cache on it (power-loss protected) and can do ~1GB/s
> of sustained O_DIRECT sequential writes. But when running pgbench, I can't push
> more than ~3
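For anyone trying to put a number on their own WAL generation rate, a minimal
sketch using the pre-10 function names (in PostgreSQL 10 these become
pg_current_wal_lsn() and pg_wal_lsn_diff()); the LSN shown is only an example:

    -- Note the current WAL position before starting the workload:
    SELECT pg_current_xlog_location();   -- e.g. returns '2D/4F12A380'
    -- ... run pgbench (or the real workload) for N seconds ...
    -- Then compute how many bytes of WAL were written since that point:
    SELECT pg_xlog_location_diff(pg_current_xlog_location(), '2D/4F12A380') AS wal_bytes;
    -- wal_bytes / N is the sustained WAL write rate in bytes per second.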
Hi all,
I am working with the following topology:
1. Physical replication between server A1 and A2
2. Physical replication between server B1 and B2
3. A1 and B1 are masters, and both have a virtual IP attached
4. Logical replication between A1 and B1 using the virtual IP addresses
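In case it helps to picture step 4, a minimal sketch using the built-in
logical replication syntax of PostgreSQL 10 (the post does not say which tool
is in use; on 9.x this would usually be done with the pglogical extension
instead). The table name, user and virtual IP below are hypothetical:

    -- On A1, reachable through its virtual IP (say 10.0.0.10):
    CREATE PUBLICATION app_pub FOR TABLE accounts;
    -- On B1, subscribing through A1's virtual IP:
    CREATE SUBSCRIPTION app_sub
        CONNECTION 'host=10.0.0.10 port=5432 dbname=app user=replicator'
        PUBLICATION app_pub;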
Hi all,
I'm trying to use json_agg to generate a json array from the rows (which
contain some json columns) I selected. But it seems the returned json contains
some extraneous whitespace characters. Is there an efficient way to get rid of them?
Also a quick question, if the final result is meant
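One approach for the whitespace question, assuming the stray whitespace comes
from json columns (which preserve their input text verbatim): aggregate into
jsonb, which stores a normalized form. A minimal sketch with a hypothetical table:

    -- Hypothetical table; the original post does not show its schema.
    CREATE TABLE items (id int, payload json);
    INSERT INTO items VALUES (1, '{ "a" : 1 }'), (2, '{ "b" :    2 }');
    -- json_agg keeps each json value exactly as it was entered, whitespace included:
    SELECT json_agg(payload) FROM items;
    -- jsonb normalizes the representation, dropping the insignificant whitespace:
    SELECT jsonb_agg(payload::jsonb) FROM items;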