Our application INSERTs data from external sources, and infrequently UPDATEs
the previously-inserted data (currently, it first SELECTs to determine whether
to UPDATE).
I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
running into a problem when the table has >830 columns.
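Since 9.5 the SELECT-then-UPDATE dance can be collapsed into one statement with INSERT .. ON CONFLICT, which is what the unique index enables; a minimal sketch, with made-up table and column names:

```sql
-- Requires a unique index or constraint on (source_id);
-- all names here are hypothetical.
INSERT INTO readings (source_id, recorded_at, value)
VALUES (42, now(), 7.5)
ON CONFLICT (source_id) DO UPDATE
SET recorded_at = EXCLUDED.recorded_at,
    value       = EXCLUDED.value;
```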
On Fri, Nov 17, 2017 at 09:32:23PM +0100, Luca Ferrari wrote:
> Hi all,
> maybe this is trivial, but I need a hint on a way to see a table form
> of the MCVs and MCFs out of pg_stats with a query. Is it possible to
> get a set of rows each with a most common value on one column and the
> correspon
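One way to get that table form is to unnest the two arrays in parallel; a sketch (most_common_vals is anyarray, so it has to round-trip through text):

```sql
-- Pair each most-common value with its frequency, one row apiece;
-- 'mytable' is a placeholder.
SELECT attname,
       unnest(most_common_vals::text::text[]) AS mcv,
       unnest(most_common_freqs)              AS mcf
FROM pg_stats
WHERE tablename = 'mytable';
```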
Is this expected behavior ?
This works:
ts=# SELECT x'0000000F'::int;
int4|15
.. but an additional leading zero causes it to fail:
ts=# SELECT x'00000000F'::int;
ERROR: 22003: integer out of range
LOCATION: bittoint4, varbit.c:1575
|/* Check that the
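The width check is the key: a hex bit-string literal is 4 bits per digit, leading zeroes included, and bittoint4() rejects anything wider than 32 bits. If the hex digits come from data, one workaround is to pad to exactly 8 digits before casting (a sketch using the text-to-bit cast):

```sql
-- 9 hex digits = 36 bits > 32, hence the error above.
-- Pad to exactly 8 digits (32 bits) instead:
SELECT ('x' || lpad('F', 8, '0'))::bit(32)::int;
```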
On Sun, Apr 16, 2017 at 08:02:54PM -0700, Guyren Howe wrote:
> Seems like a simple question, but I’ve never found a good answer to this and
> similar issues.
>
> I would think it was safe to let me add columns to a view on which other
> views depend, but Postgres won’t let me.
>
> I can imagine
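For reference, CREATE OR REPLACE VIEW only tolerates appending columns at the end; dropping, renaming, or retyping an existing output column is rejected, and with dependent views the remaining option is DROP VIEW .. CASCADE and recreate. A sketch with hypothetical names:

```sql
-- Appending is allowed: existing columns a, b keep their names,
-- types, and positions; c is new at the end.
CREATE OR REPLACE VIEW v AS
SELECT a, b, c FROM t;
-- Changing a or b instead would require:
--   DROP VIEW v CASCADE;  -- and recreating every dependent view
```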
When doing a dump+restore upgrade, it's commonly recommended to use the later
version of pg_restore:
https://www.postgresql.org/docs/current/static/upgrading.html
"It is recommended that you use the pg_dump and pg_dumpall programs from the
newer version of PostgreSQL, to take advantage of enhancem
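In practice that means invoking the new cluster's binaries against the old server; an illustrative pipeline (paths, hosts, and ports are examples only):

```sh
# New-version pg_dumpall reading from the old server, piped into
# the new cluster; adjust paths and ports to your installation.
/usr/pgsql-10/bin/pg_dumpall -h oldhost -p 5432 | psql -p 5433 postgres
```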
On Mon, May 08, 2017 at 10:12:18AM -0700, Paul A Jungwirth wrote:
> I'm working on a problem where partitioning seems to be the right
> approach, but we would need a lot of partitions (say 10k or 100k).
> Everywhere I read that after ~100 child tables you experience
> problems. I have a few questio
On Mon, May 15, 2017 at 12:55:48PM -0700, Ken Tanzer wrote:
> Hi. On a Centos 6.9 server (in the cloud with Rackspace), I'm wanting to
> install PGDG 9.6 alongside the already-running 9.2. After installing the
> 9.6 packages (and even before doing an initdb), I am no
> longer able to make a local
On Wed, May 24, 2017 at 08:24:15AM -0400, Bill Moran wrote:
> ... I tried allocating 64G to shared buffers and we had a bunch of problems
> with inconsistent performance, including "stall" periods where the database
> would stop responding for 2 or 3 seconds. After trying all sorts of tuning
> opt
On Sat, Jun 10, 2017 at 03:16:26PM -0400, Tom Lane wrote:
> Rob Nikander writes:
> >> On Jun 10, 2017, at 10:34 AM, Tom Lane wrote:
> >> […] but it'd be better to adjust the query to ensure a deterministic
> >> update order.
>
> > Thank you for the answer. Since `update` has no `order by` clause
I know PG 10 will have support "CREATE STATISTICS.." for this..
..but I wondered if there's a recommended workaround in earlier versions ?
We had two issues:
1) improper estimate caused poor plans (nested loops due to ::date, GROUP BY,
cross-column stats, and maybe more).
2) memory explosion in
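For comparison, the PG 10 feature being referred to looks like this (statistics, table, and column names hypothetical):

```sql
-- Declare a functional dependency between correlated columns so the
-- planner stops multiplying their selectivities; PostgreSQL 10+ only.
CREATE STATISTICS stats_t_ab (dependencies) ON a, b FROM t;
ANALYZE t;
```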
On Mon, Jun 12, 2017 at 08:46:57PM -0700, Jeff Janes wrote:
> On Mon, Jun 12, 2017 at 8:17 PM, Justin Pryzby wrote:
>
> > I know PG 10 will have support "CREATE STATISTICS.." for this..
> >
> > ..but I wondered if there's a recommended workaround in earlier
I wondered if anyone had considered allowing inheritance children to have
different column types than the parent (and each other).
I'm thinking of the trivial (?) case of smallint/int/bigint. Reason is that
when we load data which exceeds the threshold for the current data type we have
to promote
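Today that promotion is an ALTER TYPE, which rewrites the whole table and rebuilds its indexes; per-child types would confine that cost to one child. A sketch (names hypothetical):

```sql
-- Rewrites every row of the child table and rebuilds its indexes:
ALTER TABLE child_201707 ALTER COLUMN qty TYPE bigint;
```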
I've seen this before while doing SET STATISTICS on a larger number of columns
using xargs, but just came up while doing ADD of a large number of columns.
Seems to be roughly linear in number of children but superlinear WRT columns.
I think having to do with catalog update / cache invalidation with
On Tue, Jul 25, 2017 at 06:21:43PM +0530, Krithika Venkatesh wrote:
> I have a table that is partitioned on a numeric column (ID).
>
> Partitioning works when I query the table with no joins.
>
> SELECT * FROM TABLE A a WHERE ID IN (SELECT ID FROM TABLE B b WHERE
> CREATED_TS = CURRENT_TIMESTAMP)
On Mon, Jul 31, 2017 at 09:15:29AM +1000, Alex Samad wrote:
> Hi
>
> I was about to partition a large (?) approx 3T of data 2B rows into
> partition tables but broken up into MM ...
>
> Now I have been reading about limiting the number of partitions otherwise
> it could slow down the parser.
On Mon, Jul 31, 2017 at 10:25:54AM +1000, Alex Samad wrote:
> I note that you link to P10 and I am currently looking at 9.6. The changes
> do look nice for partitioning for p10.
Yes sorry, pg10 is beta - avoid using it except for testing purposes.
> I will add currently we don't delete anything,
On Fri, Aug 18, 2017 at 01:01:45PM +0200, Rob Audenaerde wrote:
> I don't understand why this query:
>
>select count(base.*) from mytable base;
>
> does return multiple rows.
>
>select count(1) from mytable base;
>
> returns the proper count.
>
> There is a column with the name 'count'
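The short answer from that thread: for composite types PostgreSQL treats f(x) and x.f interchangeably, so with a column literally named count, count(base.*) resolves to field selection rather than the aggregate, one row per table row. A reproducible sketch:

```sql
CREATE TEMP TABLE mytable (count int, val text);
INSERT INTO mytable VALUES (1, 'a'), (2, 'b');
-- Resolves to the column "count" of each row, NOT the aggregate,
-- so it returns one row per table row:
SELECT count(base.*) FROM mytable base;
-- Unambiguous aggregate:
SELECT count(*) FROM mytable base;
```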
On Fri, Aug 18, 2017 at 10:47:37PM +0200, Peter J. Holzer wrote:
> On 2017-08-18 06:37:15 -0500, Justin Pryzby wrote:
> > On Fri, Aug 18, 2017 at 01:01:45PM +0200, Rob Audenaerde wrote:
> > > I don't understand why this query:
> > >
> > >
On Fri, Sep 15, 2017 at 12:25:58PM +0200, s19n wrote:
> 1. with "\set AUTOCOMMIT off" in my psqlrc, issue a
> "SELECT * FROM pg_stat_activity;" and leave it there
This probably obtains a read lock on some shared, system tables/indices..
> 2. in a different connection, issue a database REINDEX (of
On Fri, Sep 15, 2017 at 06:49:06AM -0500, Ron Johnson wrote:
> On 09/15/2017 06:34 AM, Justin Pryzby wrote:
> [snip]
> >But you might consider: 1) looping around tables/indices rather than "REINDEX
> >DATABASE", and then setting a statement_timeout=9s for each REINDEX
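i.e. something along these lines, run once per table from a driving script (the 9s value is just an example):

```sql
-- Bound each REINDEX instead of one long REINDEX DATABASE:
SET statement_timeout = '9s';
REINDEX TABLE mytable;   -- repeat per table/index
RESET statement_timeout;
```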
On Wed, Sep 20, 2017 at 01:14:14PM -0500, Jerry Sievers wrote:
> Be curious to hear of issues encountered and particular to eager to know
> if disabling any kernel 4.x features helped.
What was the old kernel/OS ? wheezy / kernel 3.x ?
Perhaps try these ideas ?
https://www.postgresql.org/message
On Tue, Oct 10, 2017 at 01:40:07PM -0700, pinker wrote:
> Hi to all!
>
> We've got problem with a very serious repetitive incident on our core
> system. Namely, cpu load spikes to 300-400 and the whole db becomes
> unresponsive. From db point of view nothing special is happening, memory
> looks fi
On Wed, Oct 11, 2017 at 10:43:26AM -0300, Seamus Abshere wrote:
> I've had an `INSERT INTO x SELECT FROM [...]` query running for more
> then 2 days.
>
> Is there a way to see how big x has gotten? Even a very rough estimate
> (off by a gigabyte) would be fine.
On linux:
Run ps -fu postgres (or
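From SQL, a rough answer is the relation's on-disk size so far, visible from another session even while the INSERT is still running:

```sql
-- Size of table x as written to disk so far, TOAST included:
SELECT pg_size_pretty(pg_table_size('x'));
```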
On Tue, Oct 17, 2017 at 09:06:59AM +0300, Allan Kamau wrote:
> Is there a way to instruct psql not to try reading ~/.pgpass file?
https://www.postgresql.org/docs/current/static/libpq-envars.html
PGPASSFILE behaves the same as the passfile connection parameter.
passfile
Specifies the name of the fi
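So pointing it at an empty or nonexistent file effectively disables ~/.pgpass for that invocation, e.g.:

```sh
# /dev/null is a convenient always-empty passfile;
# host/user/db names are placeholders.
PGPASSFILE=/dev/null psql -h myhost -U myuser mydb
```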
On Fri, Oct 20, 2017 at 03:08:26PM -0800, Israel Brewster wrote:
> Summary: the following query takes around 12 seconds on my test machine. On
> my production machine, it's at half an hour and counting. What's going on?
>
> which, when run on my test server, has this explain analyze output:
> ht
After installing parray_gin extension and pg_upgrading another instance,
\d is failing like so:
[pryzbyj@database ~]$ psql ts -c '\d pg_class'
ERROR: operator is not unique: "char"[] @> unknown
LINE 6: (stxkind @> '{d}') AS ndist_enabled,
^
HINT: Could not choose a best cand
On Sun, Oct 22, 2017 at 02:36:12PM -0400, Tom Lane wrote:
> Justin Pryzby writes:
> > After installing parray_gin extension and pg_upgrading another instance,
> > \d is failing like so:
>
> > [pryzbyj@database ~]$ psql ts -c '\d pg_class'
> > ERROR: ope
On Fri, Oct 27, 2017 at 09:24:40PM +0200, Thomas Kellerer wrote:
> Under which situation does a SELECT query change a block?
https://wiki.postgresql.org/wiki/Hint_Bits
On Mon, Oct 30, 2017 at 12:29:03PM -0400, Adam Brusselback wrote:
> I have some queries that were working in 9.6 which suddenly broke when
> moving to 10.
>
> Digging in, the error i'm getting is: ERROR: source for a
> multiple-column UPDATE item must be a sub-SELECT or ROW() expression
> So the
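The spellings v10 still accepts are exactly those two: a sub-SELECT or an explicit ROW() constructor. A sketch with hypothetical names:

```sql
-- Both forms are accepted by PostgreSQL 10:
UPDATE t SET (a, b) = (SELECT s.x, s.y FROM s WHERE s.id = t.id);
UPDATE t SET (a, b) = ROW(1, 2);
```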
On Wed, Nov 01, 2017 at 12:19:21PM -0700, David G. Johnston wrote:
> On Wed, Nov 1, 2017 at 11:59 AM, Scott Marlowe
> wrote:
>
> > So some of my output from an explain analyze here has a line that says
> > this:
> >
> > Index Scan using warranty_order_item_warranty_order_id_idx on
> > warranty_order
On Wed, Nov 01, 2017 at 04:11:07PM -0400, Adam Brusselback wrote:
> I have something going on, and i'm not sure what is causing it. I
> recently upgraded our development environment to PG10, and the error
> in the subject appeared with one of my analytical functions.
What relation is that ? I gu
On Thu, Nov 02, 2017 at 08:51:23PM +, Rhhh Lin wrote:
> However, this query will run for days without completing. I suspect it has to
> do with the timestamp predicate and lack of using an appropriate index access
> path. This is what I need to verify/establish.
Perhaps the timestamp index i
On Thu, Nov 02, 2017 at 09:13:05PM +, Rhhh Lin wrote:
> Yes, it may be an issue with the index, but I'd like to have some evidence
> towards that before dropping and recreating (It does not appear that bloat is
> a problem here or dead tuples either).
Why do you say those aren't an issue? Jus
On Fri, Nov 03, 2017 at 01:43:32AM +, tao tony wrote:
> I had an asynchronous streaming replication HA cluster. Each node had 64G
> memory. pg is 9.6.2 and deployed on centos 6.
>
> Last month the database was killed by OS kernel for OOM, the checkpoint
> process was killed.
If you still have l
On Fri, Nov 03, 2017 at 09:12:02PM +, Rhhh Lin wrote:
> I checked for dead tuples against that particular table initially as I have
> seen performance problems before in a related 'busy' environment which needed
> its frequency of vacuuming to be increased. So I have a query to check for
> t
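The usual quick check comes from the stats collector (table name is a placeholder):

```sql
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```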
When we upgraded from linux-2.6.24 to linux-2.6.27, our pg_dump
duration increased by 20%. My first attempt at resolution was to boot
with elevator=deadline. However that's actually the default IO
scheduler in both kernels.
The two dmesg's are at:
https://www.norchemlab.com/tmp/linux-2.6.24-22