Greetings!
I am getting the following message using DEBUG logging, telling me that a
particular GiST index is suboptimal. This is for a table with ids and date
ranges that are supposed to be non-overlapping. Here is the index def:
"my_fact_table_id_as_of_date_excl" EXCLUDE USING gist (id WITH =
Greetings!
We are running postgres 11.9 (we were running 11.7 prior to a recent restart)
on a large db (10s of TB) with 5 or 6 tablespaces and 1000s of tables/indexes.
Within the past few days we have started to see a few queries running for
over 8 hours, which we then attempt to terminate, but will no
On Mon, Feb 17, 2020 at 10:46 AM Adrian Klaver
wrote:
>
> How about?:
>
> https://www.postgresql.org/docs/9.6/sql-altertable.html
>
> "Adding a column with a DEFAULT clause or changing the type of an
> existing column will require the entire table and its indexes to be
> rewritten. As an exceptio
On Mon, Feb 17, 2020 at 8:21 AM Tom Lane wrote:
> Jeremy Finzel writes:
> > I have a table foo with 100 million rows, and a column:
> >- id character varying(20)
> > The following command is the one that we expect to execute very quickly
> (we
> > are not
Good morning!
We are a little bit puzzled because running the following command on a 9.6
cluster is apparently requiring a table rewrite, or at least a very long
operation of some kind, even though the docs say that as of 9.2:
-
Increasing the length limit for a varchar or varbit column,
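Concretely, the statement in question is of this form (table and column names
are from the related thread above; a sketch):
-- expected to be a metadata-only change on 9.2+:
ALTER TABLE foo ALTER COLUMN id TYPE character varying(100);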
On Thu, Oct 10, 2019 at 3:09 PM Lev Kokotov wrote:
> Hi Miles,
>
> One issue is keeping the subscriber and the publisher schema identical.
> Running migrations on both the publisher and subscriber does not seem
> atomic to me, therefore I don't have a way to enforce consistency between
> the two.
On Thu, Aug 22, 2019 at 1:46 PM Jeremy Finzel wrote:
> Good afternoon!
>
> I am finding it difficult to understand how to maintain my BRIN index from
> the docs. Specifically, this is the documentation on the
> function brin_summarize_range which isn't clear to me:
>
Good afternoon!
I am finding it difficult to understand how to maintain my BRIN index from
the docs. Specifically, this is the documentation on the
function brin_summarize_range which isn't clear to me:
brin_summarize_range(index regclass, blockNumber bigint) integer
- summarize the page ran
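For reference, a minimal sketch of calling it (index name hypothetical):
-- summarize the page range containing block 0; returns the number of
-- new summaries inserted (0 or 1):
SELECT brin_summarize_range('my_brin_idx', 0);
-- summarize all currently unsummarized ranges:
SELECT brin_summarize_new_values('my_brin_idx');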
>
> Yes: BRIN indexes don't provide any ordering information. A btree
> index on created_at could be used to optimize this query, but without
> one of those, seqscanning the whole table is the only possibility.
>
Thanks Tom. So, this is a very general question, but would it be possible
to develo
Hello -
I have started to make much more use of BRIN indexes on timestamp fields on
tables which are insert-only. I have seen great performance with these and
of course far less overhead.
However, I am noticing that a simple aggregate is not using the index. I
don't find anything obvious in the
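A sketch of the situation, with hypothetical names, consistent with the reply
quoted above (BRIN carries no ordering information, so min/max-style
aggregates cannot use it):
CREATE TABLE scans (id bigserial, created_at timestamptz NOT NULL);
CREATE INDEX scans_created_at_brin ON scans USING brin (created_at);
-- this seqscans; the BRIN index cannot help:
SELECT max(created_at) FROM scans;
-- a btree on the same column can answer it directly:
CREATE INDEX scans_created_at_btree ON scans USING btree (created_at);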
On Sun, Jun 2, 2019 at 6:07 PM Tom Lane wrote:
> Anyway, the larger point here is that right now btree_gin is just a quick
> hack, and it seems like it might be worth putting some more effort into
> it, because the addition of duplicate-compression changes the calculus
> for whether it's useful.
On Fri, May 24, 2019 at 10:25 AM Jeremy Finzel wrote:
> I have been hoping for clearer direction from the community about
> specifically btree_gin indexes for low cardinality columns (as well as low
> cardinality multi-column indexes). In general there is very little
> discussio
I have been hoping for clearer direction from the community about
specifically btree_gin indexes for low cardinality columns (as well as low
cardinality multi-column indexes). In general there is very little
discussion about this both online and in the docs. Rather, the emphasis
for GIN indexes d
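For context, the kind of index under discussion, as a sketch with
hypothetical names:
-- btree_gin lets GIN index scalar types; for a low-cardinality column the
-- few long posting lists compress well.
CREATE EXTENSION IF NOT EXISTS btree_gin;
CREATE TABLE orders (id bigserial, status text NOT NULL);
CREATE INDEX orders_status_gin ON orders USING gin (status);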
>
> A sample of the data you are cleaning up.
>
> I think what people are trying to wrap their heads around is how 800
> lines in the file is being split into two subsets: the organization data
> and the people data. In particular how that is being done to preserve
> the relationship between organiz
On Wed, May 22, 2019 at 12:53 PM Rich Shepard
wrote:
> On Wed, 22 May 2019, Francisco Olarte wrote:
>
> > Also, when I speak of "unique identifier" I'm not speaking of the one in
> > your FINAL tables, I assume you would have at least the *_id field as
> > PKEY, so nothing else needed, but the on
On Wed, May 22, 2019 at 11:07 AM Rich Shepard
wrote:
> On Wed, 22 May 2019, Francisco Olarte wrote:
>
> > You are not reading what we write to you. Note YOU AND ONLY YOU are the
> > one speaking of PK. We are speaking of "unique identifier" ( that would
> > be, IIRC, "candidate keys", you can pee
On Wed, May 22, 2019 at 7:40 AM Rich Shepard
wrote:
> On Wed, 22 May 2019, Francisco Olarte wrote:
>
> > I'm curious, what org_id do you put (manually) to the people? you must
> > have some way to match it ( like, say, "I have an organization line,
> > followed by lines for people in this organiz
On Tue, May 21, 2019 at 12:24 PM Rich Shepard
wrote:
> On Tue, 21 May 2019, Francisco Olarte wrote:
>
> > From how you say it, I assume you have some data in your original
> > dumps which can relate both; let's assume it's org_name, but it may be an
> > org-code. If you do not have it, it means you c
>
> My question was about how you (think you are) setting log_error_verbosity,
> not what you're doing to produce the message. The decisions in elog.c
> about which fields to print in the server log do not depend on the latter.
>
Roger that.
> However... looking again at what elog.c does, I'm p
On Mon, Apr 22, 2019 at 3:47 PM Tom Lane wrote:
> Jeremy Finzel writes:
> > I have a DO block which is raising a log message with number of rows
> > deleted. It also shows CONTEXT messages every time, which I don't want.
> > But setting in the client log_error_verbo
I have a DO block which is raising a log message with the number of rows
deleted. It also shows CONTEXT messages every time, which I don't want.
But setting log_error_verbosity = terse in the client does not work to get
rid of the messages. I can't get it to work even when setting it at the
per-user level.
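A minimal sketch of the kind of block in question (table name hypothetical):
SET log_error_verbosity = terse;  -- superuser-only; did not help here
DO $$
DECLARE v_rows bigint;
BEGIN
  DELETE FROM foo WHERE created_at < now() - interval '90 days';
  GET DIAGNOSTICS v_rows = ROW_COUNT;
  RAISE LOG 'deleted % rows', v_rows;  -- the CONTEXT line tags along with this
END $$;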
On Fri, Mar 15, 2019 at 9:48 AM Adrian Klaver
wrote:
> On 3/14/19 8:23 AM, Jeremy Finzel wrote:
> > I don't find a clear mention in the docs of superuser processes that are
> > exempt from counting toward superuser_reserved_connections. So I would
> > think that i
I don't find a clear mention in the docs of which superuser processes are
exempt from counting toward superuser_reserved_connections. So I would
think it possible that postgres autovac workers ought to count
toward that. Am I wrong about that? I actually have the same question
about pglogi
>
> SET lock_timeout TO '1s';
>
No, my assumption is that lock_timeout does not cover my use case here. My
point is actually that any one statement is not hitting lock_timeout, but
as a whole the transaction takes too long. For example if I set
lock_timeout to 1 second, but my migration actually
I was hoping to use idle_in_transaction_session_timeout to prevent schema
change migrations from running too long and thereby locking up the
application for an extended period even if any one statement in the
migration is very short.
I am not finding predictable behavior
using idle_in_transaction_
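The settings involved, as a sketch (values illustrative):
-- caps each individual wait for a lock, not the total time locks are held:
SET lock_timeout = '1s';
-- fires only while a transaction sits idle between statements:
SET idle_in_transaction_session_timeout = '10s';
-- neither bounds the total duration of a busy multi-statement transaction.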
>
> It doesn't write out all of RAM, only the amount in use by the
> particular backend that crashed (plus all the shared segments attached
> by that backend, including the main shared_buffers, unless you disable
> that as previously mentioned).
>
> And yes, it can take a long time to generate a la
>
> In Linux, yes. Not sure about other OSes.
>
> You can turn off the dumping of shared memory with some unusably
> unfriendly bitwise arithmetic using the "coredump_filter" file in /proc
> for the process. (It's inherited by children, so you can just set it
> once for postmaster at server start
I am trying to determine the upper size limit of a core file generated for
any given cluster. Is it feasible that it could actually be the entire
size of the system memory + shared buffers (i.e. really huge)?
I've done a little bit of testing of this myself, but want to be sure I am
clear on this
On Mon, Jan 28, 2019 at 3:32 PM Ruben Rubio Rey wrote:
> Hi Jeremy,
>
> > Why don't you consider upgrading from postgres 9.4 and with it to a
> supported version of BDR? There is nothing better you can do to keep your
> infrastructure up to date, performant, secure, and actually meet your
> mult
>
> The problem is that the version for BDR 1.0.7, which has an implementation
> for postgres 9.4, will reach end of life at the end of this year.
> Unfortunately the paid solution is out of our budget, so we currently have
> two options: find an alternative or remove the multi-region implementatio
On Tue, Jan 22, 2019 at 8:19 PM Achilleas Mantzios <
ach...@matrix.gatewaynet.com> wrote:
>
> On 22/1/19 10:18 μ.μ., Jeremy Finzel wrote:
>
> On Tue, Jan 22, 2019 at 2:16 PM Jeremy Finzel wrote:
>
>> any one of those WAL files will not be archived and recycled as long
On Tue, Jan 22, 2019 at 2:16 PM Jeremy Finzel wrote:
> any one of those WAL files will not be archived and recycled as long as it
>> no longer needed by a replication slot
>>
>
:(. s/no longer/still/. I meant to say: any one of those WAL files will not
be archived and recy
>
> wal retention in pg_wal and wal archiving are different things. The OP got
> problems because he deliberately went on to delete files in pg_wal, which
> means he used the wrong method to address a wrong situation.
>
> However, if the OP has still those WAL files archived he can use them to
> bring
>
> Note replication slots only prevent old *catalog* rows from being
> removed, not old row versions in user created tables.
>
Thank you for that clarification. I can see this is noted clearly in the
CAUTION statement here:
https://www.postgresql.org/docs/current/logicaldecoding-explanation.html
On Tue, Jan 22, 2019 at 12:52 PM Andres Freund wrote:
> Hi,
>
> On 2019-01-22 11:10:27 -0600, Jeremy Finzel wrote:
> > P.S. do heed the advice of the others and get more familiar with the docs
> > around WAL archiving.
>
> Logical replication doesn't normally in
>
> That is news to me. Can you provide a citation for this?
>
I can see the confusion in what I said. To clarify, I really meant that in
order to retrieve the data you need from really old WAL segments, you need
to keep your replication slot in a position that will hold that WAL in
place. A
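For context, how far back a slot is holding WAL can be checked like this
(a sketch):
-- restart_lsn is the oldest WAL position the slot forces the server to keep:
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;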
>
> Thanks, I see... So if I understand it correctly - since I have quite big
> partitions like ~30 GB each in one parent table and from ~1GB to ~5 GB in
> several others I presume I had to set wal_keep_segments to some really high
> number and stop our security cronjob cleaning old WAL segments (b
I am having a hard time finding out how I can easily determine at which LSN
a streamer was promoted *after* promotion. A related question is that if I
pause wal replay, I am able to see the last replayed LSN but I can't simply
immediately promote.
I want to know the state of the streamer post-pro
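A sketch of what can be inspected on the standby, using the pre-v10 function
names that fit this era (renamed to pg_last_wal_replay_lsn and friends in v10):
SELECT pg_is_xlog_replay_paused();      -- is replay paused?
SELECT pg_last_xlog_replay_location();  -- last replayed LSN
-- pg_xlog_replay_resume() must be called before promotion can proceed here.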
On Fri, Jan 4, 2019 at 4:19 PM Rich Shepard
wrote:
> On Fri, 4 Jan 2019, David G. Johnston wrote:
>
> > That would be the decision to make - does your toolkit support (or can be
> > made to support) the type and are you willing to choose a sub-optimal
> > database model because one or more applic
First of all, my local system, which has postgres_fdw installed, is on this
version:
PostgreSQL 9.6.8 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.9.3, 64-bit
Here are my server configurations (I have tweaked these with no improvement
to my issue):
- fetch_size=100
- fdw_startup_cost=1000
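For reference, those options attach to the postgres_fdw server roughly like
this (server name hypothetical):
ALTER SERVER my_remote OPTIONS (ADD fetch_size '100');
ALTER SERVER my_remote OPTIONS (ADD fdw_startup_cost '1000');
-- use OPTIONS (SET ...) to change an option that is already set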
Is there any easy way to determine what the last LSN was prior to
promotion, except for (as I understand it), using pg_xlogdump on the
.partial WAL file after promotion and find the last LSN? Am I missing
something obvious?
The question I am trying to answer has to do with replication slots and h
If I follow your use case correctly, we have written something that may fit
your scenario, and we plan to open source it rather soon.
It has several layers, but let me boil it down. First, we use an open-source
auditing system to log changes to the source tables. This becomes your
queue. A postgres backgrou
On Tue, Jul 10, 2018 at 5:13 PM Hustler DBA wrote:
> Hi Community,
> A client of mine is looking for an open source tool to deploy and promote
> PostgreSQL DDL changes through database environments as part of SDLC. What
> tools (open source) does the community members use? I normally use scripts,
On Fri, Jun 15, 2018 at 2:57 PM, Adrian Klaver
wrote:
> On 06/15/2018 12:24 PM, Jeremy Finzel wrote:
>
>> Hello!
>>
>> We often prefer to use timestamptz or "timestamp with time zone" in our
>> environment because of its actually storing "objective
Hello!
We often prefer to use timestamptz or "timestamp with time zone" in our
environment because it actually stores "objective time" with respect
to UTC. But in my own work experience, I have scarcely encountered a case
where business users and software engineers do not actually think it
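The distinction, as a sketch:
SET timezone = 'America/Chicago';
SELECT timestamptz '2018-06-15 12:00';  -- an absolute instant, normalized to UTC
SELECT timestamp '2018-06-15 12:00';    -- a wall-clock value, no zone attached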
On Fri, Jun 15, 2018 at 11:23 AM, Adrian Klaver
wrote:
> On 06/15/2018 08:26 AM, Jeremy Finzel wrote:
>
>> Several months ago we had some detailed discussions about whether to use
>> separate date columns to indicate a date range, or to use the daterange
>> data type.
Several months ago we had some detailed discussions about whether to use
separate date columns to indicate a date range, or to use the daterange
data type. We opted for the latter because this type is specifically
designed for this use case - a table that has a range of valid dates for
the data it
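A sketch of the design that was chosen (names hypothetical):
CREATE TABLE dim_entity (
    id bigint NOT NULL,
    valid_dates daterange NOT NULL
);
-- the range type gives containment/overlap operators directly, replacing
-- "valid_from <= d AND d < valid_to":
SELECT * FROM dim_entity WHERE valid_dates @> DATE '2018-06-15';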
>
> Is there any chance I could get access
> to a copy of the data? It's very hard to debug something like this
> without something that can reproduce the issue...
>
It would be very difficult for us to clean and anonymize this
data and provide a snapshot publicly. But I am very willin
On Fri, Jun 8, 2018 at 1:08 PM Andres Freund wrote:
> On 2018-06-08 12:38:03 -0500, Jeremy Finzel wrote:
> > On Tue, Jun 5, 2018 at 8:42 PM, Alexandre Arruda
> wrote:
> >
> > > Em seg, 28 de mai de 2018 às 16:44, Andres Freund
> > > escreveu:
> > >
On Tue, Jun 5, 2018 at 8:42 PM, Alexandre Arruda wrote:
> Em seg, 28 de mai de 2018 às 16:44, Andres Freund
> escreveu:
> >
> > Hi,
> >
> > I think I found the bug, and am about to post a fix for it belo
> > https://postgr.es/m/20180525203736.crkbg36muzxrj...@alap3.anarazel.de.
> >
> > Greetings
On Wed, Jun 6, 2018 at 1:13 PM, Jeremy Finzel wrote:
>
>
> On Wed, Jun 6, 2018 at 12:12 PM, Tom Lane wrote:
>
>> Jeremy Finzel writes:
>> > We have an odd scenario on one of our OLTP systems, which behaves the
>> same
>> > way on a streamer, of a 700-
On Wed, Jun 6, 2018 at 12:12 PM, Tom Lane wrote:
> Jeremy Finzel writes:
> > We have an odd scenario on one of our OLTP systems, which behaves the
> same
> > way on a streamer, of a 700-1000ms planning time for a query like this:
>
> > SELECT *
> > FROM table1
&
Hello -
We have an odd scenario on one of our OLTP systems, which behaves the same
way on a streamer, of a 700-1000ms planning time for a query like this:
SELECT *
FROM table1
WHERE source_id IN (SELECT id FROM table2 WHERE customer_id = $1);
The actual execution time is sub-ms.
We initially t
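To separate the two costs, EXPLAIN ANALYZE reports planning and execution
time independently (a sketch; a literal stands in for $1):
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM table1
WHERE source_id IN (SELECT id FROM table2 WHERE customer_id = 42);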
>
> BTW is it just a coincidence or are all the affected tables pg_authid?
> Maybe the problem is shared relations ..? Maybe the fact that they have
> separate relfrozenxid (!?) in different databases?
>
> --
> Álvaro Herrera                https://www.2ndQuadrant.com/
> PostgreSQL Development, 24
On Tue, May 22, 2018 at 2:42 PM, Maxim Boguk wrote:
>
>
> On Tue, May 22, 2018 at 10:30 PM, Andres Freund
> wrote:
>
>> Hi,
>>
>> On 2018-05-22 22:18:15 +0300, Maxim Boguk wrote:
>> > On Tue, May 22, 2018 at 9:47 PM, Andres Freund
>> wrote:
>> > > > select relfrozenxid from pg_class where reln
On Sat, Apr 28, 2018 at 12:01 PM Pavel Stehule
wrote:
> Hi
>
> 2018-04-28 18:52 GMT+02:00 Jeremy Finzel :
>
>> It appears that neither pg_get_function_arguments
>> nor pg_get_function_identity_arguments could be used for this. I want to
>> get function argument
It appears that neither pg_get_function_arguments
nor pg_get_function_identity_arguments could be used for this. I want to
get function argument data types from the catalog by ordinal position,
without the argument name.
For example, I want the same information for these 2 functions:
foo(p_1 int
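A sketch of one way to get this straight from pg_proc, ignoring argument
names (covers the IN arguments recorded in proargtypes):
SELECT p.oid::regprocedure AS function,
       a.ord AS position,
       format_type(a.typid, NULL) AS data_type
FROM pg_proc p
CROSS JOIN LATERAL unnest(p.proargtypes::oid[])
     WITH ORDINALITY AS a(typid, ord)
WHERE p.proname = 'foo';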
On Wed, Apr 4, 2018 at 3:20 AM Artur Formella wrote:
> Hello!
> We have a lot of big CTE (~40 statements, ~1000 lines) for very dynamic
> OLTP content and avg response time 50-300ms. Our setup has 96 threads
> (Intel Xeon Gold 6128), 256 GB RAM and 12 SSD (3 tablespaces). DB size <
> RAM.
> Simpl
On Thu, Mar 22, 2018 at 3:20 PM, Peter Geoghegan wrote:
> On Thu, Mar 22, 2018 at 12:27 PM, Jeremy Finzel wrote:
> > Thank you for the recommendation. I ran both amcheck functions on all 4
> > indexes of those 2 tables with heapallindexed = true, but no issues were
> >
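For reference, the calls in question look roughly like this (index name
hypothetical; the heapallindexed argument comes from the newer amcheck
interface, available to pre-v11 servers via the externally maintained
amcheck_next):
CREATE EXTENSION IF NOT EXISTS amcheck;
SELECT bt_index_check('my_index'::regclass, true);         -- heapallindexed
SELECT bt_index_parent_check('my_index'::regclass, true);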
On Wed, Mar 21, 2018 at 4:29 PM, Peter Geoghegan wrote:
> On Wed, Mar 21, 2018 at 1:38 PM, Jeremy Finzel wrote:
> > A server restart and upgrade to 9.5.12 (at the same time), as expected,
> made
> > the issue go away. Still doesn't give us any answers as to what
> h
On Tue, Mar 20, 2018 at 11:19 AM, Jeremy Finzel wrote:
>
>
> On Mon, Mar 19, 2018 at 3:55 PM, Jeremy Finzel wrote:
>
>>
>>
>> On Mon, Mar 19, 2018 at 3:53 PM, Peter Geoghegan wrote:
>>
>>> On Mon, Mar 19, 2018 at 1:01 PM, Jeremy Finzel
>>
On Mon, Mar 19, 2018 at 3:55 PM, Jeremy Finzel wrote:
>
>
> On Mon, Mar 19, 2018 at 3:53 PM, Peter Geoghegan wrote:
>
>> On Mon, Mar 19, 2018 at 1:01 PM, Jeremy Finzel wrote:
>> > SELECT heap_page_items(get_raw_page('pg_authid', 7));
>>
>> C
On Mon, Mar 19, 2018 at 4:12 PM, Peter Geoghegan wrote:
> On Mon, Mar 19, 2018 at 1:55 PM, Jeremy Finzel wrote:
> > @Peter :
> >
> > staging=# SELECT * FROM page_header(get_raw_page('pg_authid', 7));
> > lsn | checksum | flags | lower | upp
On Mon, Mar 19, 2018 at 3:53 PM, Peter Geoghegan wrote:
> On Mon, Mar 19, 2018 at 1:01 PM, Jeremy Finzel wrote:
> > SELECT heap_page_items(get_raw_page('pg_authid', 7));
>
> Can you post this?
>
> SELECT * FROM page_header(get_raw_page('pg_authid'
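Both functions come from the pageinspect extension; the complete forms being
exchanged above are:
CREATE EXTENSION IF NOT EXISTS pageinspect;
SELECT * FROM page_header(get_raw_page('pg_authid', 7));
SELECT * FROM heap_page_items(get_raw_page('pg_authid', 7));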
On Mon, Mar 19, 2018 at 3:01 PM, Jeremy Finzel wrote:
>
>
> On Mon, Mar 19, 2018 at 2:56 PM, Andres Freund wrote:
>
>> Hi,
>>
>> On 2018-03-19 14:53:58 -0500, Jeremy Finzel wrote:
>> > FWIW, if I remove the last filter, I get these rows and I believe
On Mon, Mar 19, 2018 at 2:56 PM, Andres Freund wrote:
> Hi,
>
> On 2018-03-19 14:53:58 -0500, Jeremy Finzel wrote:
> > FWIW, if I remove the last filter, I get these rows and I believe row
> 7/57/
> > 2906288382 is the one generating error:
>
> Oh, yea, that makes
On Mon, Mar 19, 2018 at 2:41 PM, Andres Freund wrote:
> On 2018-03-19 14:37:24 -0500, Jeremy Finzel wrote:
> > We upgraded to 9.5.5, and today we are running 9.5.11. And actually we
> > upgraded from 9.3, not 9.4. We are still trying to figure out which
> point
> > r
On Mon, Mar 19, 2018 at 1:17 PM, Andres Freund wrote:
> Hi Jeremy, Alvaro,
>
> On 2018-03-19 13:00:13 -0500, Jeremy Finzel wrote:
> > On Mon, Mar 19, 2018 at 12:46 PM, Alvaro Herrera <
> alvhe...@alvh.no-ip.org>
> > wrote:
> >
> > > Jeremy Finzel wro
On Mon, Mar 19, 2018 at 12:46 PM, Alvaro Herrera
wrote:
> Jeremy Finzel wrote:
> > Getting some concerning errors in one of our databases that is on 9.5.11,
> > on autovacuum from template0 database pg_authid and pg_auth_members. I
> > only saw some notes on the list about
Maximum columns in an index: 32
Maximum size of a TOAST chunk: 1996
Size of a large-object chunk: 2048
Date/time type storage: 64-bit integers
Float4 argument passing: by value
Float8 argument passing: by value
Data page checksum versio
Getting some concerning errors in one of our databases that is on 9.5.11,
on autovacuum of pg_authid and pg_auth_members in the template0 database. I
only saw some notes on the list about this error related to materialized
views. FWIW, we did use pg_upgrade to upgrade this database from 9.4 to
9.5. H
On Wed, Mar 14, 2018 at 1:29 PM Paul Jungwirth
wrote:
> On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
> > Hello! From all that I can tell, it is not possible to use a btree_gist
> > index as a primary key. If so, why not? I have a table with this gist
> > index which
On Wed, Mar 14, 2018 at 8:33 AM, Adrian Klaver
wrote:
> On 03/14/2018 06:19 AM, Jeremy Finzel wrote:
>
>> Hello! From all that I can tell, it is not possible to use a btree_gist
>> index as a primary key. If so, why not? I have a table with this gist
>>
>
> http
Hello! From all that I can tell, it is not possible to use a btree_gist
index as a primary key. If so, why not? I have a table with this gist
index which truly ought to be its primary key. as_of_date is of the
daterange type:
EXCLUDE USING gist (id WITH =, as_of_date WITH &&)
Any direction here w
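A primary key must be backed by a unique btree index, which is why the gist
exclusion constraint cannot serve as one; the usual pattern is to keep both,
as a sketch (names hypothetical):
ALTER TABLE my_table
    ADD CONSTRAINT my_table_no_overlap
    EXCLUDE USING gist (id WITH =, as_of_date WITH &&);
-- the exclusion constraint already implies (id, as_of_date) is unique, but
-- a PK still needs its own btree:
ALTER TABLE my_table ADD PRIMARY KEY (id, as_of_date);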
On Fri, Mar 9, 2018 at 10:43 AM, Jeremy Finzel wrote:
> Hello -
>
> Here is our cluster setup:
>
> cluster_a 9.5.11 Ubuntu 16.04.4 LTS
> --> cluster_b (streamer) 9.5.11 Ubuntu 16.04.4 LTS
> --> cluster_c (streamer) 9.5.11 Ubuntu 16.04.4 LTS
>
> Very recentl
Hello -
Here is our cluster setup:
cluster_a 9.5.11 Ubuntu 16.04.4 LTS
--> cluster_b (streamer) 9.5.11 Ubuntu 16.04.4 LTS
--> cluster_c (streamer) 9.5.11 Ubuntu 16.04.4 LTS
Very recently, we started seeing these errors when running a query on a
specific table on the streamer:
2018-03-09 08:28
We want to enforce a policy that every table in a particular schema has a
primary key, partly just to protect those who might forget. This can't
be done with event triggers as far as I can see, because it is quite
legitimate to do:
BEGIN;
CREATE TABLE foo (id int);
ALTER TABLE foo ADD PRIMARY
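One fallback is a periodic catalog check rather than a DDL-time hook; a
sketch (schema name hypothetical):
-- tables in schema "myschema" lacking a primary key:
SELECT c.oid::regclass AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'myschema'
  AND c.relkind = 'r'
  AND NOT EXISTS (SELECT 1 FROM pg_constraint p
                  WHERE p.conrelid = c.oid AND p.contype = 'p');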
On Tue, Feb 13, 2018 at 1:46 PM greigwise wrote:
> No need! I figured it out.
>
> Had to put this "synchronize_data := false" on the create_subscription
> call.
> Weird that there seem to be redundant parameters for this; one on the
> replication set add and one on the create subscription. Mayb
On Tue, Feb 6, 2018 at 9:48 PM, Jeremy Finzel wrote:
> On Tue, Feb 6, 2018 at 6:02 PM Michael Paquier
> wrote:
>
>> On Tue, Feb 06, 2018 at 01:36:04AM -0600, Jeremy Finzel wrote:
>> > Here is the basic structure - is the gist index significant?:
>> &g
On Tue, Feb 6, 2018 at 6:02 PM Michael Paquier
wrote:
> On Tue, Feb 06, 2018 at 01:36:04AM -0600, Jeremy Finzel wrote:
> > Here is the basic structure - is the gist index significant?:
> >
> > CREATE UNLOGGED TABLE foo (
> > as_of_date daterange NOT NULL,
254\0X\1\0 \4 \0\0\0\0
\237\300\1H\236\260\1"..., 8192) = 8192
read(32, "\347\0\0\0\210\344\6\2226%\0\0\254\0008\1\0 \4
\0\0\0\\237\240\1X\236\260\1"..., 8192) = 8192
read(32, "\347\0\0\0\330\7\7\222\1\360\0\0\254\0h\1\0 \4
\0\0\0\\237\240\1`\236\240\1"..., 8192)
We are running:
PostgreSQL 10.1 on x86_64-pc-linux-gnu, compiled by gcc (Debian 4.7.2-5)
4.7.2, 64-bit
The table I am setting to logged is 32GB with indexes. I see it writing
WAL files like crazy but after about an hour and a half, it has written out
some 2500 WAL segments, then it just sits and
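The operation in question is of this form (table name hypothetical);
converting an unlogged table to logged must WAL-log the entire table, which
accounts for the write volume:
ALTER TABLE foo SET LOGGED;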
If I want to mimic a logged statement from an application that uses
PARSE/BIND/EXECUTE, is there any way to reproduce this behavior in psql, for
example to troubleshoot execution plans, other than writing a named
prepared statement or manually substituting the parameters?
It would be nice to take the
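The named-prepared-statement workaround mentioned above looks like this in
psql (a sketch with hypothetical names):
PREPARE q (int) AS
  SELECT * FROM table1 WHERE source_id = $1;
EXPLAIN ANALYZE EXECUTE q (42);
DEALLOCATE q;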
On Tue, Dec 26, 2017 at 11:11 AM, Alvaro Herrera
wrote:
> Jeremy Finzel wrote:
> > On Tue, Dec 26, 2017 at 10:28 AM, Alvaro Herrera <
> alvhe...@alvh.no-ip.org>
> > wrote:
> >
> > > Jeremy Finzel wrote:
> > > > >
> > >
On Tue, Dec 26, 2017 at 10:28 AM, Alvaro Herrera
wrote:
> Jeremy Finzel wrote:
> > >
> > >
> > >> Each index build needs to wait for all other transactions
> > >> (Including the ones used by the other index build) to finish.
> > >> So I d
>
> The only thing I can think of is that there's a foreign key from foo to
> bar(id), but the create index on bar shouldn't prevent a share lock on
> foo, even if such a restriction exists.
>
> --
> Martín Marqués                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, T
>
>
>> Each index build needs to wait for all other transactions (Including the
> ones used by the other index build) to finish. So I don't think a deadlock
> here is unexpected.
>
> Cheers,
>
> Jeff
>
Does that mean I should never build more than one concurrent index at a
time within the entire
I am attempting to build several indexes in parallel, guaranteeing that I
never build one on the same table twice. I understand I can't build two on
the same table at once or I will get a deadlock. However, I am also
getting a deadlock when doing several in parallel on different tables.
Here is
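For context, this is presumably CREATE INDEX CONCURRENTLY in separate
sessions (an assumption consistent with the reply quoted above, since only
concurrent builds wait on other transactions):
-- session 1:
CREATE INDEX CONCURRENTLY idx_a ON table_a (col);
-- session 2, at the same time:
CREATE INDEX CONCURRENTLY idx_b ON table_b (col);
-- each build waits for all transactions holding snapshots, including the
-- other build, which is where the deadlock arises.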
Posting here because it seems the pgsql-pkg-debian list is not very
responsive.
This recent package appears to be broken for pgq3, used for Skytools
replication:
https://www.postgresql.org/message-id/E1eR2Lv-0003h7-FC@atalia.postgresql.org
There is no pgq_node.control file (postgresql-9.6-pgq3),
It's hard to follow how the two videos relate, because you don't run the
same SQL in both places. You first update where i = 2 in Postgres and i = 1
in Oracle.
On Thu, Dec 21, 2017 at 4:37 AM, Hans Schou wrote:
> Hi
>
> FYI - if it has any interest
>
> During my preparation for describing what happe
>
> Maybe the nature of the corruption caused is different. It took months
> of running large databases on production for corruption to become
> apparent from multixact bugs, for example. Or maybe because the
> relfrozenxid is fixed by other activity in the system, any bugs are
> masked -- but th
>
> It's been around, but is it trusted? I for one do not trust it. See
> for example
> https://www.postgresql.org/message-id/CA+TgmoaWoU+BpBG4nwz1Lgkmdsbl6_hm9r8jnqte-5vrdls...@mail.gmail.com
>
>
Needs some discussion. Has anyone actually reported corruption related to
this? I don't doubt t
On Mon, Dec 18, 2017 at 1:03 PM, Rob Nikander
wrote:
> Hi,
>
> I've got a large table from which I'd like to completely reclaim space. I
> read the docs and it sounds like I can’t run `vacuum full`, because this
> table is accessed constantly and can’t have downtime. Assuming that’s true,
> what
On Sun, Dec 17, 2017 at 3:31 PM, Jeremy Finzel wrote:
> Perhaps you'll find the version on the wiki useful as reference, it's an
>> older version of the dependencies, and can be found at
>> https://wiki.postgresql.org/wiki/Pg_depend_display
>>
>
> This is
>
> Perhaps you'll find the version on the wiki useful as reference, it's an
> older version of the dependencies, and can be found at
> https://wiki.postgresql.org/wiki/Pg_depend_display
>
This is where I started. However, I noticed that it doesn't do everything I
need. As I said in my original post, as
foo9 composite type public.foo9 1259 24187 11
view foo5 view public.foo5 2618 24180 foo5 8
function foo() function public.foo() 1255 24181 6
materialized view foo8 materialized view public.foo8 2618 24186 foo8 4
view foo4 view public.foo4 2618 24176 foo4 3
view foo3 view public.foo3 2618 24172 foo3 2
If I dr
On Wed, Dec 13, 2017 at 9:54 AM, Melvin Davidson
wrote:
>
>
> On Wed, Dec 13, 2017 at 10:20 AM, Tom Lane wrote:
>
>> Jeremy Finzel writes:
>> > It looks like the very useful dependency tree shown when using DROP
>> CASCADE
>> > is written in C in dep
It looks like the very useful dependency tree shown when using DROP CASCADE
is written in C in dependency.c, but there is no way to leverage this
within Postgres to actually query an object's dependencies. Can we get
this somehow as a Postgres client in SQL?
One of the problems we face over and o
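Meanwhile, direct dependencies can be pulled from pg_depend, though without
the recursive, human-readable tree that dependency.c builds; a sketch:
-- objects that directly depend on table "foo" (name hypothetical):
SELECT d.classid::regclass AS dependent_catalog,
       d.objid,
       d.deptype
FROM pg_depend d
WHERE d.refobjid = 'foo'::regclass;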