On Sun, Feb 24, 2019 at 10:06:10PM -0500, Gunther wrote:
> The index isn't required at all if all my pending jobs are in a partition of
> only pending jobs. In that case the plan can just be a sequential scan.
..
> because an index really isn't required. The actual pending partition will
> always r
Wow, yes, partition instead of index, that is interesting. Thanks Corey
and Justin.
The index isn't required at all if all my pending jobs are in a
partition of only pending jobs. In that case the plan can just be a
sequential scan.
And Jeff Janes, sorry, I failed to show the LIMIT 1 clause
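For illustration, a minimal sketch of the partition-instead-of-index idea, reusing the column names from the dequeue query quoted further down; the table layout and partition names are assumptions, not Gunther's actual schema:

    CREATE TABLE Queue (
        jobId   bigint  NOT NULL,
        action  text    NOT NULL,
        pending boolean NOT NULL DEFAULT true
    ) PARTITION BY LIST (pending);

    -- Pending jobs live in a small partition of their own; finished jobs go
    -- to the other one. No primary key here, because in 11.x a unique
    -- constraint on a partitioned table must include the partition key.
    CREATE TABLE Queue_pending PARTITION OF Queue FOR VALUES IN (true);
    CREATE TABLE Queue_done    PARTITION OF Queue FOR VALUES IN (false);

With partition pruning, a query with WHERE pending only touches Queue_pending, so a sequential scan of that small partition replaces the index, and an UPDATE that flips pending to false moves the row into Queue_done (PostgreSQL 11 supports that cross-partition row movement).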
On Sun, Feb 24, 2019 at 1:02 PM Gunther wrote:
> Thank you all for responding so far.
>
> David Rowley and Justin Pryzby suggested things about autovacuum. But I
> don't think autovacuum has any helpful role here. I am explicitly doing a
> vacuum on that table. And it doesn't help at all. Almost not at all.
On Sat, Feb 23, 2019 at 4:06 PM Gunther wrote:
> the dequeue operation is essentially this:
>
> BEGIN
>
> SELECT jobId, action
> FROM Queue
> WHERE pending
> FOR UPDATE SKIP LOCKED
>
>
There is no LIMIT shown. Wouldn't the first thread to start up just lock
all the rows and everyone else would then find nothing left to claim?
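Gunther's follow-up above says the real query does have LIMIT 1; a sketch of such a dequeue transaction, with the job-completion step added as an assumption (it is not shown in the thread):

    BEGIN;

    SELECT jobId, action
    FROM Queue
    WHERE pending
    LIMIT 1
    FOR UPDATE SKIP LOCKED;

    -- ... the worker processes the returned job here, then marks it done so
    -- it leaves the pending set (this step is assumed, not quoted above):
    -- UPDATE Queue SET pending = false WHERE jobId = <returned jobId>;

    COMMIT;

With SKIP LOCKED but no LIMIT, the first worker locks every pending row and the other workers skip them all and come back empty, which is exactly the concern raised here.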
On Sun, Feb 24, 2019 at 5:43 PM Justin Pryzby wrote:
> On Sun, Feb 24, 2019 at 04:34:34PM -0500, Corey Huinker wrote:
> > I think your solution may be something like this:
> > 1. Create a new table, same columns, partitioned on the pending column.
> > 2. Rename your existing queue table old_queue
On Sun, Feb 24, 2019 at 04:34:34PM -0500, Corey Huinker wrote:
> I think your solution may be something like this:
> 1. Create a new table, same columns, partitioned on the pending column.
> 2. Rename your existing queue table old_queue to the partitioned table as a
> default partition.
> 3. Rename
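A hedged reconstruction of those steps; the quoted list is cut off, so the temporary table name, the column definitions, and the final rename are guesses at the intent rather than Corey's exact commands:

    -- 1. New table, same columns, partitioned on pending (columns assumed).
    CREATE TABLE new_queue (
        jobId   bigint  NOT NULL,
        action  text    NOT NULL,
        pending boolean NOT NULL DEFAULT true
    ) PARTITION BY LIST (pending);

    -- 2. The existing table becomes the default partition of the new one.
    ALTER TABLE Queue RENAME TO old_queue;
    ALTER TABLE new_queue ATTACH PARTITION old_queue DEFAULT;

    -- 3. Presumably: give the partitioned table the original name.
    ALTER TABLE new_queue RENAME TO Queue;

One caveat: a dedicated pending partition can only be added after the pending rows have been moved out of the default partition, because PostgreSQL refuses to create a partition whose values are already present in the default.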
>
> Also, the REINDEX command always fails with a deadlock because there is a
> row lock and a complete table lock involved.
>
> I consider this ultimately a bug, or at the very least there is room for
> improvement. And I am on version 11.1.
> regards,
> -Gunther
>
REINDEX doesn't work concurrently.
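For context (not from the thread): REINDEX CONCURRENTLY only arrived in PostgreSQL 12, so on 11.1 the usual way to rebuild an index without the lock fight is to build a replacement concurrently and swap it in. The index name and definition below are made up for illustration:

    -- None of these commands can run inside a transaction block.
    CREATE INDEX CONCURRENTLY queue_pending_idx_new ON Queue (jobId) WHERE pending;
    DROP INDEX CONCURRENTLY queue_pending_idx;
    ALTER INDEX queue_pending_idx_new RENAME TO queue_pending_idx;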
On Sun, Feb 24, 2019 at 10:02 AM Gunther wrote:
> David Rowley and Justin Pryzby suggested things about autovacuum. But I
> don't think autovacuum has any helpful role here.
My suspicion is that this has something to do with the behavior of
B-Tree indexes with lots of duplicates. See also:
htt
On Sun, Feb 24, 2019 at 12:45:34PM -0500, Gunther wrote:
> What I am most puzzled by is that no matter how long I wait, the DROP INDEX
> CONCURRENTLY never completes. Why is that?
https://www.postgresql.org/docs/11/sql-dropindex.html
CONCURRENTLY
[...] With this option, the command instead waits until conflicting transactions have completed.
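In other words, any long-running or idle-in-transaction session that could still use the index holds DROP INDEX CONCURRENTLY back indefinitely. A query like the following (an illustration, not from the thread) shows the open transactions it may be waiting out:

    -- Sessions with an open transaction, oldest first; "idle in transaction"
    -- entries are the usual culprits.
    SELECT pid, state, xact_start, wait_event_type, left(query, 60) AS query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;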
Thank you all for responding so far.
David Rowley and Justin Pryzby suggested things about autovacuum. But I
don't think autovacuum has any helpful role here. I am explicitly doing
a vacuum on that table. And it doesn't help at all. Almost not at all.
I want to believe that
VACUUM FREEZE Queue
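Not from the thread, but one way to check whether that manual vacuum is actually allowed to do anything: VACUUM VERBOSE reports the dead row versions it cannot remove yet because some transaction can still see them.

    -- FREEZE mirrors what Gunther says he is running; VERBOSE is added here
    -- to surface the counts of dead row versions that cannot be removed yet.
    VACUUM (FREEZE, VERBOSE) Queue;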