Re: Encoding/collation question

2019-12-18 Thread Karsten Hilbert
On Thu, Dec 12, 2019 at 08:35:53AM -0500, Tom Lane wrote:

> C collation basically devolves to strcmp/memcmp, which are as standard
> and well-defined as can be.  If you're happy with the way it sorts
> things then there's no reason not to use it.

So that's the collation to use when "technical" sorting is
required (say, when uniqueness does not depend on the notion
of culturally equivalent characters).
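A minimal sketch of that, with a made-up table name, pinning a column (or a
single comparison) to byte-wise ordering via COLLATE "C":

  -- uniqueness and ordering devolve to memcmp, immune to libc/ICU changes
  CREATE TABLE account (
      login text COLLATE "C" UNIQUE
  );

  -- or apply it to just one sort, leaving the column's collation alone
  SELECT login FROM account ORDER BY login COLLATE "C";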

> It's actually all the *other* collations where you should worry about
> their behavior being a moving target :-(.

But then that is to be expected.

Thanks,
Karsten
--
GPG  40BE 5B0E C98E 1713 AFA6  5BC0 3BEA AC80 7D4F C89B




How to prevent POSTGRES killing linux system from accepting too much inserts?

2019-12-18 Thread James(王旭)
Hello, I encountered this kernel message, and I cannot log in to the Linux
system anymore:

Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1
Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120 seconds.
Dec 17 23:02:30 hq-pg kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Dec 17 23:02:30 hq-pg kernel: sync            D 965ebabd1040     0  6573   6572 0x0080
Dec 17 23:02:30 hq-pg kernel: Call Trace:
Dec 17 23:02:30 hq-pg kernel: [] ? generic_write_sync+0x70/0x70

RE: Row locks, SKIP LOCKED, and transactions

2019-12-18 Thread Steven Winfield
>> * I observe this even if I crank up the transaction isolation level to 
>> repeatable read and serializable.
>> 
>> 
>> I'm wondering if row locks are not obeying the same transactional semantics 
>> as row data, 


>Gotta believe it is this:
>
>https://www.postgresql.org/docs/11/transaction-iso.html#XACT-READ-COMMITTED
>
>"UPDATE, DELETE, SELECT FOR UPDATE, and SELECT FOR SHARE commands ..."
>
>If I read correctly, transactions can see the effects of other 
>transactions that commit during their lifetime.

Thanks. I had a look at those docs when I first encountered the issue (if it 
can be called that), which prompted me to try repeatable read and serializable 
isolation levels, but to no avail. I couldn't find anything specifically 
mentioning the visibility of row locks at different isolation levels.





RE: Row locks, SKIP LOCKED, and transactions

2019-12-18 Thread Steven Winfield
> Huh.  SERIALIZABLE shouldn't allow two transactions to see no result row
> for a given ID and then insert a result row for that ID.  One of those
> transactions should have to roll back, because otherwise it'd be
> incompatible with both serial orderings of the two transactions.

Sorry for the misunderstanding - I wasn't suggesting that.
Even at the serializable level, W2 can see a row that is unlocked by W1's 
commit despite W2's snapshot being taken before W1 commits.
Carrying on my example, W2 would indeed fail to insert a result(id=1) row.

> Conceptually, the thing you really need to lock for this to work is the
> result row that isn't there yet, so that some overlapping transaction
> doesn't try to lock the same absent thing.  Unfortunately, our system for
> locking things that aren't there isn't there either.
> Some articles on serializability talk about "materialising the conflict",
> which means locking some other surrogate thing that "covers" a gap you are
> interested in.  You might think the job row would do the trick, but since
> we don't recheck the condition (that is, recheck that there is no
> corresponding result because you don't update the job row), no cigar. 

I like the concept of "materialising the conflict", that’s a useful way of 
thinking about it - thanks.

> You could also use plain old pg_try_advisory_xact_lock(id), because it just
> locks integers, and they're always there.

Yeah, I tried this, and might give it another go. A naïve attempt failed for a 
similar reason.
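For reference, a rough sketch of the advisory-lock variant being discussed;
"job" and "result" are the toy tables from this thread and the exact query
shape is only illustrative:

  -- try to claim one unprocessed job id; the lock is released at commit/rollback
  SELECT j.id
    FROM job j
   WHERE NOT EXISTS (SELECT 1 FROM result r WHERE r.id = j.id)
     AND pg_try_advisory_xact_lock(j.id)
   LIMIT 1;
  -- caveat: the planner is free to evaluate pg_try_advisory_xact_lock() for rows
  -- that never make it into the final result, so extra ids can end up locked,
  -- which may be the sort of surprise the naive attempt mentioned above ran into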

> 
> SERIALIZABLE deals with that type of magic internally (it locks gaps in
> key ranges by predicate-locking a physical btree or hash page that you'd
> need to write on to insert a row with a matching key, which is how it
> discovers a conflict between one transaction that went looking for key=42
> but didn't find it and another that later writes key=42), but, as
> mentioned, SERIALIZABLE doesn't really allow concurrency with this
> workload, and you specified that you wanted concurrency with SKIP LOCKED
> (but I think you'd have the same problem without it; SKIP LOCKED just gets
> you the wrong answer faster).
> 
> There are various ways you could deal with this, but I'd probably go for a
> simple scheme where you only have to consult a single row to know if you
> can claim it.  You could still put the results into a separate table, but
> use job.state to find work, and set it to DONE when you insert the result.
> It may also be possible to add no new columns but do a dummy update to the
> job row to get the join qual rechecked, but I'm not sure if that'd work.
> Another reason to add a state column to the job table is so that you can
> put a conditional index on it so you can find jobs to be done very
> quickly, if you're not planning to remove the ones that are done.

Thanks. I rejected the idea of doing a dummy update to the locked row as I 
wanted to avoid too much extra WAL - the real table originally had quite a few 
more columns than the toy example, but it's much slimmer now so this could be a 
viable option.
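A rough sketch of that single-row claim scheme, assuming a text state column
and a made-up result payload column (the partial index is the "conditional
index" mentioned above):

  -- one-time setup
  ALTER TABLE job ADD COLUMN state text NOT NULL DEFAULT 'PENDING';
  CREATE INDEX job_pending_idx ON job (id) WHERE state = 'PENDING';

  -- per claim, in each worker
  BEGIN;
  SELECT id FROM job
   WHERE state = 'PENDING'
   LIMIT 1
   FOR UPDATE SKIP LOCKED;               -- remember the returned id as :job_id
  -- ... compute the result for :job_id ...
  INSERT INTO result (id, payload) VALUES (:job_id, 'the computed result');
  UPDATE job SET state = 'DONE' WHERE id = :job_id;
  COMMIT;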




Re: REINDEX VERBOSE unknown option

2019-12-18 Thread Josef Šimánek
My patch was partially merged at
https://git.postgresql.org/gitweb/?p=postgresql.git;a=commitdiff;h=52dcfda48778d16683c64ca4372299a099a15b96
.

Thanks to everyone involved in this discussion.
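For anyone skimming the thread, the working syntax that emerged below puts
VERBOSE in a parenthesized option list, for example:

  REINDEX (VERBOSE) TABLE sales;   -- rather than: REINDEX VERBOSE TABLE sales;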

On Fri, 13 Dec 2019 at 2:11, Josef Šimánek wrote:

> I was thinking about this problem and possible documentation change again
> and I have prepared documentation change to solve this. See attached
> screenshot and HTML.
>
> Code change can be found at https://github.com/simi/postgres/pull/3 (diff
> - https://github.com/simi/postgres/pull/3.diff, patch -
> https://github.com/simi/postgres/pull/3.patch).
>
> This change is based on an idea of Pavel Stěhule, thanks a lot for that!
> Similar approach was used recently in
> https://www.postgresql.org/docs/devel/sql-dropdatabase.html.
>
> On Sat, 16 Nov 2019 at 18:43, Josef Šimánek wrote:
>
>> Ahh, I just tried to do the same with the reindexdb CLI tool, and the
>> actual syntax is REINDEX (VERBOSE) TABLE sales; sorry for the unnecessary
>> question. Anyway, maybe we can add this to the documentation as an example.
>> I can prepare a patch for this if that is welcome.
>>
>> On Sat, 16 Nov 2019 at 18:40, Josef Šimánek wrote:
>>
>>> Hello,
>>>
>>> according to https://www.postgresql.org/docs/11/sql-reindex.html, the VERBOSE
>>> option is valid for the REINDEX command on an 11.3 PostgreSQL server. Anyway,
>>> I'm getting an error when using the VERBOSE option.
>>>
>>> project_production=# REINDEX VERBOSE TABLE sales;
>>> ERROR:  syntax error at or near "VERBOSE"
>>> LINE 1: REINDEX VERBOSE TABLE sales;
>>>
>>> Time: 0.235 ms
>>>
>>> I'm wondering if I'm doing anything wrong or whether the documentation is
>>> wrong. Any ideas?
>>>
>>


Streaming replication fails after some time with 'incorrect resource manager data checksum'

2019-12-18 Thread Julian Backes
Hello all!

I already posted in the slack chat #help channel but got no answer :-(

We have a read only / hot standby system and are facing the same problem as
described in
https://stackoverflow.com/questions/35752389/incorrect-resource-manager-data-checksum-in-record-at-2-xyz-terminating-walrec
(the post is already 3 years old).

That means after some time (sometimes two days, sometimes half a day),
postgres starts logging 'incorrect resource manager data checksum in record
at xyz' and shuts down wal receiver (and stops streaming replication).

Master and slave are running on Ubuntu 18.04, Postgres 12.1, ext4 file
system (no zfs or btrfs, just lvm on the master); we only use ecc memory
(192 gb on the master and 256 gb on the slave) and nvme ssds on both
servers using a soft raid 1.
When the error occurs, a restart of postgres on the slave "fixes" the
problem.

Any ideas what we can do to prevent/investigate the problem?

Kind regards
Julian


Re: Row locks, SKIP LOCKED, and transactions

2019-12-18 Thread Tom Lane
Steven Winfield  writes:
>> There are various ways you could deal with this, but I'd probably go for a
>> simple scheme where you only have to consult a single row to know if you
>> can claim it.  You could still put the results into a separate table, but
>> use job.state to find work, and set it to DONE when you insert the result.
>> It may also be possible to add no new columns but do a dummy update to the
>> job row to get the join qual rechecked, but I'm not sure if that'd work.
>> Another reason to add a state column to the job table is so that you can
>> put a conditional index on it so you can find jobs to be done very
>> quickly, if you're not planning to remove the ones that are done.

> Thanks. I rejected the idea of doing a dummy update to the locked row as I 
> wanted to avoid too much extra WAL - the real table originally had quite a 
> few more columns than the toy example, but it's much slimmer now so this 
> could be a viable option.

Yeah ... the fundamental reason why this isn't working for you is that
the FOR UPDATE will only lock/check conflicts in the "job" table.
You could add a FOR UPDATE in the sub-select to lock the "result" table,
but that will still only lock rows it read, not rows it didn't read
because they weren't there yet :-(.  Updating the state of the job row
to show that it's claimed is much the most reliable way to fix this.

(Or you could use serializable mode, but that feels like using a hammer
to swat a fly.)

regards, tom lane




Re: Tuple concurrency issue in large objects

2019-12-18 Thread Daniel Verite
Shalini wrote:

> Could you also please state the reason why it is happening in the case
> of large objects? Concurrent transactions are handled very well for
> other data types, but the same is not happening for lobs. Is it because
> the former are stored in a TOAST table and there is no support for
> concurrent txns in the pg_largeobject table?

Keeping in mind that large objects are not a datatype, but rather a
functionality that is built on top of the bytea and oid datatypes plus
a set of functions, I wouldn't say that concurrent writes would be
better handled if you had a table: document(id serial, contents bytea)
with "contents" being indeed toastable.

To illustrate with a basic example: transactions Tx1 and Tx2
want to update the contents of the same document concurrently,
with this order of execution:

Tx1: begin 
Tx1: update document set contents=... where id=...
Tx2: begin
Tx2: update the same document (gets blocked)
Tx1: commit
Tx2: commit

If using the read committed isolation level, Tx2 will be put to wait
until Tx1 commits, and then the update by Tx1 will be overwritten by
Tx2. That's a well-known anomaly called a "lost update", and
probably not what you want.

If using a better isolation level (repeatable read or serializable),
the update by Tx2 will be rejected with a serialization failure,
which, to me, seems the moral equivalent of the "Tuple concurrently
updated" error you're reporting with large objects.
When this occurs, your application can fetch the latest value in a new
transaction and see how it can apply its change to the new value,
unless another conflict arises and so on.
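Sketching the repeatable-read case in psql terms (the document table is the
one from the example above; the error text is what PostgreSQL reports in this
situation):

  -- Tx2, running concurrently with Tx1
  BEGIN ISOLATION LEVEL REPEATABLE READ;
  UPDATE document SET contents = 'new contents'::bytea WHERE id = 1;
  -- blocks until Tx1 commits, then fails with:
  -- ERROR:  could not serialize access due to concurrent update
  -- Tx2 rolls back and retries on a fresh snapshot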

In short, the best the database can do in case of conflicting writes
is to inform the application. It can't know which write should be
prioritized or if the changes should be merged before being written.


Best regards,
-- 
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite




Re: Tuple concurrency issue in large objects

2019-12-18 Thread Justin
I have a question from reading through this email chain. Does the large objects
table, accessed through these functions, work like normal MVCC, where there can
be two versions of a large object in pg_largeobject? My gut says no, as
moving/copying potentially 4 TB of data would kill any IO.

I cannot find any documentation discussing how these functions actually work
with respect to transaction isolation, MVCC, and snapshots.

On Wed, Dec 18, 2019 at 10:05 AM Daniel Verite 
wrote:

> Shalini wrote:
>
> > Could you also please state the reason why it is happening in the case
> > of large objects? Concurrent transactions are handled very well for
> > other data types, but the same is not happening for lobs. Is it because
> > the former are stored in a TOAST table and there is no support for
> > concurrent txns in the pg_largeobject table?
>
> Keeping in mind that large objects are not a datatype, but rather a
> functionality that is built on top of the bytea and oid datatypes plus
> a set of functions, I wouldn't say that concurrent writes would be
> better handled if you had a table: document(id serial, contents bytea)
> with "contents" being indeed toastable.
>
> To illustrate with a basic example: transactions Tx1 and Tx2
> want to update the contents of the same document concurrently,
> with this order of execution:
>
> Tx1: begin
> Tx1: update document set contents=... where id=...
> Tx2: begin
> Tx2: update the same document (gets blocked)
> Tx1: commit
> Tx2: commit
>
> If using the read committed isolation level, Tx2 will be put to wait
> until Tx1 commits, and then the update by Tx1 will be overwritten by
> Tx2. That's a well-known anomaly called a "lost update", and
> probably not what you want.
>
> If using a better isolation level (repeatable read or serializable),
> the update by Tx2 will be rejected with a serialization failure,
> which, to me, seems the moral equivalent of the "Tuple concurrently
> updated" error you're reporting with large objects.
> When this occurs, your application can fetch the latest value in a new
> transaction and see how it can apply its change to the new value,
> unless another conflict arises and so on.
>
> In short, the best the database can do in case of conflicting writes
> is to inform the application. It can't know which write should be
> prioritized or if the changes should be merged before being written.
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>
>
>


Re: Tuple concurrency issue in large objects

2019-12-18 Thread Tom Lane
Justin  writes:
> I have a question reading through this email chain.   Does Large Objects
> table using these functions work like normal MVCC where there can be two
> versions of a large object in pg_largeobject .

Yes, otherwise you could never roll back a transaction that'd modified
a large object.

> My gut says no as
> moving/copying potentially 4 TB of data would kill any IO.

Well, it's done on a per-chunk basis (normally about 2K per chunk),
so you won't do that much I/O unless you're changing all of a 4TB
object.
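The chunking is visible directly in the catalog; a hedged example (the OID is
made up, and by default only superusers may read pg_largeobject directly):

  -- one row per roughly-2KB chunk of a given large object
  SELECT loid, pageno, octet_length(data) AS chunk_bytes
    FROM pg_largeobject
   WHERE loid = 16404
   ORDER BY pageno
   LIMIT 5;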

regards, tom lane




Re: Tuple concurrency issue in large objects

2019-12-18 Thread Justin
I now see what is causing this specific issue...

The updates and row versioning happen one 2 KB chunk at a time, and that's
going to make tracking what other clients are doing a difficult task.

All the clients would have to have some means of notifying all the other
clients that an update occurred in a given chunk, which could cause a total
reload of the data if the update spilled into adjoining rows. The
notifications and re-fetching of data to keep the clients in sync are going
to make this a network-chatty app.

Maybe add a bit to the documentation stating "row versioning occurs every
X chunks".

On Wed, Dec 18, 2019 at 11:12 AM Tom Lane  wrote:

> Justin  writes:
> > I have a question reading through this email chain.   Does Large Objects
> > table using these functions work like normal MVCC where there can be two
> > versions of a large object in pg_largeobject .
>
> Yes, otherwise you could never roll back a transaction that'd modified
> a large object.
>
> > My gut says no as
> > moving/copying potentially 4 TB of data would kill any IO.
>
> Well, it's done on a per-chunk basis (normally about 2K per chunk),
> so you won't do that much I/O unless you're changing all of a 4TB
> object.
>
> regards, tom lane
>


Re: Tuple concurrency issue in large objects

2019-12-18 Thread Tom Lane
Justin  writes:
> I now see what is causing this specific issue...
> The update and row versions is happening on 2kb chunk at a time,  That's
> going to make tracking what other clients are doing a difficult task.

Yeah, it's somewhat unfortunate that the chunkiness of the underlying
data storage becomes visible to clients if they try to do concurrent
updates of the same large object.  Ideally you'd only get a concurrency
failure if you tried to overwrite the same byte(s) that somebody else
did, but as it stands, modifying nearby bytes might be enough --- or
not, if there's a chunk boundary between.

On the whole, though, it's not clear to me why concurrent updates of
sections of large objects is a good application design.  You probably
ought to rethink how you're storing your data.

regards, tom lane




Re: Tuple concurrency issue in large objects

2019-12-18 Thread Justin
I agree completely.

I do not think PostgreSQL is a good fit for Shalini based on the
conversation so far.

Tracking concurrency is going to be a killer... but I see the temptation to
use a DB for this, as the updates are ACID and thus less likely to end up
with corrupted data for one reason or another.

On Wed, Dec 18, 2019 at 12:12 PM Tom Lane  wrote:

> Justin  writes:
> > I now see what is causing this specific issue...
> > The update and row versions is happening on 2kb chunk at a time,  That's
> > going to make tracking what other clients are doing a difficult task.
>
> Yeah, it's somewhat unfortunate that the chunkiness of the underlying
> data storage becomes visible to clients if they try to do concurrent
> updates of the same large object.  Ideally you'd only get a concurrency
> failure if you tried to overwrite the same byte(s) that somebody else
> did, but as it stands, modifying nearby bytes might be enough --- or
> not, if there's a chunk boundary between.
>
> On the whole, though, it's not clear to me why concurrent updates of
> sections of large objects is a good application design.  You probably
> ought to rethink how you're storing your data.
>
> regards, tom lane
>


Re: How to prevent POSTGRES killing linux system from accepting too much inserts?

2019-12-18 Thread Steven Lembark
On Wed, 18 Dec 2019 17:53:26 +0800
"James(王旭)"  wrote:

> Hello, I encountered into this kernel message, and I cannot login into
> the Linux system anymore:
>
> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1
> Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 120 seconds.
> Dec 17 23:02:30 hq-pg kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Dec 17 23:02:30 hq-pg kernel: sync            D 965ebabd1040     0  6573   6572 0x0080
> Dec 17 23:02:30 hq-pg kernel: Call Trace:
> Dec 17 23:02:30 hq-pg kernel: [] ? generic_write_sync+0x70/0x70
>
> After some google I guess it's the problem that IO speed is low, while the
> insert requests are coming too much quickly. So PG put these into cache first
> then kernel called sync. I know I can queue the requests, so that POSTGRES
> will not accept these requests which will result in an increase in system
> cache. But is there any way I can tell POSTGRES, that you can only handle
> 2 records per second, or 4M per second, please don't accept inserts more
> than that speed. For me, POSTGRES just waiting is much better than current
> behavior. Any help will be much appreciated.

There isn't one magic-bullet solution for this. It may be that you can 
tune Linux, PG, or the filesystem to handle the load more 
gracefully; or that you just need more hardware. Streaming inserts might
be better batched and handled via synchronous ETL than pushed in at
random; at that point you can control the resources.

One approach might be tighter timeouts on the server or client, which
will leave the queries failing when the queue gets too high. That
frees up resources on the server, at the obvious expense of having
transactions roll back. On the other hand, you can end up with 
timeouts so tight that you end up thrashing, which doesn't help the
problem.
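One hedged way to express the tighter-timeout idea; the role name is made up
and the values are purely illustrative, not recommendations:

  -- cap how long a statement may run, and how long it may wait on a lock,
  -- for the ingestion role only
  ALTER ROLE ingest_writer SET statement_timeout = '5s';
  ALTER ROLE ingest_writer SET lock_timeout = '2s';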

The catch from this end is that, without more information on the system
you are dealing with, there isn't any quick-and-dirty fix.

I'd suggest looking over:



for suggestions and seeing which ones work or don't. If you have
more specific questions on the parameters or how to evaluate the
stats PG is keeping feel free to ask them here, but you will need
to be specific as to the stats and situation in which they were
acquired so that people have enough context to give you a reasonable
answer.

-- 
Steven Lembark                                       3646 Flora Place
Workhorse Computing                                  St. Louis, MO 63110
lemb...@wrkhors.com                                  +1 888 359 3508




Re: How to prevent POSTGRES killing linux system from accepting too much inserts?

2019-12-18 Thread Merlin Moncure
On Wed, Dec 18, 2019 at 3:53 AM James(王旭)  wrote:
>
> Hello,
>>
>> I encountered into this kernel message, and I cannot login into the Linux 
>> system anymore:
>>
>>
>>
>>> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1
>>>
>>> Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than 
>>> 120 seconds.
>>>
>>> Dec 17 23:02:30 hq-pg kernel: "echo 0 > 
>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>>
>>> Dec 17 23:02:30 hq-pg kernel: sync            D 965ebabd1040     0  6573   6572 0x0080
>>>
>>> Dec 17 23:02:30 hq-pg kernel: Call Trace:
>>>
>>> Dec 17 23:02:30 hq-pg kernel: [] ? 
>>> generic_write_sync+0x70/0x70
>>
>>
>> After some google I guess it's the problem that IO speed is low, while the 
>> insert requests are coming too much quickly.So PG put these into cache first 
>> then kernel called sync.
>>
>> I know I can queue the requests, so that POSTGRES will not accept these 
>> requests which will result in an increase in system cache.
>>
>> But is there any way I can tell POSTGRES, that you can only handle 2 
>> records per second, or 4M per second, please don't accept inserts more than 
>> that speed.
>>
>> For me, POSTGRES just waiting is much better than current behavior.
>>
>>
>> Any help will be much appreciated.

This is more a problem with the o/s than with postgres itself.

synchronous_commit is one influential parameter that can possibly help
mitigate the issue, with some safety tradeoffs (read the docs).  For
Linux, one possible place to look is tuning dirty_background_ratio and
related parameters.  The idea is that you want the o/s to be more
aggressive about syncing to reduce the impact of an i/o storm; basically
you are trading off some burst performance for consistency of
performance.  Another place to look is checkpoint behavior.  Do some
searches; there is a ton of information about this on the net.
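A sketch of the knobs mentioned, with deliberately illustrative values; check
the documentation for the tradeoffs before copying any of this:

  -- commits stop waiting for the WAL flush; the most recent commits can be
  -- lost on a crash, but the database itself stays consistent
  ALTER SYSTEM SET synchronous_commit = off;
  SELECT pg_reload_conf();

  -- on the Linux side, the analogous knobs are the vm.dirty_background_ratio
  -- and vm.dirty_ratio sysctls (smaller values start writeback earlier)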

merlin




Re: How to prevent POSTGRES killing linux system from accepting too much inserts?

2019-12-18 Thread Jeff Janes
On Wed, Dec 18, 2019 at 4:53 AM James(王旭)  wrote:

> Hello,
>>
>> I encountered into this kernel message, and I cannot login into the Linux
>> system anymore:
>
>
>>
>> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1
>>
>> Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than
>>> 120 seconds.
>>
>> Dec 17 23:02:30 hq-pg kernel: "echo 0 >
>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>
>> Dec 17 23:02:30 hq-pg kernel: sync            D 965ebabd1040     0  6573   6572 0x0080
>>
>> Dec 17 23:02:30 hq-pg kernel: Call Trace:
>>
>> Dec 17 23:02:30 hq-pg kernel: [] ?
>>> generic_write_sync+0x70/0x70
>>
>>
>> After some google I guess it's the problem that IO speed is low, while
>> the insert requests are coming too much quickly.So PG put these into cache
>> first then kernel called sync
>
>
Could you expand on what you found in the googling, with links?  I've never
seen these in my kernel log, and I don't know what they mean other than the
obvious that it is something to do with IO.  Also, what kernel and file
system are you using?


> .
>
> I know I can queue the requests, so that POSTGRES will not accept these
>> requests which will result in an increase in system cache.
>
> But is there any way I can tell POSTGRES, that you can only handle 2
>> records per second, or 4M per second, please don't accept inserts more than
>> that speed.
>
> For me, POSTGRES just waiting is much better than current behavior.
>
>
I don't believe there is a setting from within PostgreSQL to do this.

There was a proposal for a throttle on WAL generation back in February, but
with no recent discussion or (visible) progress:

https://www.postgresql.org/message-id/flat/2B42AB02-03FC-406B-B92B-18DED2D8D491%40anarazel.de#b63131617e84d3a0ac29da956e6b8c5f


I think the real answer here is to get a better IO system, or maybe a better
kernel.  Otherwise, once you find a painful workaround for one symptom you
will just smack into another one.

Cheers,

Jeff

>


Postgres AssertFailedException Exception

2019-12-18 Thread Patil, Prashant
Hi Team,

We are receiving the following AssertFailedException while executing the
pgRouting query listed below. Does anyone have an idea about this exception?
Is this a known bug? Is there a possible fix?
Server details -
PostgreSQL 9.3.25
Ubuntu 16.04.6 LTS
pgRouting 2.6.0

2019-12-16 19:31:55 EST Passenger AppPreloader: /var/w...er.com/ViewEngine 
(forking...) [41329-3] postgres@odw ERROR:  AssertFailedException: 
p1.tot_cost() == p2.tot_cost() at 
/build/pgrouting-Prt6v2/pgrouting-2.6.0/include/yen/pgr_ksp.hpp:63
   *** Execution path***
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(_Z13get_backtraceB5cxx11v+0x3c)
 [0x7ff5fd368b7c]
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(_ZNK7Pgr_kspIN9pgrouting5graph14Pgr_base_graphIN5boost14adjacency_listINS3_4vecSES5_NS3_11undirectedSENS0_12Basic_vertexENS0_10Basic_edgeENS3_11no_propertyENS3_5listSEEES7_S8_EEE9compPathsclERK4PathSH_+0x2dd)
 [0x7ff5fd3d2bbd]
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(_ZNSt8_Rb_treeI4PathS0_St9_IdentityIS0_EN7Pgr_kspIN9pgrouting5graph14Pgr_base_graphIN5boost14adjacency_listINS7_4vecSES9_NS7_11undirectedSENS4_12Basic_vertexENS4_10Basic_edgeENS7_11no_propertyENS7_5listSEEESB_SC_EEE9compPathsESaIS0_EE16_M_insert_uniqueIRKS0_EESt4pairISt17_Rb_tree_iteratorIS0_EbEOT_+0x4e)
 [0x7ff5fd3d2dae]
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(_ZN7Pgr_kspIN9pgrouting5graph14Pgr_base_graphIN5boost14adjacency_listINS3_4vecSES5_NS3_11undirectedSENS0_12Basic_vertexENS0_10Basic_edgeENS3_11no_propertyENS3_5listSEEES7_S8_EEE11doNextCycleERSC_+0x641)
 [0x7ff5fd3e2251]
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(_ZN7Pgr_kspIN9pgrouting5graph14Pgr_base_graphIN5boost14adjacency_listINS3_4vecSES5_NS3_11undirectedSENS0_12Basic_vertexENS0_10Basic_edgeENS3_11no_propertyENS3_5listSEEES7_S8_EEE3YenERSC_llib+0x2fb)
 [0x7ff5fd3e288b]
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(do_pgr_ksp+0xbd4) 
[0x7ff5fd3e5f84]
   
[bt]/usr/lib/postgresql/9.3/lib/libpgrouting-2.6.so(kshortest_path+0x40e) 
[0x7ff5fd3ced4e]
   [bt]postgres: postgres odw 10.76.0.185(44748) 
SELECT(ExecMakeTableFunctionResult+0x19c) [0x55625ab1dcbc]
   [bt]postgres: postgres odw 10.76.0.185(44748) SELECT(+0x1dc116) 
[0x55625ab31116]
   [bt]postgres: postgres odw 10.76.0.185(44748) 
SELECT(ExecScan+0x2c9) [0x55625ab1feb9]
   [bt]postgres: postgres odw 10.76.0.185(44748) 
SELECT(ExecProcNode+0x1a8) [0x55625ab187e8]
   [bt]postgres: postgres odw 10.76.0.185(44748) 
SELECT(standard_ExecutorRun+0x10e) [0x55625ab15c6e]
   [bt]/usr/lib/postgresql/9.3/lib/pg_stat_statements.so(+0x24a5) 
[0x7ffa2048a4a5]
   [bt]postgres: postgres odw 10.76.0.185(44748) SELECT(+0x2a8717) 
[0x55625abfd717]
   [bt]postgres: postgres odw 10.76.0.185(44748) 
SELECT(PortalRunFetch+0x180) [0x55625abff0a0]
   [bt]postgres: postgres odw 10.76.0.185(44748) SELECT(+0x1e423d) 
[0x55625ab3923d]

2019-12-16 19:31:55 EST Passenger AppPreloader: /var/w...er.com/ViewEngine 
(forking...) [41329-4] postgres@odw HINT:
2019-12-16 19:31:55 EST Passenger AppPreloader: /var/w...er.com/ViewEngine 
(forking...) [41329-5] postgres@odw CONTEXT:  PL/pgSQL function 
pgr_ksp(text,bigint,bigint,integer,boolean,boolean) line 4 at RETURN QUERY
   PL/pgSQL function 
nn_candidate_routes_from_and_to_latlon(geometry,geometry[]) line 75 at RETURN 
QUERY
2019-12-16 19:31:55 EST Passenger AppPreloader: /var/w...er.com/ViewEngine 
(forking...) [41329-6] postgres@odw STATEMENT:
   select st_multi(st_union(geom)) as 
geom,return_target_seq,path_id,st_length(st_union(st_transform(geom,2163))) as 
distance
from 
nn_candidate_routes_from_and_to_latlon(st_setsrid(st_makepoint(-117.99466873924484,33.92814775576908),4326),
 ARRAY(
   select geom from (
   select geom, 
st_distance(st_setsrid(st_makepoint(-117.99466873924484,33.92814775576908),4326),geom)
   from (
  select

(st_dumppoints((st_transform(st_segmentize(st_transform(wkt_geometry,2163),75),4326.geom
from (select * from staging_dw_cables where 
upper(owner) ~ 'OWNED' and start_ne_dw_enclosure_id is not null and 
end_ne_dw_enclosure_id is not null and pop_cable_tf = true
and wkt_geometry && 
st_transform(st_buffer(st_transform(st_setsrid(st_makepoint(-117.99466873924484,33.92814775576908),4326),2163),5),4326)
order by 
st_distance(st_setsrid(st_makepoint(-117.99466873924484,33.92814775576908),4326),wkt_geometry)
 limit 15) cables) sq order by 2 asc limit 15) sq
)) where geom is not null group by 
return_target_seq,path_id order by distance asc limit 25


Regards,
Prashant


Re: Postgres AssertFailedException Exception

2019-12-18 Thread Tom Lane
"Patil, Prashant"  writes:
> We are receiving following AssertFailedException exception while executing 
> pgrouting query listed below. Anyone have idea about this exception? Is this 
> known bug? Possible fix?

The stack trace shows the exception is well down inside libpgrouting,
so you'd need to talk to the author(s) of that extension.  I dunno
who they are or whether they read this list, but in any case they
probably would prefer some other method for submitting trouble reports.

> Server details -
> Postgresql 9.3.25

Hm, you realize of course that 9.3.x has been out of support for a year.
I rather imagine that the libpgrouting people wouldn't want to provide
bug fixes for that branch either :-(

regards, tom lane




Re: How to prevent POSTGRES killing linux system from accepting too much inserts?

2019-12-18 Thread Osahon Oduware
THE TRUTH CANNOT BE HIDDEN
**Explosion in my car.
https://www.docdroid.net/s11XHOS/the-truth-cannot-be-hidden.pdf

On Wed, 18 Dec 2019, 10:54 James(王旭),  wrote:

> Hello,
>>
>> I encountered into this kernel message, and I cannot login into the Linux
>> system anymore:
>
>
>>
>> Dec 17 23:01:50 hq-pg kernel: sh (6563): drop_caches: 1
>>
>> Dec 17 23:02:30 hq-pg kernel: INFO: task sync:6573 blocked for more than
>>> 120 seconds.
>>
>> Dec 17 23:02:30 hq-pg kernel: "echo 0 >
>>> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
>>
>> Dec 17 23:02:30 hq-pg kernel: sync            D 965ebabd1040     0  6573   6572 0x0080
>>
>> Dec 17 23:02:30 hq-pg kernel: Call Trace:
>>
>> Dec 17 23:02:30 hq-pg kernel: [] ?
>>> generic_write_sync+0x70/0x70
>>
>>
>> After some google I guess it's the problem that IO speed is low, while
>> the insert requests are coming too much quickly.So PG put these into cache
>> first then kernel called sync.
>
> I know I can queue the requests, so that POSTGRES will not accept these
>> requests which will result in an increase in system cache.
>
> But is there any way I can tell POSTGRES, that you can only handle 2
>> records per second, or 4M per second, please don't accept inserts more than
>> that speed.
>
> For me, POSTGRES just waiting is much better than current behavior.
>
>
>> Any help will be much appreciated.
>
>
>>
>> Thanks,
>
> James
>
>
>>


AccessExclusiveLock with pg_locks.locktype of tuple

2019-12-18 Thread Erik Jones
Greetings,

I've seen locks with mode of AccessExclusiveLock and locktype of tuple a
few times now but have never been able to reproduce one nor had a chance to
dig into them and now have a couple questions on them:

* When is this kind of heavy lock on a row/tuple taken out?  I've done some
code spelunking but have yet to find anything.
* An AccessExclusiveLock on a tuple should only block all other access to
the given tuple, yes?  It won't block access at the table level?  I ask
since the manual only discusses AccessExclusiveLocks in the context of
table level locking.

The only thing I've found so far on them is this old pgsql-general thread
wherein Tom Lane essentially just says, "Yeah, it can happen and is an
implementation detail that can change from version to version."  That was
on 9.4.x and the my most recent confirmed sighting was on a server running
11.3.

-- 
Erik Jones
mag...@gmail.com


READ UNCOMMITTED in postgres

2019-12-18 Thread Matthew Phillips
Hi,
The current READ UNCOMMITTED discussion happening on pgsql-hackers
[1] raised a question/use-case I recently encountered and could not
find a satisfactory solution for. If someone is attempting to poll for new
records on a high-insert-volume table that has a monotonically increasing
id, what is the best way to do it? As is, with a naïve implementation, rows
are not guaranteed to appear in monotonic order; so if you were to keep a
$MAX_ID, and SELECT WHERE p_id > $MAX_ID, you would hit gaps. Is there a
clean way to do this? I've seen READ UNCOMMITTED used for this with DB2.

Thanks
Matt

[1]
https://www.postgresql.org/message-id/CANP8%2Bj%2BmgWfcX9cTPsk7t%2B1kQCxgyGqHTR5R7suht7mCm_x_hA%40mail.gmail.com


Re: READ UNCOMMITTED in postgres

2019-12-18 Thread Stephen Frost
Greetings,

* Matthew Phillips (mphillip...@gmail.com) wrote:
> With the current READ UNCOMMITTED discussion happening on pgsql-hackers
> [1], It did raise a question/use-case I recently encountered and could not
> find a satisfactory solution for. If someone is attempting to poll for new
> records on a high insert volume table that has a monotonically increasing
> id, what is the best way to do it? As is, with a naïve implementation, rows
> are not guaranteed to appear in monotonic order; so if you were to keep a
> $MAX_ID, and SELECT WHERE p_id > $MAX_ID, you would hit gaps. Is there a
> clean way to do this? I've seen READ UNCOMMITTED used for this with DB2.

There's the LISTEN/NOTIFY system, which at a high level is a better
approach than using a polling system.
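A minimal sketch of that approach, assuming a hypothetical events table and
channel name (EXECUTE FUNCTION needs PG 11+; older releases spell it
EXECUTE PROCEDURE):

  CREATE FUNCTION notify_new_event() RETURNS trigger AS $$
  BEGIN
    PERFORM pg_notify('new_events', NEW.id::text);
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER events_notify
  AFTER INSERT ON events
  FOR EACH ROW EXECUTE FUNCTION notify_new_event();

  -- consumer session
  LISTEN new_events;
  -- committed inserts now push notifications carrying the new ids,
  -- so the consumer no longer has to poll and guess at id gaps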

Thanks,

Stephen


signature.asc
Description: PGP signature


Re: AccessExclusiveLock with pg_locks.locktype of tuple

2019-12-18 Thread Tom Lane
Erik Jones  writes:
> I've seen locks with mode of AccessExclusiveLock and locktype of tuple a
> few times now but have never been able to reproduce one nor had a chance to
> dig into them and now have a couple questions on them:
> * When is this kind of heavy lock on a row/tuple taken out?  I've done some
> code spelunking but have yet to find anything.

It's a transient state while waiting to acquire a "normal" tuple lock,
that is one recorded in the tuple header.  The explanation can be found
in src/backend/access/heap/README.tuplock, and the relevant code is
mostly in heap_lock_tuple().

regards, tom lane




Re: READ UNCOMMITTED in postgres

2019-12-18 Thread Thomas Kellerer
Matthew Phillips wrote on 19.12.2019 at 00:12:
> Hi, With the current READ UNCOMMITTED discussion happening on
> pgsql-hackers [1], It did raise a question/use-case I recently
> encountered and could not find a satisfactory solution for. If
> someone is attempting to poll for new records on a high insert volume
> table that has a monotonically increasing id, what is the best way to
> do it? As is, with a naïve implementation, rows are not guaranteed to
> appear in monotonic order; so if you were to keep a $MAX_ID, and
> SELECT WHERE p_id > $MAX_ID, you would hit gaps. Is there a clean way
> to do this? I've seen READ UNCOMMITTED used for this with DB2.

In my understanding, READ UNCOMMITTED in other databases is typically used to
avoid read locks, which Postgres doesn't have.
So I wonder what benefit READ UNCOMMITTED would have to begin with.

But, if you want to poll for new rows, then why don't you use a timestamp 
column?

  select *
  from the_table
  where created_at >=