> On 10 Apr 2016, at 9:07, Durumdara <durumd...@gmail.com> wrote:
> 
> Dear Adrian!
> 
> Again: as I see it, the leading whitespace of the code blocks was removed by 
> the mailing system.
> 
> We have an "ourlocks" table which holds records (TableName, RecordID, 
> SessionInnerID, TimeStamp, etc., with TableName/RecordID as the primary key).
> 
> If anybody wants to lock a record "for a long time", "across transactions", 
> it tries to insert a new record here.

Why are those records being locked? Reading on, it seems like you're trying to 
solve a fairly standard concurrency problem. Any RDBMS worth its salt can 
handle that for you; you don't need to do any of it manually.

> If another process wants to lock the same record, it can see this record (or 
> gets a violation error), so it stops.
> 
> This is not to protect all tables, only the main entities that have many 
> subtables, like "Products", "Offers", etc.
> We can't use transactions, because in the editor they must post/commit 
> subdata.
> And PG is different from other DBs: if ANY statement fails, it rolls back the 
> whole thing automatically (see my prior mail).
> In Firebird we can do simple record lock protection with the main connection's 
> second transaction, but in MS and PG we can't.

This sounds much more like a use-case for sub-transactions and select for 
update (which puts a temporary RDBMS-controlled lock on the relevant records) 
than for manual locking.
See: http://www.postgresql.org/docs/9.5/static/sql-begin.html and 
http://www.postgresql.org/docs/9.5/static/sql-select.html

You might also want to look into transaction isolation levels: 
http://www.postgresql.org/docs/9.5/interactive/mvcc.html
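For instance, reusing the offers table from the example below, a transaction
can opt into a stricter isolation level like this (a sketch, not something
your editor must do verbatim):

```sql
-- Under REPEATABLE READ, if a concurrent session commits a change to a row
-- this transaction subsequently tries to update, the update fails with a
-- serialization error instead of silently overwriting, so the application
-- can retry.
begin;
set transaction isolation level repeatable read;
select discount from offers where offer_id = 1234567;
update offers set discount = 0.12 where offer_id = 1234567;
commit;
```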

As an example of how a concurrent workflow with the above goes:

Session 1:
        begin;
        savepoint offer_update;
        select product_id from offers where offer_id = 1234567 for update;

Session 2:
        begin;
        savepoint offer_update;
        select product_id from offers where offer_id = 1234567 for update nowait;
#       ERROR (the record is locked by session 1; without NOWAIT this
#       statement would simply block until session 1 commits)
        rollback to savepoint offer_update;

Session 1:
        update offers set discount = 0.15 where offer_id = 1234567;
#       success
        commit;

Session 2: (retrying the earlier update)
        savepoint offer_update;
        select product_id from offers where offer_id = 1234567 for update nowait;
        update offers set discount = 0.10 where offer_id = 1234567;
#       success (session 1 has committed and released its lock)
        commit;

You'll need to add some logic to your application (that editor you were talking 
about) so that it inserts savepoints and handles failures of sub-transactions 
appropriately.
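Since you mentioned advisory locks: if you really do need locks that survive
across transactions without a bookkeeping table, Postgres's session-level
advisory locks are held until you release them or the session disconnects,
which makes your garbage-collection problem disappear entirely. A sketch
(the two integer keys are just an assumed encoding of your TableName/RecordID
pair; pick whatever encoding suits you):

```sql
-- Try to take an exclusive advisory lock for (table class 1 = offers,
-- record 1234567). Returns true if acquired, false if another session
-- holds it. The lock is released automatically if this session dies
-- or disconnects, and it is NOT tied to any transaction.
select pg_try_advisory_lock(1, 1234567);

-- ... edit the offer and its subtables in as many transactions as needed ...

-- Release explicitly when the user finishes editing.
select pg_advisory_unlock(1, 1234567);
```

When `pg_try_advisory_lock` returns false, you can query pg_locks to find
which backend holds it and show your "who is editing" message.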

> So we used real records in a real table. But how do we clean up if a client 
> disconnects without deleting its own records?
> For this we created our own sessioninfo table with inner id, user id, 
> timestamp, and [connectionid, connectiontime].
> The locking mechanism checks for the same lock (Offer, 117); if anybody holds 
> the lock, it checks whether that session is still alive.
> If it is an active connection (ID + TS), the lock is valid, and we can show 
> information like "who is editing, please ask him to release it, etc.".
> If not, we can remove the record and insert our own.

It sounds to me like you're complicating your code where you could be 
simplifying it. Possibly that's because you're used to a database that provides 
certain features to make up for the lack of others that are harder to 
implement. Both MS Access and Firebird are very much file-based desktop 
databases that are not really meant for concurrent access. The big RDBMSes (PG, 
MS SQL Server, Oracle, DB2) are _designed_ for such workloads.

> The main goal is to protect the main entities. It is working in MS.
> My question was about how to get my client's connection timestamp, the same 
> way I get the backend PID with pg_backend_pid.
> But as you wrote, I can get it from the activity view. Because no other PID 
> can be the same as mine, I can select my own row from the table.
> 
> You said it has dangers (like GUID repetition). Yes, it has. And it's not the 
> BAD SYSADMIN but the summer/winter time change that is dangerous (the 
> backward shift). In PG we can extend our "guid" with IP and port too, and 
> this could be safe enough for us.

In that case you should at least use UTC timestamps. Still, with such an 
implementation it will be hard to create a reliable system.
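If you do stick with the timestamp approach, note that the server already
tracks each backend's start time; you can read your own session's PID and
start time from the pg_stat_activity view:

```sql
-- pid + backend_start together identify this session even after PID reuse;
-- backend_start carries a time zone, so project it to UTC for a stable key.
select pid, backend_start at time zone 'UTC' as backend_start_utc
  from pg_stat_activity
 where pid = pg_backend_pid();
```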

> Thanks
> 
> 
> 
> 
> 2016-04-09 16:05 GMT+02:00 Adrian Klaver <adrian.kla...@aklaver.com>:
> On 04/09/2016 01:30 AM, Durumdara wrote:
> Dear Everybody!
> 
> 
> In MS we had a "persistent lock" structure and method.
> This works across transactions because it is based on real records in a real 
> table with a primary key (tablename + id).
> 
> For garbage collection we had special session info.
> In MS the session id is a smallint, so it can repeat after server restarts, 
> but my colleague found a "session creation timestamp".
> Together these form a unique key.
> With this we can check for dead sessions and clean up their records.
> 
> It might help to explain more what it is you are trying to achieve.
> 
> First, I am not sure what you mean by 'persistent lock', especially as it 
> applies to Postgres.
> 
> Second, I assume by garbaging you mean garbage collection of something?
> If that is the case what exactly are you garbage collecting?
> I see 'clean records', what records would these be?
> In particular, on Postgres, where are you going to do this?
> 
> 
> We want to create the same mechanism.
> 
> If the above questions did not already cover this, what mechanism?
> 
> 
> I know there are advisory locks in PG, but I want to use the session id.
> 
> This could be: pg_backend_pid()
> 
> But the PID may repeat.
> Where can I get a timestamp or some other unique data with which I can 
> create a combined primary key?
> 
> Thanks for your help!
> 
> dd
> 
> 
> 
> -- 
> Adrian Klaver
> adrian.kla...@aklaver.com
> 

Alban Hertroys
--
If you can't see the forest for the trees,
cut the trees and you'll find there is no forest.



-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
