Well... it seems I have to rethink my application design. Anyway,
thank you all for your insights and suggestions.
On 12/18/2019 10:46 PM, Justin wrote:
I agree completely,
I do not think PostgreSQL is a good fit for Shalini based on the
conversation so far.
Tracking concurrency is going to be a killer...
I agree completely,
I do not think PostgreSQL is a good fit for Shalini based on the
conversation so far.
Tracking concurrency is going to be a killer... But I see the temptation
to use a DB for this, as the updates are ACID and thus less likely to
corrupt data for reason X.
On Wed, Dec 18, 2019 at 12:1
Justin writes:
> I now see what is causing this specific issue...
> The updates and row versioning happen on 2kB chunks at a time; that's
> going to make tracking what other clients are doing a difficult task.
Yeah, it's somewhat unfortunate that the chunkiness of the underlying
data storage b
I now see what is causing this specific issue...
The updates and row versioning happen on 2kB chunks at a time; that's
going to make tracking what other clients are doing a difficult task.
All the clients would have to have some means to notify all the other
clients that an update occurred in
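One built-in way to get that client-to-client signalling is PostgreSQL's LISTEN/NOTIFY machinery. A minimal sketch, not from the thread; the channel name `lob_changed`, the example OID 16403, and the payload format are assumptions:

```sql
-- Each client subscribes once per session.
LISTEN lob_changed;

-- A writer announces which large object it touched; the payload is just
-- a text string, here the OID of the modified large object. NOTIFY is
-- delivered only when the surrounding transaction commits, so listeners
-- never hear about rolled-back writes.
BEGIN;
SELECT lo_put(16403, 0, '\x48656c6c6f');  -- overwrite bytes at offset 0
NOTIFY lob_changed, '16403';
COMMIT;
```

Payloads are limited to roughly 8000 bytes and delivery reaches only currently connected listeners, so this complements rather than replaces per-object version checks.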
Justin writes:
> I have a question reading through this email chain. Does the large
> objects table, using these functions, work like normal MVCC, where there
> can be two versions of a large object in pg_largeobject?
Yes, otherwise you could never roll back a transaction that'd modified
a large object.
I have a question reading through this email chain. Does the large
objects table, using these functions, work like normal MVCC, where there
can be two versions of a large object in pg_largeobject? My gut says no, as
moving/copying potentially 4 TB of data would kill any I/O.
I can not find any document
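The chunking is visible directly in the catalog: pg_largeobject stores each object as rows of at most 2048 bytes (LOBLKSIZE), one per pageno, and those rows are versioned like any other heap rows, so rollback never has to move the whole object. A sketch, assuming a large object with OID 16403 already exists and the session has permission to read pg_largeobject (by default only superusers can):

```sql
-- Each row holds one page of up to 2048 bytes; updating a byte range
-- creates new row versions only for the pages it touches.
SELECT loid, pageno, octet_length(data) AS bytes
FROM pg_largeobject
WHERE loid = 16403
ORDER BY pageno
LIMIT 3;

-- Rollback works because the old row versions are still in place:
BEGIN;
SELECT lo_put(16403, 0, '\x00');  -- modify page 0
ROLLBACK;                         -- page 0 reverts to its old version
```

This is why a 4 TB object does not get copied wholesale on update: only the touched 2kB pages gain new versions.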
Shalini wrote:
> Could you also please state the reason why this happens in the case
> of large objects? Concurrent transactions are handled very well for
> other data types, but the same is not happening for lobs. Is it because
> the former are stored in a TOAST table and there is n
Hi, Thanks. I will try this approach. Could you also please state the reason why this happens in the case of large objects? Concurrent transactions are handled very well for other data types, but the same is not happening for lobs. Is it because the former are stored in a TOAST table and there is
Shalini wrote:
> > Is there a workaround to this concurrency issue without creating a
> > new large object?
The transaction failing with the "Tuple concurrently updated"
error could be resubmitted by the client, as if it were a
serialization failure.
Or the failure could be preve
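One way to prevent the error rather than retry it is to serialize writers to the same large object. A sketch, not from the thread: key a transaction-scoped advisory lock on the object's OID (16403 here is an assumed example), so concurrent writers queue instead of racing:

```sql
BEGIN;
-- Writers to the same large object wait here instead of failing with
-- "Tuple concurrently updated"; the lock is released automatically at
-- COMMIT or ROLLBACK.
SELECT pg_advisory_xact_lock(16403);
SELECT lo_put(16403, 1024, '\xcafebabe');  -- overwrite bytes at offset 1024
COMMIT;
```

Advisory locks are purely cooperative: every writer must take the same lock for this to help, and readers are unaffected.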
Hi Rene,
I am using PostgreSQL 11.2. The major version is 11 and the minor version is 2.
On 12/10/2019 11:24 AM, Rene Romero Benavides wrote:
Hi Shalini. The usual diagnostic info is your PostgreSQL server
version, major and minor version, such as in 12.1, where the major version
is 12 and the minor version (patch version) is 1.
Hi Shalini. The usual diagnostic info is your PostgreSQL server version,
major and minor version, such as in 12.1, where the major version is 12 and
the minor version (patch version) is 1.
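Both numbers can be read straight from the server; no assumptions here beyond a working connection:

```sql
SELECT version();        -- full version banner, including build info
SHOW server_version;     -- e.g. 11.2
SHOW server_version_num; -- e.g. 110002, convenient for numeric comparison
```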
On Fri, Dec 6, 2019 at 9:26 AM Shalini wrote:
> Hi all,
>
> I am working on a project which allows multiple users