Siddharth Jain writes:
> when my application (Node.js) receives a class 40 error:
> Class 40 — Transaction Rollback
> 40000 transaction_rollback
> 40002 transaction_integrity_constraint_violation
> 40001 serialization_failure
> 40003 statement_completion_unknown
> 40P01 deadlock_detected
> then does
On Tue, Apr 11, 2023 at 04:44:54PM +, Evgeny Morozov wrote:
> We have data_checksums=on. (It must be on by default, since I cannot
> find that in our config files anywhere.)
initdb does not enable checksums by default; it requires -k/--data-checksums,
so this setting likely comes from yo
Hi All,
when my application (Node.js) receives a class 40 error:
Class 40 — Transaction Rollback
40000 transaction_rollback
40002 transaction_integrity_constraint_violation
40001 serialization_failure
40003 statement_completion_unknown
40P01 deadlock_detected
then does it mean PG has already rolle
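On the question itself: after a Class 40 error the failed transaction cannot continue; once it is rolled back, the client's job is simply to retry it. A minimal retry sketch follows. The driver interface here is simulated in pure Python, not a real Node.js or PostgreSQL API; `run_txn` and `DatabaseError` are illustrative names.

```python
# Sketch: retry a transaction when a Class 40 SQLSTATE is reported.
# After such an error the transaction is dead server-side, so the
# client only needs to roll back and issue a fresh attempt.

class DatabaseError(Exception):
    """Stand-in for a driver error carrying a SQLSTATE code."""
    def __init__(self, sqlstate, message=""):
        super().__init__(message)
        self.sqlstate = sqlstate

def with_retry(run_txn, max_attempts=3):
    for attempt in range(1, max_attempts + 1):
        try:
            return run_txn()
        except DatabaseError as e:
            # Class 40 = Transaction Rollback (40001, 40P01, ...).
            # Note 40003 (statement_completion_unknown) may need extra
            # care, since the statement's effects are uncertain.
            if e.sqlstate.startswith("40") and attempt < max_attempts:
                continue
            raise

# Demo: fail once with a serialization failure, then succeed.
calls = {"n": 0}
def flaky_txn():
    calls["n"] += 1
    if calls["n"] == 1:
        raise DatabaseError("40001", "serialization_failure")
    return "committed"

print(with_retry(flaky_txn))  # committed after one retry
```

The key design point is that only Class 40 codes are retried; any other SQLSTATE is re-raised unchanged.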
I'm not getting every reply in the list but I want to point this out from the
archive version of this thread:
> > I was under the impression that when using INSERT SELECT ORDER BY the
> > sequence
> > ids were generated using the select order.
> But someone said that’s not guaranteed, especia
On Tue, 11 Apr 2023, Federico wrote:
>My understanding was that they are generated in select order
But are they? (I don’t know, but I’d not assume that.)
>If my understanding is incorrect, would this alternative guarantee the above
>INSERT INTO t(id, data)
>SELECT nextval(pg_get_serial_
On Tue, 11 Apr 2023 at 23:46, Rob Sargent wrote:
>
>
> >> Can your client retain a hashmap of md5,data pairings, allowing the
> >> lookup on the way back using the returned data and supplied id?
> >>
> > When using unique columns or similar, that's something that is done,
> > but if there are no u
On Tue, 11 Apr 2023 at 23:44, Thorsten Glaser wrote:
>
> On Tue, 11 Apr 2023, Federico wrote:
>
> >I was under the impression that when using INSERT SELECT ORDER BY the
> >sequence
> >ids were generated using the select order.
>
> But someone said that’s not guaranteed, especially when INSERT wil
Can your client retain a hashmap of md5,data pairings, allowing the
lookup on the way back using the returned data and supplied id?
When using unique columns or similar, that's something that is done,
but if there are no unique columns in the value no match can be done
reliably with the sourc
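The md5-pairing idea quoted above can be sketched in pure Python (no DB involved; the row data and returned ids are made up for the demo), with the thread's own caveat that duplicate payloads make the match ambiguous:

```python
import hashlib

# Hash each row's data on the way in, then use the hash of the
# RETURNING payload to find the source object. Duplicate payloads are
# handled with a list per hash, but as noted in the thread this is
# only reliable when rows are actually distinguishable.

def md5_of(data: str) -> str:
    return hashlib.md5(data.encode()).hexdigest()

source_rows = ["a", "b", "a"]           # what the ORM sends
pending = {}                            # md5 -> indices still unmatched
for i, data in enumerate(source_rows):
    pending.setdefault(md5_of(data), []).append(i)

# Pretend RETURNING gave us (id, data) pairs in arbitrary order:
returned = [(11, "b"), (10, "a"), (12, "a")]
ids = [None] * len(source_rows)
for row_id, data in returned:
    idx = pending[md5_of(data)].pop(0)  # first unmatched source row
    ids[idx] = row_id

print(ids)  # [10, 11, 12]
```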
On Tue, 11 Apr 2023, Federico wrote:
>I was under the impression that when using INSERT SELECT ORDER BY the sequence
>ids were generated using the select order.
But someone said that’s not guaranteed, especially when INSERT will
be parallelised later.
bye,
//mirabilos
--
15:41⎜ Somebody write a
On Tue, 11 Apr 2023 at 23:31, Rob Sargent wrote:
>
> On 4/11/23 14:37, Federico wrote:
> >
> > The problem here is not having the auto increment id in a particular
> > order; it's that there is apparently no correlation between the position
> > of an element in the values clause and the id gen
On Tue, 11 Apr 2023 at 23:22, Thorsten Glaser wrote:
>
> On Tue, 11 Apr 2023, Federico wrote:
>
> >Of course sorting the returned ids is only viable when using a serial
>
> Yes, which is why I pointed out it doesn’t have to be.
>
> >or identity column, that's why in the general case I've mentioned
On 4/11/23 14:37, Federico wrote:
The problem here is not having the auto increment id in a particular
order; it's that there is apparently no correlation between the position
of an element in the values clause and the id generated. That's the
reason for using the sentinel column in the general solu
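The sentinel-column approach referred to here can be sketched as follows. This is simulated without a database; the column name, positions, and returned ids are assumptions for the demo, not anything from the thread.

```python
# Sketch: carry the source position through the INSERT as an extra
# "sentinel" column and read it back from RETURNING, so no ordering
# assumption about generated ids is needed at all.

source_rows = ["x", "y", "z"]

# What would be sent, e.g.:  INSERT ... (data, sentinel) VALUES (%s, %s)
to_insert = [(data, pos) for pos, data in enumerate(source_rows)]

# Pretend RETURNING id, sentinel came back in arbitrary server order:
returned = [(101, 2), (100, 0), (102, 1)]

ids = [None] * len(source_rows)
for row_id, sentinel in returned:
    ids[sentinel] = row_id          # sentinel maps back to source index

print(ids)  # [100, 102, 101]
```

The cost is one throwaway column in the statement; the benefit is that the match is exact regardless of how the server orders the returned rows.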
On Tue, 11 Apr 2023, Federico wrote:
>Of course sorting the returned ids is only viable when using a serial
Yes, which is why I pointed out it doesn’t have to be.
>or identity column, that's why in the general case I've mentioned the
>insert with sentinel column
But it was pointed out that that
On Tue, 11 Apr 2023 at 22:59, Thorsten Glaser wrote:
>
> On Tue, 11 Apr 2023, Federico wrote:
>
> >The problem here is not having the auto increment id in a particular
>
> The id might not even be auto-increment but UUID or something…
> (I am surprised you would even try to insert multiple rows at
On Tue, Apr 11, 2023, at 4:22 PM, Tom Lane wrote:
> Federico writes:
> > Searching the archive, it seems that using the INSERT SELECT ORDER BY
> > form should be a better solution,
> > so the above insert should be rewritten as
>
> > INSERT INTO t(data)
> > SELECT data FROM (VALUES ('a',
On Tue, 11 Apr 2023, Federico wrote:
>The problem here is not having the auto increment id in a particular
The id might not even be auto-increment but UUID or something…
(I am surprised you would even try to insert multiple rows at once.)
bye,
//mirabilos
--
15:41⎜ Somebody write a testsuite fo
Thanks for the answers
> 2) What would you order by, id or data or both?
by values order (which, incidentally, seems to be what PG does)
> with i as (INSERT INTO t(data) VALUES ('a', 1), ('b', 2), ('c', 3)
> returning id)
> select i.id from i order by id;
The problem here is not having the auto in
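The "order by id" matching in the CTE above relies on ids being assigned in values order from a single ascending sequence; the whole point of the thread is that the second assumption is not guaranteed. A pure-Python sketch of that best-effort matching, with made-up ids:

```python
# Sketch: when ids come from one ascending sequence AND rows were
# inserted in values order, sorting the returned ids re-aligns them
# with the source rows. Best-effort only, per the discussion above.

source_rows = [("a", 1), ("b", 2), ("c", 3)]
returned_ids = [3, 1, 2]              # arbitrary RETURNING order

matched = list(zip(sorted(returned_ids), source_rows))
print(matched)  # [(1, ('a', 1)), (2, ('b', 2)), (3, ('c', 3))]
```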
Federico writes:
> Searching the archive, it seems that using the INSERT SELECT ORDER BY
> form should be a better solution,
> so the above insert should be rewritten as
> INSERT INTO t(data)
> SELECT data FROM (VALUES ('a', 1), ('b', 2), ('c', 3)) as vv(data,
> num) ORDER BY num
> RETU
On 11/4/23 23:06, Adrian Klaver wrote:
On 4/11/23 12:47, Federico wrote:
Hello list,
https://www.sqlite.org/lang_returning.html#limitations_and_caveats
Searching the archive, it seems that using the INSERT SELECT ORDER BY
form should be a better solution,
so the above insert should be r
On 4/11/23 12:47, Federico wrote:
Hello list,
https://www.sqlite.org/lang_returning.html#limitations_and_caveats
Searching the archive, it seems that using the INSERT SELECT ORDER BY
form should be a better solution,
so the above insert should be rewritten as
INSERT INTO t(data)
SELE
On 4/11/23 12:47, Federico wrote:
Hello list,
I have a few clarification questions regarding using insert with returning.
The use case is SQLAlchemy development, where the ORM wants to insert
a list of rows,
get back the generated ids, defaults, etc, and match the returned values with
the origin
Hello list,
I have a few clarification questions regarding using insert with returning.
The use case is SQLAlchemy development, where the ORM wants to insert
a list of rows,
get back the generated ids, defaults, etc, and match the returned values with
the original list of orm objects.
The followi
Hi
On Tue, 11 Apr 2023 at 19:42, Joe Carlson wrote:
> Hello,
>
> I’ve recently encountered the issue of trying to insert more than 1 GB
> into a TEXT column. While the docs say TEXT is unlimited length, I had been
> unaware of the 1 GB buffer size limitations.
>
I think this is some mis
On 4/11/23 11:41, Joe Carlson wrote:
Hello,
I’ve recently encountered the issue of trying to insert more than 1 GB into a
TEXT column. While the docs say TEXT is unlimited length, I had been unaware of
the 1 GB buffer size limitations.
We can debate whether or not saving something this big in
Hello,
I’ve recently encountered the issue of trying to insert more than 1 GB into a
TEXT column. While the docs say TEXT is unlimited length, I had been unaware of
the 1 GB buffer size limitations.
We can debate whether or not saving something this big in a single column is a
good idea (spoile
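One common workaround in cases like this is to split the oversized value into ordered chunks client-side (in a side table, or as multiple rows) and reassemble on read. A pure-Python sketch of the chunking itself; the chunk size is an arbitrary assumption, chosen only to stay under the limit:

```python
# Sketch: split an oversized text value into (index, piece) chunks that
# each fit comfortably under PostgreSQL's ~1 GB per-field limit, and
# reassemble them in index order on the way back out.

CHUNK = 256 * 1024 * 1024  # stays well under the 1 GB varlena limit

def split_chunks(text, size=CHUNK):
    return [(i // size, text[i:i + size]) for i in range(0, len(text), size)]

big = "x" * 10
chunks = split_chunks(big, size=4)   # tiny size just for the demo
assembled = "".join(part for _, part in sorted(chunks))
assert assembled == big              # round-trips losslessly
print(chunks)  # [(0, 'xxxx'), (1, 'xxxx'), (2, 'xx')]
```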
> No idea about the former, but bad hardware is a good enough explanation.
> As to keeping it from happening: use good hardware.
Alright, thanks, I'll just keep my fingers crossed that it doesn't
happen again then!
> Also: Use checksums. PostgreSQL offers data checksums[1]. Some
filesystems also
David Rowley writes:
> On Tue, 11 Apr 2023 at 23:03, Christian Schröder
> wrote:
>> Particularly interesting are the sequential scans. In the old plan, we have
>> node #21, which took 32 seconds. Almost all of the time goes into actual
>> I/O. In the new plan, the corresponding node is #34. It
On Tue, 11 Apr 2023 at 23:03, Christian Schröder
wrote:
> We have recently migrated from our old PostgreSQL 9.4 server to a PostgreSQL
> 15 server. Even though the new machine has more resources, we see a
> considerable decrease in the performance of some of our heavier queries, and
> I have no
Hi list,
We have recently migrated from our old PostgreSQL 9.4 server to a PostgreSQL 15
server. Even though the new machine has more resources, we see a considerable
decrease in the performance of some of our heavier queries, and I have no idea
where I should start tuning.
Old database: Post