The only plausible way I see to use a subquery, both in the query
and when setting the variable, is if the relationship is one-to-one,
and there is an index that matches the predicate:
UPDATE table1 t1
SET column_value = (SELECT t2.column_value FROM table2 t2 WHERE t2.column_relation
= t1.
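A minimal sketch of that correlated-subquery pattern, using only the placeholder names from the fragment above (table1, table2, column_value, column_relation) rather than any real schema:

-- Placeholder names; adjust to the actual tables and columns.
-- The index on table2(column_relation) keeps each per-row lookup cheap, and the
-- one-to-one relationship ensures the subquery returns at most one row.
CREATE INDEX IF NOT EXISTS table2_column_relation_idx
    ON table2 (column_relation);

UPDATE table1 t1
SET column_value = (SELECT t2.column_value
                    FROM table2 t2
                    WHERE t2.column_relation = t1.column_relation);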
On 9/14/24 11:10, Vinay Oli wrote:
Hi
I've checked the database size with the meta-command \l+ and I also checked
at the file-system level with du -sh on the 49181 folder. 49181 is the db OID.
Minimal information is not going to solve this issue.
Provide the exact output of commands from:
1) The Primary data
Hi,
## Thomas Ziegler (thomas.zieg...@holmsecurity.com):
There's a lot of information missing here. Let's start from the top.
> I have had my database killed by the kernel oom-killer. After that I
> turned off memory over-committing and that is where things got weird.
What exactly did you s
Hi ,
I've verified there's no crap sitting there. I've checked the database size with
the meta-command \l+ and I also checked at the file-system level with du -sh on
the 49181 folder. 49181 is the db OID.
Thanks,
Vinay kumar
On Sat, Sep 14, 2024, 11:00 PM Laurenz Albe wrote:
> On Sat, 2024-09-14 at 22:49 +0530, Vina
Hi
I've checked the database size with the meta-command \l+ and I also checked at the
file-system level with du -sh on the 49181 folder. 49181 is the db OID.
The pg_wal directory is the same 40 GB on both the primary and standby servers.
All the directories are the same size; only the 49181 folder (the database OID) has
a different size.
Tha
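A sketch of queries that can be run on both the primary and the standby to compare those figures from inside the database (49181 is the OID mentioned in this thread; pg_ls_waldir() requires superuser or pg_monitor membership):

-- Logical size of the database, the same figure \l+ reports.
SELECT datname,
       oid,
       pg_size_pretty(pg_database_size(oid)) AS db_size
FROM pg_database
WHERE oid = 49181;

-- Total size of pg_wal, to compare with du -sh on the file system.
SELECT pg_size_pretty(sum(size)) AS wal_size
FROM pg_ls_waldir();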
On Sat, Sep 14, 2024 at 1:19 PM Vinay Oli wrote:
> Hi Team
>
> I have been using PostgreSQL for the past 6 years. PostgreSQL has
> significantly impacted my life, providing me with great opportunities for
> knowledge and self-development.
>
> I'm currently facing a strange issue with PostgreSQL 1
On Sat, 2024-09-14 at 22:49 +0530, Vinay Oli wrote:
> I'm currently facing a strange issue with PostgreSQL 15.0. I have a
> primary-standby setup that is in sync, with a replication slot in place.
> There are 18 databases, and one of the databases on the primary side
> is 104 GB, while the same dat
On 9/14/24 10:19, Vinay Oli wrote:
Hi Team
I have been using PostgreSQL for the past 6 years. PostgreSQL has
significantly impacted my life, providing me with great opportunities
for knowledge and self-development.
I'm currently facing a strange issue with PostgreSQL 15.0. I have a
primary-
On Sat, 2024-09-14 at 16:10 +0530, yudhi s wrote:
> > However, the strain on your system's resources and particularly the row
> > locks will impair normal database work.
> >
> > Essentially, you can either take an extended down time or perform the
> > updates
> > in very small chunks with a very
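One possible shape for those small chunks, as a rough sketch only; target_table, reference_table, id, code, and old_value are invented names, and the regex stands in for however "not yet converted" is actually detected:

-- Hypothetical batched update: convert at most 5000 rows per statement, commit,
-- and repeat until no rows are left. Short transactions keep the number of row
-- locks held at any one time small.
WITH batch AS (
    SELECT t.id, r.code AS new_value
    FROM target_table t
    JOIN reference_table r ON r.old_value = t.column_value
    WHERE t.column_value ~ '^[0-9]+$'   -- rows still holding the old numeric value
    LIMIT 5000
    FOR UPDATE OF t SKIP LOCKED
)
UPDATE target_table t
SET column_value = batch.new_value
FROM batch
WHERE t.id = batch.id;

Run it in a loop from psql or application code and stop when the UPDATE reports 0 rows.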
Hi Team
I have been using PostgreSQL for the past 6 years. PostgreSQL has
significantly impacted my life, providing me with great opportunities for
knowledge and self-development.
I'm currently facing a strange issue with PostgreSQL 15.0. I have a
primary-standby setup that is in sync, with a rep
Hello Community!
I have had my database killed by the kernel oom-killer. After that I turned off
memory over-committing and that is where things got weird.
I have PostgreSQL 16.2 running on Debian Bookworm. The machine has 64 GB of
RAM and runs in a VM in an OpenStack environment.
I have
On Sat, Sep 14, 2024 at 4:17 PM Peter J. Holzer wrote:
> On 2024-09-14 00:54:49 +0530, yudhi s wrote:
> > As "thiemo" mentioned , it can be done as below method, but if we have
> > multiple lookup tables to be populated for multiple columns , then , how
> can
> > the INSERT query be tweaked to ca
On Sat, Sep 14, 2024 at 4:55 PM Peter J. Holzer wrote:
>
> Which in turn means that you want as little overhead as possible per
> batch which means finding those 5000 rows should be quick. Which brings
> us back to Igor's question: Do you have any indexes in place which speed
> up finding those 5
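One way to keep locating each batch cheap is a partial index that only covers the rows still waiting to be converted, for example (the names and the all-digits pattern are illustrative, not taken from the thread):

-- Hypothetical partial index: only rows whose value still looks like the old
-- numeric code are indexed, so the index shrinks as the batched update proceeds.
-- On a partitioned table this creates a matching index on every partition.
CREATE INDEX IF NOT EXISTS target_pending_idx
    ON target_table (id)
    WHERE column_value ~ '^[0-9]+$';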
On Fri, Sep 13, 2024 at 11:59 PM yudhi s wrote:
>
>>
>> Do you have any indexes?
>> If not - you should, if yes - what are they?
>>
>>
>>
> Yes, we have a primary key on this table, which is on a UUID-type column, and
> we also have other indexes on other timestamp columns. But how is this
> going
On 2024-09-14 16:10:15 +0530, yudhi s wrote:
> On Sat, 14 Sept, 2024, 1:09 pm Laurenz Albe wrote:
>
> On Sat, 2024-09-14 at 08:43 +0530, yudhi s wrote:
> > We have to update a column value (from numbers like '123' to codes like 'abc'
> > by looking into reference table data) i
On Sat, 14 Sept, 2024, 1:09 pm Laurenz Albe wrote:
> On Sat, 2024-09-14 at 08:43 +0530, yudhi s wrote:
> > We have to update a column value (from numbers like '123' to codes like 'abc'
> > by looking into reference table data) in a partitioned table with billions
> > of rows in it, with eac
On 2024-09-14 00:54:49 +0530, yudhi s wrote:
> As "thiemo" mentioned , it can be done as below method, but if we have
> multiple lookup tables to be populated for multiple columns , then , how can
> the INSERT query be tweaked to cater the need here?
Just use a join:
insert into target(val1, v
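A sketch of what that join-based INSERT might look like with more than one lookup table; staging, lookup1, lookup2, and all column names are made up here, since the original statement is cut off above:

-- Hypothetical: resolve two raw values against two lookup tables in a single pass.
-- Use LEFT JOINs instead if missing lookup rows should produce NULLs rather than
-- dropping the row from the insert.
INSERT INTO target (val1, val2, created_at)
SELECT l1.code,
       l2.code,
       s.created_at
FROM staging s
JOIN lookup1 l1 ON l1.raw_value = s.raw_val1
JOIN lookup2 l2 ON l2.raw_value = s.raw_val2;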
On Sat, 2024-09-14 at 12:05 +0200, Alban Hertroys wrote:
>
> That’s because the replacement data is an array of objects, not a
> single object.
>
> You need to iterate through the array elements to build your
> replacement data, something like what I do here with a select
> (because that’s way ea
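The general shape of that iterate-and-rebuild approach, as a sketch only; the table events, the column data, the amend key, and the added applied flag are all stand-ins for the actual JSON structure being discussed:

-- Hypothetical: rebuild the "amend" array element by element, then write the new
-- array back with jsonb_set. COALESCE keeps an empty array from becoming NULL.
UPDATE events e
SET data = jsonb_set(
        e.data,
        '{amend}',
        COALESCE(
            (SELECT jsonb_agg(elem || '{"applied": true}'::jsonb)
             FROM jsonb_array_elements(e.data -> 'amend') AS elem),
            '[]'::jsonb)
    )
WHERE e.data ? 'amend';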
> On 14 Sep 2024, at 10:33, Dan Kortschak wrote:
(…)
> I'm still having difficulties with the second part, which is to update
> the contents of the amend array in the JSON.
>
> So far I'm able to append the relevant details to the append array, but
> I'm unable to correctly select the corrects
On Wed, 2024-07-24 at 00:23 +, Dan Kortschak wrote:
> On 7/23/24 13:11, Vincent Veyron wrote:
> > On Mon, 15 Jul 2024 20:31:13 +
> >
> > This is the go-to page for anything SQL:
> > https://www.postgresql.org/docs/current/sql-commands.html
> >
> > For DateTime types:
> > https://www.post
On Sat, 2024-09-14 at 08:43 +0530, yudhi s wrote:
> We have to update a column value (from numbers like '123' to codes like 'abc'
> by looking into reference table data) in a partitioned table with billions
> of rows in it, with each partition having 100s of millions of rows. As we tested
> for ~30mil
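For comparison, the single-statement form of such an update joins the partitioned table directly to the reference data; it is the simplest to write but runs as one long transaction, which is exactly where the locking and downtime concerns above come from (all names are hypothetical):

-- Hypothetical single-pass version of the lookup-based update.
UPDATE partitioned_table p
SET column_value = r.code
FROM reference_table r
WHERE r.old_value = p.column_value
  AND p.column_value ~ '^[0-9]+$';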