On Fri, Jul 5, 2024 at 10:45 AM Guyren Howe wrote:
> On Jul 4, 2024, at 22:07, Lok P wrote:
>
> If you stick to the principle of grouping columns in a table when you use
> those columns together, you should be good.
>
> Note that you might want to split up the “parent” table if that naturally
>
On Thursday, July 4, 2024, Lok P wrote:
>
> But do you also suggest keeping those table pieces related to each other
> through the same primary key?
>
>
Yes, every row produced from the input data “row” should get the same ID
associated with it - either as the entire PK or as a component of a
multicolumn key.
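As a rough sketch (table and column names are made up for illustration),
that split could look like:

CREATE TABLE txn_main (
    txn_id     bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now()
    -- plus the columns that are read/written together most often
);

CREATE TABLE txn_addendum_billing (
    txn_id  bigint PRIMARY KEY REFERENCES txn_main (txn_id),
    -- rarely used, billing-related columns
    amount  numeric
);

-- Join the pieces back together only when all of them are needed:
SELECT m.*, b.*
FROM txn_main m
LEFT JOIN txn_addendum_billing b USING (txn_id)
WHERE m.txn_id = 42;

Each addendum row carries the same txn_id as its main row, so the id serves
as both the primary key and the foreign key in the addenda tables.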
"David G. Johnston" writes:
> On Thursday, July 4, 2024, Murthy Nunna wrote:
>> pg_terminate_backend(pid) will not work as it expects only one pid at a
>> time.
> Interesting…I wouldn’t expect the function calls to interact that
> way
TBH, my reaction to that was that the OP doesn't understand
On Fri, 5 Jul 2024 at 17:07, Lok P wrote:
> Also, I understand the technical limit on the max number of columns per
> table is ~1600. But would you advise restricting us to some low number
> long before reaching that limit, so that we will not face any anomalies
> when we grow in future?
The convention here is to in-line replies, or bottom-post. Top-posting
makes the archives more difficult to read.
On Thursday, July 4, 2024, Murthy Nunna wrote:
> pg_terminate_backend(pid) will not work as it expects only one pid at a
> time.
>
>
> Interesting…I wouldn’t expect the function calls to interact that way
On Fri, Jul 5, 2024 at 1:26 AM David G. Johnston
wrote:
> On Thu, Jul 4, 2024 at 12:38 PM Lok P wrote:
>
>>
>> Should we break the single transaction into multiple tables like one main
>> table and other addenda tables with the same primary key to join and fetch
>> the results wherever necessary
On 7/4/24 20:54, Murthy Nunna wrote:
Sorry, there is no problem with the following statement and the
environment variable. It works fine. But it terminates only one PID due
to LIMIT 1. I want to terminate all pids that meet these criteria. If I
remove LIMIT 1, pg_terminate_backend(pid) will not work as it expects
only one pid at a time.
Sorry, there is no problem with the following statement and the environment
variable. It works fine. But it terminates only one PID due to LIMIT 1. I want
to terminate all pids that meet these criteria. If I remove LIMIT 1,
pg_terminate_backend(pid) will not work as it expects only one pid at a time.
On Thu, Jul 4, 2024 at 4:56 PM Murthy Nunna wrote:
>
>
> How can I rewrite the above in psql?
>
The only real trick is using a psql variable instead of the shell-injection
of the environment variable. Use the --set CLI argument to assign the
environment variable to a psql variable, then refer to it in the query.
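A minimal sketch of what that looks like (the script name, database name,
and WHERE clause are made up for illustration):

# shell side: copy the environment variable into a psql variable
psql -d mydb --set=target_user="$DBUSER" -f terminate_blockers.sql

-- terminate_blockers.sql: :'target_user' expands to a safely quoted literal
SELECT pid, pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE usename = :'target_user';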
On Thu, Jul 4, 2024 at 1:57 PM Vasu Nagendra wrote:
>
>
> SELECT '{"n": {"a-b": 1, "@ab": 2, "ab": 3}}'::jsonb @? '$ ? (@.n.a\-b >= 3)';
>
>
Which is better written as:
select '{"n": {"a-b": 1, "@ab": 2, "ab": 3}}'::jsonb @? '$ ? (@.n."a-b" >=
3)';
Using the same double-quotes you defined the k
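The same double-quoting works for any key that is not a plain identifier;
for instance, a quick sketch against the same sample document:

select '{"n": {"a-b": 1, "@ab": 2, "ab": 3}}'::jsonb @? '$ ? (@.n."@ab" >= 2)';
-- returns true, because the "@ab" key holds 2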
Hello:
Following works-
SELECT pid, pg_terminate_backend(pid) FROM pg_stat_activity
WHERE pid IN (select unnest(pg_blocking_pids(pid)) from pg_stat_activity
where cardinality(pg_blocking_pids(pid)) > 0)
and usename = 'DBUSER_10'
and now() - state_change
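(The interval comparison is cut off above; as a sketch, with a made-up
5-minute threshold, the complete statement would be:)

SELECT pid, pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE pid IN (SELECT unnest(pg_blocking_pids(pid))
              FROM pg_stat_activity
              WHERE cardinality(pg_blocking_pids(pid)) > 0)
  AND usename = 'DBUSER_10'
  AND now() - state_change > interval '5 minutes';  -- threshold is illustrative

The set-returning form calls pg_terminate_backend() once per matching row, so
no LIMIT is needed to terminate all of them.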
Ultimately, the way you should store the data depends on how you will use it.
When you retrieve these values, what are the different ways you’ll be using
them?
Normalised representations are more flexible, and the pragmatic,
performance-based consideration is that all the values in a row are always
stored and read together.
Good afternoon,
I am running into the following issue with a JSONPath exists query.
This is a valid query
SELECT '{"n": {"a-b": 1, "@ab": 2, "ab": 3}}'::jsonb @? '$ ? (@.n.ab >= 3)';
This is an invalid query (syntax error)
SELECT '{"n": {"a-b": 1, "@ab": 2, "ab": 3}}'::jsonb @? '$ ? (@.n.a-b >=
On Thu, Jul 4, 2024 at 3:38 PM Lok P wrote:
> Hello,
> In one of our applications we receive transactions in message/event
> format and also in files; these are then parsed and stored into
> the relational database. The number of attributes/columns each transaction
> has is ~900+.
On 7/4/24 15:37, Lok P wrote:
Or say, what is the maximum number of columns per table we should
restrict ourselves to? Should we break the single transaction into multiple
tables, like one main table and other addenda tables with the same primary
key, to join and fetch the results wherever necessary?
900 co
On 7/4/24 10:24, Shenavai, Manuel wrote:
Thanks for the questions.
Here are some details:
1) we use this query to get the bloat:
https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
But in our load test, we have an empty database with 1 record that gets updated
very frequently.
On Thu, Jul 4, 2024 at 12:38 PM Lok P wrote:
>
> Should we break the single transaction into multiple tables like one main
> table and other addenda tables with the same primary key to join and fetch
> the results wherever necessary?
>
>
I would say yes. Find a way to logically group sets of columns.
Hello,
In one of our applications we receive transactions in message/event
format and also in files; these are then parsed and stored into
the relational database. The number of attributes/columns each transaction
has is ~900+. Logically they are part of one single transaction and s
Thanks for the questions.
Here are some details:
1) we use this query to get the bloat:
https://github.com/ioguix/pgsql-bloat-estimation/blob/master/table/table_bloat.sql
But in our load test, we have an empty database with 1 record that gets updated
very frequently. Here we just measure the DB size.
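As a cross-check on what that query reports (a sketch; the table name
bloat_test is made up), the raw size and dead-tuple counters can also be
read directly:

-- total on-disk size of the table, including indexes and TOAST
SELECT pg_size_pretty(pg_total_relation_size('bloat_test'));

-- live/dead tuples and vacuum activity as seen by the statistics collector
SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'bloat_test';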
On 7/4/24 08:16, Shenavai, Manuel wrote:
We see that our DB keeps growing under high load (many updates). Our DB has a
size of 200GB, with 199GB of bloat and 0 dead tuples. And when the DB is put
under high load (many updates), we still see the DB size grow. We are trying
to find parameters to avoid DB growth.
We see that our DB keeps growing under high load (many updates). Our DB has a
size of 200GB, with 199GB of bloat and 0 dead tuples. And when the DB is put
under high load (many updates), we still see the DB size grow. We are trying
to find parameters to avoid DB growth.
I think we need
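A sketch of the kind of per-table settings that typically help when a single
row is updated very frequently (the table name hot_table is made up; not from
the thread):

-- vacuum after ~1% of the table changes instead of the default 20%
ALTER TABLE hot_table SET (autovacuum_vacuum_scale_factor = 0.01,
                           autovacuum_vacuum_threshold = 50,
                           autovacuum_vacuum_cost_delay = 0);

-- leave free space in each page so updates can stay on the same page (HOT)
ALTER TABLE hot_table SET (fillfactor = 70);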