On Tue, May 9, 2023 at 3:15 AM Michael Paquier wrote:
>
> On Mon, May 08, 2023 at 07:15:20PM +0530, Dilip Kumar wrote:
> > I am able to reproduce this using the steps given above, I am also
> > trying to analyze this further. I will send the update once I get
> > some clue.
/dev/null 2>&1 ;
> psql testdb -c "select 1" > /dev/null 2>&1 ;
> done;
> 3) Force some checkpoints:
> while true; do psql -c 'checkpoint' > /dev/null 2>&1; sleep 4; done
> 4) Force a few crashes and recoveries:
> while true ; do pg_ctl stop -m immediate ; pg_ctl start ; sleep 4 ; done
>
I am able to reproduce this using the steps given above, I am also
trying to analyze this further. I will send the update once I get
some clue.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
also continues to append more WAL to the
same file. But pg_receivewal is a separate utility, so this can serve
as an archive location, which means we can restore only complete WAL
files.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Sun, Dec 5, 2021 at 10:55 AM Dilip Kumar wrote:
>
> On Fri, Dec 3, 2021 at 9:02 PM Tom Lane wrote:
> >
> > Dilip Kumar writes:
> > > On Thu, Dec 2, 2021 at 9:35 AM Dilip Kumar wrote:
> > >> I think there is no such view or anything which tells about
On Fri, Dec 3, 2021 at 9:02 PM Tom Lane wrote:
>
> Dilip Kumar writes:
> > On Thu, Dec 2, 2021 at 9:35 AM Dilip Kumar wrote:
> >> I think there is no such view or anything which tells about which
> >> backend or transaction has more than 64 sub transaction. But
On Thu, Dec 2, 2021 at 9:35 AM Dilip Kumar wrote:
> I think there is no view or anything else which tells which
> backend or transaction has more than 64 subtransactions. But if we
> are ready to modify the code then we can LOG that information in
> GetNewTransactionId(), where we mark the status as overflowed:
MyProc->subxidStatus.overflowed = substat->overflowed = true;
<-- we can log or put a breakpoint here and identify which statement is
creating the overflow -->
}
IMHO, it is good to LOG such information if we are not already logging
this anywhere.
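As a minimal sketch (plain Python, not PostgreSQL code) of the rule being discussed: a backend caches at most PGPROC_MAX_CACHED_SUBXIDS (64 by default) subtransaction XIDs, and the 65th one flips the overflowed flag, which is the event proposed above for LOG-ing:

```python
# Sketch only: models the per-backend subxid cache behaviour described
# in the email, not actual PostgreSQL code.
PGPROC_MAX_CACHED_SUBXIDS = 64  # matches the PostgreSQL default


def simulate(nsubxacts):
    """Return (cached_count, overflowed) after nsubxacts subtransactions."""
    cached, overflowed = 0, False
    for _ in range(nsubxacts):
        if cached < PGPROC_MAX_CACHED_SUBXIDS:
            cached += 1          # XID fits in the per-backend cache
        else:
            overflowed = True    # the point where GetNewTransactionId() could LOG
    return cached, overflowed


print(simulate(64), simulate(65))
```

Once overflowed, visibility checks must consult pg_subtrans instead of the in-memory cache, which is what makes the snapshot path expensive.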
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
nMVCCSnapshot ()
13 #10 0x008f662e in HeapTupleSatisfiesMVCC ()
12 #11 0x004c436e in heapgetpage ()
[2]https://www.cybertec-postgresql.com/en/subtransactions-and-performance-in-postgresql/
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
pg_stat_activity whether there is some process that
shows SLRURead/SLRUWrite as its wait event and is not coming out of
that state?
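That check can be sketched as a query against pg_stat_activity (column names as of PostgreSQL 9.6 and later; SLRURead and SLRUWrite are IO-class wait events):

```sql
-- Sketch: list backends currently waiting on SLRU reads/writes.
SELECT pid, state, wait_event_type, wait_event
FROM pg_stat_activity
WHERE wait_event IN ('SLRURead', 'SLRUWrite');
```

If the same pids keep appearing across repeated runs, they are likely stuck rather than merely slow.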
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Thu, Nov 25, 2021 at 9:50 AM Dilip Kumar wrote:
> > Does that shed any light?
>
> Seems like some of the processes are taking a long time or are stuck
> while reading/writing SLRU pages, and because of that, while creating
> a new connection the backend process is not able to check the
> transaction status (from pg_xact) of the pg_class tuple and gets
> stuck, or takes a long time, in startup.
On Thu, Nov 25, 2021 at 9:50 AM Dilip Kumar wrote:
>
> On Thu, Nov 25, 2021 at 8:58 AM James Sewell
> wrote:
> >>
> >> The hypothesis I'm thinking of is that incoming sessions are being blocked
> >> somewhere before they can acquire a ProcArray entry;
nCacheInitializePhase3 ()
> 6 #17 0x008cbd56 in InitPostgres ()
> 5 #18 0x0000007a1cfe in PostgresMain ()
> 4 #19 0x0048a9c0 in ServerLoop ()
> 3 #20 0x0072f11e in PostmasterMain ()
> 2 #21 0x0048b854 in main ()
>
> Does that shed any light?
Seems like some of the processes are taking a long time or are stuck
while reading/writing SLRU pages, and because of that, while creating
a new connection the backend process is not able to check the
transaction status (from pg_xact) of the pg_class tuple and gets
stuck, or takes a long time, in startup.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
uld be the immediate cause of the problem.
+1, that will be the best place to start. Additionally, you can
enable DEBUG2 messages so that from the logs we can identify why it
could not continue recovery from the archive.
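One way to surface those DEBUG2 messages (an assumption about the setup; adjust for your configuration) is to raise log_min_messages in postgresql.conf and reload:

```
# postgresql.conf -- sketch; DEBUG2 is very verbose, revert after diagnosing
log_min_messages = debug2
```

A `pg_ctl reload` (or `SELECT pg_reload_conf();`) picks this up without a restart.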
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
ard.
>
/*
 * ApplyLauncherRegister
 *		Register a background worker running the logical replication launcher.
 */
void
ApplyLauncherRegister(void)
{
	BackgroundWorker bgw;

	if (max_logical_replication_workers == 0)
		return;
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
subscriber and
launches the apply worker if there is no worker corresponding to some
subscription.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
m-1 created it, then it seems like you changed the
primary_conninfo on m-2 before the m-1 promotion completed.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
to machine configuration - I'm waiting for that now (it is a very very
> old pentium3 machine ;) ).
One idea is that you can attach gdb to your process and call
MemoryContextStats(TopMemoryContext). This will show which context is
using how much memory. So basically you can call this function 2-3
times with some interval and see in which context the memory is
continuously increasing.
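That recipe can be sketched as below. The pid is hypothetical (take a real one from pg_stat_activity), and the command is echoed rather than executed here since it needs a live backend built with debug symbols; note that MemoryContextStats writes its report to the server's stderr/log, not to the gdb console.

```shell
# Hypothetical backend pid; find a real one via: SELECT pid FROM pg_stat_activity;
PG_PID=12345
# Attach non-interactively and dump per-context memory usage, then detach.
CMD="gdb -p $PG_PID -batch -ex 'call MemoryContextStats(TopMemoryContext)'"
echo "$CMD"
```

Repeating this a few times and diffing the reports shows which memory context keeps growing.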
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Wed, Apr 15, 2020 at 12:56 PM Pavel Stehule wrote:
> st 15. 4. 2020 v 7:32 odesílatel Dilip Kumar napsal:
>>
>> One of our customers tried to use XMLTABLE syntax without
>> row_expression, which works fine with ORACLE but doesn't work with
>> Postgres.
"')|| '"'));
IS_MIL  SRC
------  -------
0       1705562      <- output from the Oracle database
0       1708486
0       1706882
postgres[7604]=# SELECT upc.is_mil,TRIM(column_value) src
postgres-# FROM user_pool_clean upc
postgres-#,xmltable(('"'|| REPLACE( upc.my_auds, ',',
'","')|| '"'));
ERROR: syntax error at or near ")"
LINE 3: ... ,xmltable(('"'|| REPLACE( upc.my_auds, ',', '","')|| '"'));
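As a possible workaround (a sketch, not taken from the thread; it assumes my_auds is a plain comma-separated text column), PostgreSQL can split the value into rows without using XMLTABLE at all:

```sql
-- Rewrite of the Oracle query using string_to_array + unnest.
SELECT upc.is_mil, trim(s) AS src
FROM user_pool_clean upc,
     unnest(string_to_array(upc.my_auds, ',')) AS s;
```

This avoids the Oracle-only XMLTABLE form that takes no row expression.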
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
k_mem your plan has
switched from a parallel plan to a non-parallel plan. Basically,
earlier it was executed with 3 workers, and after it became a
non-parallel plan the execution time is 3x. For the analysis, can we
just reduce the values of parallel_tuple_cost and parallel_setup_cost
and see how it behaves?
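The suggested experiment can be sketched as the following session settings (values are illustrative, and the query is a placeholder for the one under analysis):

```sql
-- Make parallel plans look cheap so the planner chooses one again.
SET parallel_tuple_cost = 0;
SET parallel_setup_cost = 0;
EXPLAIN (ANALYZE) <your query>;  -- compare against the earlier parallel plan
```

These are session-local settings, so they do not affect other connections.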
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
On Tue, Nov 28, 2017 at 7:13 PM, Robert Haas wrote:
> On Tue, Nov 28, 2017 at 2:32 AM, Dilip Kumar
> wrote:
> > I think BitmapHeapScan should check whether the DSA is valid or not; if
> > the DSA is not valid then it should assume it's a non-parallel plan.
> >
> > Attached
I think BitmapHeapScan should check whether the DSA is valid or not; if
the DSA is not valid then it should assume it's a non-parallel plan.
Attached patch should fix the issue.
--
Regards,
Dilip Kumar
EnterpriseDB: http://www.enterprisedb.com
bug_fix_in_pbhs_when_dsa_not_initialized.patch
Description: Binary data