On Wed, 16 Jun 2021 at 12:09, Julien Rouhaud wrote:
>
> On Wed, Jun 16, 2021 at 12:02:52PM +0530, Atul Kumar wrote:
> >
> > Sometimes I run a Postgres query and it takes 30 seconds. Then, I
> > immediately run the same query and it takes 2 seconds. It appears that
> > Postgres has some sort of caching
On Wed, Jun 16, 2021 at 12:02:52PM +0530, Atul Kumar wrote:
>
> Sometimes I run a Postgres query and it takes 30 seconds. Then, I
> immediately run the same query and it takes 2 seconds. It appears that
> Postgres has some sort of caching. Can I somehow see what that cache
> is holding?
You can use the pg_buffercache extension to inspect shared buffers.
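A minimal sketch of that approach, assuming the pg_buffercache extension is
available (the query follows the example in its documentation; it shows
shared buffers only, not the OS page cache that RDS also relies on):

CREATE EXTENSION pg_buffercache;

-- top 10 relations by number of cached 8kB buffers
SELECT n.nspname, c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c
  ON b.relfilenode = pg_relation_filenode(c.oid)
 AND b.reldatabase IN (0, (SELECT oid FROM pg_database
                           WHERE datname = current_database()))
JOIN pg_namespace n ON n.oid = c.relnamespace
GROUP BY n.nspname, c.relname
ORDER BY 3 DESC
LIMIT 10;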
Hi,
I have a Postgres 10 instance on RDS.
Sometimes I run a Postgres query and it takes 30 seconds. Then, I
immediately run the same query and it takes 2 seconds. It appears that
Postgres has some sort of caching. Can I somehow see what that cache
is holding? Can I force all caches to be cleared for
Hi,
Please find below the details you asked for:
Relation size - 1986 MB
Table row count - 1407721
We have removed a few indexes.
Query -
QUERY PLAN
Limit (cost=0.43..5529.03 rows=10 width=37) (actual
time=0.974..12911.087 rows=10 loops=1)
Output: items._id
Buffers: shared hit=4838 read=3701
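For reference, the Buffers line above comes from running the statement with
the BUFFERS option; "shared hit" counts pages found in shared_buffers and
"read" counts pages fetched from the OS or disk, which is what separates the
warm run from the cold one. A sketch (the full query was elided in the
preview; only items._id is known from the plan):

EXPLAIN (ANALYZE, BUFFERS)
SELECT items._id FROM items LIMIT 10;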
At Tue, 15 Jun 2021 07:05:07 -0700 (MST), "email2ssk...@gmail.com"
wrote in
> I also had this problem when I had to recover the database after a failed
> switchover.
> This error is from the new primary server.
>
> < 2021-06-15 16:05:02.480 CEST > ERROR: requested starting point
> AF/7D00 on timeline 1 is not in this server's history
On Tue, Jun 15, 2021 at 09:53:45PM -0700, Dipanjan Das wrote:
>
> I am running "pg_basebackup -h -U postgres -D -X stream". It
> fails with either of the following two error messages:
> [...]
> WARNING: terminating connection because of crash of another server process
> DETAIL: The postmaster has commanded this server process to roll back the
> current transaction and exit, because another server process exited
> abnormally and possibly corrupted shared memory.
Hi,
I am running "pg_basebackup -h -U postgres -D -X stream". It
fails with either of the following two error messages:
ERROR: Backup failed copying files.
DETAILS: data transfer failure on directory
'/mnt/data/barman/base/20210615T212304/data'
pg_basebackup error:
pg_basebackup: initiating base backup, waiting for checkpoint to complete
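For reference, the general shape of the command (the host and directory
values were elided in the preview; the placeholders here are illustrative):

pg_basebackup -h <primary-host> -U postgres -D <empty-target-dir> -X stream -v -P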
> On Jun 15, 2021, at 17:30, Peter Geoghegan wrote:
> It pretty much works by making the WAL sender process on the primary
> look like it holds a snapshot that's as old as the oldest snapshot on
> the replica.
>
> A replica can block VACUUM on the primary *directly* by holding a
> table-level lock
On Tue, Jun 15, 2021 at 5:24 PM Christophe Pettus wrote:
> When a replica sends a hot_standby_feedback message to the primary, does that
> create an entry in the primary's lock table, or is it flagged to autovacuum
> some other way?
It pretty much works by making the WAL sender process on the primary
look like it holds a snapshot that's as old as the oldest snapshot on
the replica.
When a replica sends a hot_standby_feedback message to the primary, does that
create an entry in the primary's lock table, or is it flagged to autovacuum
some other way?
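A hedged way to observe this: with hot_standby_feedback enabled on the
standby, the primary reports the standby's xmin on the walsender's row in
pg_stat_replication rather than as an entry in the lock table:

-- on the standby (a reload is enough, no restart needed):
--   hot_standby_feedback = on
-- then, on the primary:
SELECT application_name, backend_xmin
FROM pg_stat_replication;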
I saw that problem when I was running the query from DBeaver.
Got my answer.
Thanks & Regards.
On Tue, Jun 15, 2021 at 12:18 PM Pavel Stehule
wrote:
>
>
> On Tue, 15 Jun 2021 at 21:07, Tom Lane wrote:
>
>> AI Rumman writes:
>> > I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT
>> > function when I am concatenating double precision and int with a separator.
Do you have "recovery_target_timeline=latest" configured in your
recovery.conf or postgresql.conf? It goes in recovery.conf on versions up
to 11, and in postgresql.conf on 12 and later.
Cheers,
Mateusz
On Tue, 15 Jun 2021 at 22:05, email2ssk...@gmail.com <
email2ssk...@gmail.com> wrote:
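For reference, the setting in question looks like this (up to version 11 it
lives in recovery.conf; from 12 on it lives in postgresql.conf, where
'latest' is also the default):

recovery_target_timeline = 'latest'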
On 6/15/21 1:55 PM, AI Rumman wrote:
I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT function
when I am concatenating double precision and int with a separator.
select concat('41.1'::double precision,':', 20);
Result:
41.1000000000000014:20
Value 41.1, which double precision converts to 41.1000000000000014.
I also had this problem when I had to recover the database after a failed
switchover.
This error is from the new primary server.
< 2021-06-15 16:05:02.480 CEST > ERROR: requested starting point
AF/7D00 on timeline 1 is not in this server's history
< 2021-06-15 16:05:02.480 CEST > DETAIL: This server's h
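One hedged way to compare the two servers when debugging this error is to
check the timeline each one is currently on (PostgreSQL 9.6 or later):

SELECT timeline_id FROM pg_control_checkpoint();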
On Tue, 15 Jun 2021 at 21:07, Tom Lane wrote:
> AI Rumman writes:
> > I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT
> > function when I am concatenating double precision and int with a separator.
>
> > select concat('41.1'::double precision,':', 20);
> >> Result:
> >> 41.1000000000000014:20
> On Tue, 15 Jun 2021 at 20:56, AI Rumman wrote:
> I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT
> function when I am concatenating double precision and int with a separator.
>
> select concat('41.1'::double precision,':', 20);
>> Result:
>> 41.1000000000000014:20
>
>
> Value 41.1, which double precision converts to 41.1000000000000014.
AI Rumman writes:
> I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT
> function when I am concatenating double precision and int with a separator.
> select concat('41.1'::double precision,':', 20);
>> Result:
>> 41.1000000000000014:20
What have you got extra_float_digits set to?
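The question matters because on PostgreSQL 10 float8 output depends on
extra_float_digits, and JDBC-based clients (DBeaver is mentioned elsewhere in
this thread) set it to 3 on connect. A sketch reproducing both outputs,
assuming PostgreSQL 10:

SET extra_float_digits = 0;
select concat('41.1'::double precision,':', 20);  -- 41.1:20
SET extra_float_digits = 3;
select concat('41.1'::double precision,':', 20);  -- 41.1000000000000014:20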
On 6/15/21 11:55 AM, AI Rumman wrote:
I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT
function when I am concatenating double precision and int with a separator.
select concat('41.1'::double precision,':', 20);
Result:
41.1000000000000014:20
Value 41.1, which double precision converts to 41.1000000000000014.
Hi
On Tue, 15 Jun 2021 at 20:56, AI Rumman wrote:
> I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT
> function when I am concatenating double precision and int with a separator.
>
> select concat('41.1'::double precision,':', 20);
>> Result:
>> 41.1000000000000014:20
>
>
>
I am using PostgreSQL 10 and seeing a strange behavior in the CONCAT function
when I am concatenating double precision and int with a separator.
select concat('41.1'::double precision,':', 20);
> Result:
> 41.1000000000000014:20
Value 41.1, which double precision converts to 41.1000000000000014.
Is that expected?
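If the extra digits are unwanted regardless of the client's
extra_float_digits setting, a hedged workaround is to make the text
conversion explicit before concatenating, e.g. via numeric or to_char:

select concat(('41.1'::double precision)::numeric, ':', 20);           -- 41.1:20
select concat(to_char('41.1'::double precision, 'FM990.0'), ':', 20);  -- 41.1:20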
>> < 2021-06-15 12:33:04.537 CEST > DEBUG: resetting unlogged relations:
>> cleanup 1 init 0
>
> Are you perhaps keeping your data in an UNLOGGED table? If so, resetting
> it to empty after a crash is exactly what's supposed to happen. The
> entire point of UNLOGGED is that the performance benefit comes at the
> cost of losing the table's contents after a crash.
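A minimal sketch of the trade-off being described (table name hypothetical):

-- UNLOGGED skips WAL, so crash recovery truncates the table by design
CREATE UNLOGGED TABLE fast_load (id int, payload text);
-- after loading, the table can be made crash-safe again (this rewrites it):
ALTER TABLE fast_load SET LOGGED;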
On Tue, 15 Jun 2021 19:16:41 +0530
Atul Kumar wrote:
> hi,
>
> I have an RDS instance with 2GB of RAM, 1 CPU, instance class - t2.small.
>
> If you need any more info, please let me know.
>
> And as you shared, I need to tweak
> random_page_cost/seq_page_cost/effective_cache_size. So please suggest
hi,
I have an RDS instance with 2GB of RAM, 1 CPU, instance class - t2.small.
If you need any more info, please let me know.
And as you shared, I need to tweak
random_page_cost/seq_page_cost/effective_cache_size. So please suggest
which parameter values I need to increase or decrease, as I am known w
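On RDS these GUCs are changed through the DB parameter group rather than
ALTER SYSTEM, but the current values can be checked first; a sketch:

SELECT name, setting, unit, source
FROM pg_settings
WHERE name IN ('random_page_cost', 'seq_page_cost', 'effective_cache_size');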
"paul.m...@lfv.se" writes:
> I get this error when running a SQL statement in my Java application.
> ERROR: Invalid memory alloc request size 1683636507
This is a pretty common symptom of corrupt data (specifically, that the
length word of a variable-length field is garbage). More than that
can't be said with the information available.
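A common (and hedged) way to narrow down which tuple is damaged is to force
detoasting row by row and trap the error; the table name here is
hypothetical, and a file-level backup should be taken first:

DO $$
DECLARE r record;
BEGIN
  FOR r IN SELECT ctid FROM mytable LOOP
    BEGIN
      -- casting the whole row to text detoasts every column
      PERFORM t::text FROM mytable t WHERE t.ctid = r.ctid;
    EXCEPTION WHEN OTHERS THEN
      RAISE NOTICE 'damaged tuple at ctid %', r.ctid;
    END;
  END LOOP;
END $$;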
"Holtgrewe, Manuel" writes:
> So it looks as if the database jumps back "half an hour" to ensure consistent
> data. Everything in between is lost.
Postgres does not lose committed data --- if it did, we'd consider that a
fairly serious bug. (Well, there are caveats of course. But most of them
On Tue, 15 Jun 2021 at 18:21, David G. Johnston
wrote:
> You probably avoid the complications by doing the above, but the amount
of bloat you are causing seems excessive.
>
> I’d suggest an approach where you use the table data to build DDL in a
form that does adhere to the limitations described
On Tuesday, June 15, 2021, Vijaykumar Jain
wrote:
>
>
> --- now since the lookup table is updated, a no-op update would get new
> shards for ids and rebalance them accordingly.
>
> test=# update t set id = id ;
> UPDATE 25
>
You probably avoid the complications by doing the above, but the amount of
bloat you are causing seems excessive.
Hi,
thanks for your answer.
Let me give some background. I have a postgres instance that serves as the data
storage for a web-based data analytics application. For some queries, I'm
seeing postgres going OOM because the query's memory use grows too large, and
subsequently the Linux kernel kills the postgres process.
On Tue, 15 Jun 2021 16:12:11 +0530
Atul Kumar wrote:
> Hi,
>
> I have Postgres 10 running on an RDS instance.
>
> I have the query below:
[...]
>
> So my question is: initially, when I run this query, it takes around 42
> seconds to complete, but later, after a few minutes, it completes in 2-3
> seconds.
>
> I
On 6/15/21 6:09 AM, paul.m...@lfv.se wrote:
Hi list,
I get this error when running a SQL statement in my Java application.
ERROR: Invalid memory alloc request size 1683636507
Location: File:
d:\pginstaller.auto\postgres.windows-x64\src\backend\utils\mmgr\mcxt.c,
Routine: MemoryContextAlloc, Line: 779
Hi list,
I get this error when running a SQL statement in my Java application.
ERROR: Invalid memory alloc request size 1683636507
Location: File:
d:\pginstaller.auto\postgres.windows-x64\src\backend\utils\mmgr\mcxt.c,
Routine: MemoryContextAlloc, Line: 779
Server SQLState: XX000
I think it has
On 6/15/21 5:42 AM, Holtgrewe, Manuel wrote:
Hi,
I have a database that is meant to have high performance for bulk insert
operations. I've attached my postgres.conf file.
However, I'm seeing the following behaviour. At around 12:04, I started
the database. Then, I did a bulk insert and that completed.
Hi,
I have a database that is meant to have high performance for bulk insert
operations. I've attached my postgres.conf file.
However, I'm seeing the following behaviour. At around 12:04, I started
the database. Then, I did a bulk insert and that completed. I then went on to
kill postgres
Hi,
I have Postgres 10 running on an RDS instance.
I have the query below:
select * from "op_KFDaBAZDSXc4YYts9"."UserFeedItems"
where (("itemType" not in ('WELCOME_POST', 'UPLOAD_CONTACTS',
'BROADCAST_POST')) and ("userId" = '5d230d67bd99c5001b1ae757' and
"is_deleted" in (true, false)))
order by "score
hi,
I was playing around with a setup of having a lookup table for partitioning.
Basically, I wanted to be able to rebalance partitions based on my lookup
table.
-- create a lookup and assign shard nos to ids
test=# create table pt(id int, sh int);
CREATE TABLE
test=# insert into pt select x, 1
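The INSERT is cut off in the preview; a plausible completion, assumed here
and sized to match the "UPDATE 25" seen earlier in the thread:

insert into pt select x, 1 from generate_series(1, 25) x;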