Hello,
Please find attached an updated patch.
>Flag isn't reset on error.
Corrected in the attached.
> + pgstat_reset_activityflag;
>Does this actually compile?
It does compile, but it has no effect. This has been corrected.
>snprintf()? I don't think you need to keep track of schemaname_len a
Hello,
>I think that you should add the flag or something which indicates whether this
>backend is running VACUUM or not, into PgBackendStatus.
>pg_stat_vacuum_progress should display the entries of only backends with that
>flag set true. This design means that you need to set the flag to true w
Hello,
Please check the attached patch, as the earlier one had a typo in the regression
test output.
>+#define PG_STAT_GET_PROGRESS_COLS 30
>Why did you use 30?
That comes from N_PROGRESS_PARAM * 3, where N_PROGRESS_PARAM = 10 is the
number of progress parameters of each type stored in shared memory.
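For context, the relationship between these constants could be sketched as
follows. The constant names follow the discussion above, but the grouping into
three parameter types is my reading of this thread, not something the patch text
here confirms:

```c
/* Sketch of the constants discussed above. N_PROGRESS_PARAM is the number
 * of progress parameters of each type kept in shared memory; multiplying
 * by 3 (one set of columns per parameter type) yields the 30 columns that
 * pg_stat_get_progress exposes. Illustrative only. */
#define N_PROGRESS_PARAM 10                                /* params per type */
#define PG_STAT_GET_PROGRESS_COLS (N_PROGRESS_PARAM * 3)   /* 3 types assumed */
```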
Hello Fujii-san,
>Here are some more review comments
Thank you for review. Please find attached an updated patch.
> You removed some empty lines, for example, in vacuum.h.
>Which seems useless to me.
This has been corrected in the attached patch.
>Why did you use an array to store the progress information o
Hello,
Please find attached a patch fixing the bugs reported by Thom and Sawada-san.
>* The progress of vacuum by autovacuum seems not to be displayed.
The progress is stored in shared variables during autovacuum. I guess the
reason they are not visible is that the entries are deleted as soon as
Hello Thom,
>Okay, I've just tested this with a newly-loaded table (1,252,973 of jsonb
>data),
Thanks a lot!
>but after it's finished, I end up with this:
>json=# select * from pg_stat_vacuum_progress;
>-[ RECORD 1 ]---+---
>pid | 5569
>total_pages | 217941
>scan
Hello,
Please find attached updated VACUUM progress checker patch.
The following has been accomplished in this patch:
1. Accounts for the index page count while calculating the total progress of VACUUM.
2. Provides a common location for storing progress parameters for any command. The
idea is that every command which needs t
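As a rough illustration of point 1, folding index pages into a single overall
progress fraction might look like the helper below. The function name and
parameters are mine for illustration, not the patch's API:

```c
#include <stdint.h>

/* Illustrative only: combine heap and index page counts into one overall
 * VACUUM progress fraction, as described in point 1 above. */
static double
vacuum_progress_fraction(uint64_t heap_done, uint64_t heap_total,
                         uint64_t index_done, uint64_t index_total)
{
    uint64_t total = heap_total + index_total;

    if (total == 0)
        return 1.0;             /* nothing to do counts as complete */
    return (double) (heap_done + index_done) / (double) total;
}
```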
Hello,
>Autovacuum knows what % of a table needs to be cleaned - that is how it is
>triggered.
>When a vacuum runs we should calculate how many TIDs we will collect and
>therefore how many trips to the indexes we need for given memory.
>We can use the VM to find out how many blocks we'll need t
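The estimate suggested in the quote above can be sketched as follows. The
6-byte TID size matches PostgreSQL's ItemPointerData, but the function and its
parameters are illustrative assumptions, not code from the patch:

```c
#include <stdint.h>

/* Sketch of the suggestion above: if the TIDs of dead tuples must fit in
 * a fixed memory budget, the number of index-cleanup passes a VACUUM
 * needs is the ceiling of dead_tuples / TIDs-per-pass. */
static uint64_t
estimated_index_passes(uint64_t dead_tuples, uint64_t work_mem_bytes)
{
    uint64_t tids_per_pass = work_mem_bytes / 6;   /* ItemPointerData is 6 bytes */

    if (tids_per_pass == 0)
        return dead_tuples;    /* degenerate budget: one TID at a time */
    return (dead_tuples + tids_per_pass - 1) / tids_per_pass;   /* ceil */
}
```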
Hello,
>When we're in Phase2 or 3, don't we need to report the number of total page
>scanned or percentage of how many table pages scanned, as well?
The total heap pages scanned need to be reported in phases 2 and 3 as well. A
complete progress report needs to include numbers from each phase when applicable.
Hello,
>Naming the GUC pgstat* seems a little inconsistent.
Sorry, there is a typo in the mail. The GUC name is 'track_activity_progress'.
>Also, adding the new GUC to src/backend/utils/misc/postgresql.conf.sample
>might be helpful
Yes. I will update.
Thank you,
Rahila Syed
Hello,
>TBH, I think that designing this as a hook-based solution is adding a whole
>lot of complexity for no value. The hard parts of the problem are collecting
>the raw data and making the results visible to users, and both of those
>require involvement of the core code. Where is the benefi
Hello,
>Unless I am missing something, I guess you can still keep the actual code that
>updates counters outside the core if you adopt an approach that Simon suggests.
Yes. The code that extracts progress information from VACUUM and stores it in
shared memory can remain outside core even with pg_stat_act
Hello,
>There's no need to add those curly braces, or to subsequent if blocks
Yes, those were added by mistake.
>Also, is this patch taking the visibility map into account for its
>calculations?
Yes, it subtracts skippable/all-visible pages from the total pages to be scanned.
For each page processed
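The subtraction described above can be sketched as a trivial helper. The names
are illustrative, not the patch's identifiers:

```c
#include <stdint.h>

/* Sketch of the calculation described above: the pages VACUUM actually
 * expects to scan, after subtracting pages the visibility map marks
 * all-visible (and therefore skippable). */
static uint64_t
pages_to_scan(uint64_t total_pages, uint64_t all_visible_pages)
{
    if (all_visible_pages > total_pages)
        return 0;               /* defensive: stale visibility-map count */
    return total_pages - all_visible_pages;
}
```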
Hello,
>Maybe, For DBAs,
>It might be better to show vacuum progress in pg_stat_activity.
>(if we'd do, add a free-style column like "progress" ?) This column might also
>be able to use for other long time commands like ANALYZE, CREATE/RE INDEX and
>COPY. To realize this feature, we certainly ne
Hello,
Please find attached a patch. As discussed, a flag denoting compression and the
presence of a hole in the block image has been added to XLogRecordImageHeader
rather than the block header.
Following are WAL numbers based on the attached test script posted by Michael
earlier in the thread.
Hello,
>Are there any other flag bits that we should or are planning to add into WAL
>header newly, except the above two? If yes and they are required by even a
>block which doesn't have an image, I will change my mind and agree to add
>something like chunk ID to a block header.
>But I guess t
Hello,
>It would be good to get those problems fixed first. Could you send an updated
>patch?
Please find attached an updated patch with the WAL replay error fixed. The patch
follows the chunk ID approach for the xlog format.
Following are brief measurement numbers.
Hello,
>I've not read this logic yet, but ISTM there is a bug in that new WAL format
>because I got the following error and the startup process could not replay any
>WAL records when I set up replication and enabled wal_compression.
>LOG: record with invalid length at 0/3B0
>LOG: record
rdDataHeader[Short|Long]
block data
block data
...
main data
I will post a patch based on this.
Thank you,
Rahila Syed
-Original Message-
From: Andres Freund [mailto:and...@2ndquadrant.com]
Sent: Monday, February 16, 2015 5:26 PM
To: Syed, Rahila
Cc: Michael Paquier; Fujii Masao;
Hello,
Thank you for reviewing and testing the patch.
>+ /* leave if data cannot be compressed */
>+ if (compressed_len == 0)
>+ return false;
>This should be < 0, pglz_compress returns -1 when compression fails.
>
>+ if (pglz_decompress(block_image, bkpb->
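A minimal stand-in for the corrected check might look like this. It mimics only
pglz_compress()'s return-value convention (-1 on failure), not its real
signature; the helper name is an illustrative assumption:

```c
#include <stdint.h>

/* Sketch of the corrected bail-out test: pglz_compress() reports failure
 * by returning -1, not 0, so the check must be "< 0" rather than "== 0". */
static int
compression_succeeded(int32_t compressed_len)
{
    if (compressed_len < 0)     /* pglz_compress returns -1 on failure */
        return 0;               /* leave: data cannot be compressed */
    return 1;
}
```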
Thank you for comments. Please find attached the updated patch.
>This patch fails to compile:
>xlogreader.c:1049:46: error: extraneous ')' after condition, expected a
>statement
>blk->with_hole && blk->hole_offset <=
> 0))
This has been rectified.
>Note
>IMO, we should add details about how this new field is used in the comments on
>top of XLogRecordBlockImageHeader, meaning that when a page hole is present we
>use the compression info structure and when there is no hole, we are sure that
>the FPW raw length is BLCKSZ meaning that the two byte
in parentheses.
I have also corrected the code formatting mistakes noted above.
Thank you,
Rahila Syed
-Original Message-
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Syed, Rahila
Sent: Monday, February 09, 2015 6:58 PM
To: Michael Paquier; Fujii Masao
Cc: Post
all the
>above code to it for more simplicity
This is also implemented in the patch attached.
Thank you,
Rahila Syed
-Original Message-
From: Michael Paquier [mailto:michael.paqu...@gmail.com]
Sent: Friday, February 06, 2015 6:00 PM
To: Fujii Masao
Cc: Syed, Rahila; PostgreSQL mailing lis
, 2015 12:46 AM
To: Syed, Rahila
Cc: PostgreSQL mailing lists
Subject: Re: [HACKERS] [REVIEW] Re: Compression of full-page-writes
On Thu, Feb 5, 2015 at 11:06 PM, Syed, Rahila wrote:
>>/*
>>+* We recheck the actual size even if pglz_compress() report success,
>>+* because
Hello,
>/*
>+* We recheck the actual size even if pglz_compress() report success,
>+* because it might be satisfied with having saved as little as one byte
>+* in the compressed data.
>+*/
>+ *len = (uint16) compressed_len;
>+ if (*len >= orig_len - 1)
>+ return false;
>
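The quoted recheck can be sketched in isolation as follows. This is a
stand-alone helper illustrating the condition, not the patch's actual code:

```c
#include <stdint.h>

/* Sketch of the recheck quoted above: even when pglz_compress() reports
 * success, compression is only worthwhile if it saves at least two bytes,
 * so a result of orig_len - 1 or more is rejected (return false). */
static int
compression_worthwhile(uint16_t compressed_len, uint16_t orig_len)
{
    return compressed_len < orig_len - 1;
}
```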
-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Michael Paquier
Sent: Wednesday, November 26, 2014 1:55 PM
To: Alvaro Herrera
Cc: Andres Freund; Robert Haas; Fujii Masao; Rahila Syed; Rahila Syed;
PostgreSQL-development
Subject: Re: [HACKERS] [REVIEW] Re: Compression
put at the time of
recovery. Hence, memory for uncompressedPages needs to be allocated even if
fpw=on, which is not the case for compressedPages.
Thank you,
Rahila Syed
-Original Message-
From: Fujii Masao [mailto:masao.fu...@gmail.com]
Sent: Monday, October 27, 2014 6:50 PM
To: Rahila S
Hello,
>Please find attached the patch to compress FPW.
Sorry, I had forgotten to attach it. Please find the patch attached.
Thank you,
Rahila Syed
From: pgsql-hackers-ow...@postgresql.org
[mailto:pgsql-hackers-ow...@postgresql.org] On Behalf Of Rahila Syed
Sent: Monday, September 22, 2014 3:1