> number we can
> write, we don't need to flush it yet. If that sounds correct, we
> should fix the LZ4 code the same way.
>
I agree; the patch looks good to me.
Yes, the LZ4 flush check should also be fixed. Please find the attached
patch to fix that.
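For clarity, the corrected condition is essentially of this shape (a
minimal sketch with hypothetical names; the attached patch is
authoritative):

/*
 * Sketch: flush only when the pending output would no longer fit in
 * the sink's buffer. bytes_done and next_len are placeholder names.
 */
if (mysink->bytes_done + next_len > mysink->base.bbs_buffer_length)
    bbsink_archive_contents(mysink->base.bbs_next, mysink->bytes_done);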
> rly I think we should be testing USE_ZSTD, not HAVE_LIBZSTD.
>
I reviewed the patch, and it seems to be capturing and replacing all the
places of HAVE_LIB* with USE_* correctly.
Just curious, apart from consistency, do you see other problems as well
when testing one vs the other?
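For illustration, a minimal sketch of why the two macros differ
(assuming the usual configure convention that HAVE_LIBZSTD only records
that the library was found, while USE_ZSTD records that this build was
configured to use it):

#ifdef USE_ZSTD
#include <zstd.h>
#endif

static bool
zstd_available(void)
{
#ifdef USE_ZSTD
    return true;    /* this build was configured with ZSTD support */
#else
    return false;   /* library may exist on the system, but is unused */
#endif
}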
Regards,
Jeevan Ladhe
OK, got it. Thanks for your insights.
Regards,
Jeevan Ladhe
On Tue, 8 Mar 2022 at 22:23, Robert Haas wrote:
> On Tue, Mar 8, 2022 at 11:32 AM Jeevan Ladhe
> wrote:
> > I reviewed the patch, and it seems to be capturing and replacing all the
> > places of HAVE_LIB* wi
throw a warning and fall back to
non-parallel behavior?
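For reference, a minimal sketch of how the library option quoted below
is set, including the error check that a warning-and-fallback could
hang off (not the actual patch):

#include <stdio.h>
#include <zstd.h>

static void
set_workers(ZSTD_CCtx *cctx, int nworkers)
{
    /* request parallel compression from libzstd */
    size_t ret = ZSTD_CCtx_setParameter(cctx, ZSTD_c_nbWorkers, nworkers);

    /* fails if libzstd was built without multithreading support */
    if (ZSTD_isError(ret))
        fprintf(stderr, "parallel compression unavailable: %s\n",
                ZSTD_getErrorName(ret));
}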
Regards,
Jeevan Ladhe
On Mon, 14 Mar 2022 at 21:41, Dipesh Pandit wrote:
> Hi,
>
> I tried to implement support for parallel ZSTD compression. The
> library provides an option (ZSTD_c_nbWorkers) to specify the
> number of
ls seem to be giving redundant information, but others might have
different thoughts.
The patch itself looks good otherwise. Also, make check and the
postgres_fdw tests look good.
Regards,
Jeevan Ladhe
+ /* We have to sleep even after a successfull dump */
s/successfull/successful
Regards,
Jeevan Ladhe
On Wed, Dec 1, 2021 at 3:45 PM Daniel Gustafsson wrote:
> > On 1 Dec 2021, at 10:59, Jeevan Ladhe
> wrote:
>
> > Was wondering if we had any barriers to getting this committed.
>
> No barrier other than available time to, I will try to get to it shortly.
>
Great!
Hi Tushar,
You need to apply Robert's v10 patches 0002, 0003, and 0004 before
applying the lz4 patch (v8 version).
Please let me know if you still face any issues.
Regards,
Jeevan Ladhe
On Mon, Dec 27, 2021 at 7:01 PM tushar
wrote:
> On 11/22/21 11:05 PM, Jeevan Ladhe wrote:
>
uses int type. I felt the function parameters should be
> int64. Attached patch fixes the same.
+1 for the change. The patch LGTM.
Regards,
Jeevan Ladhe
r output buffer needed by the LZ4F_compressEnd().
> How about instead using memset() to zero the whole thing and then
> omitting the zero initializations? That seems like it would be less
> fragile, if the upstream structure definition ever changes.
>
Made this change.
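For reference, the change amounts to something like this (a sketch; the
compression level is just an example of a non-zero field):

#include <string.h>
#include <lz4frame.h>

static void
init_prefs(LZ4F_preferences_t *prefs, int level)
{
    /*
     * Zero the whole structure first so that any fields added by future
     * lz4 versions default to 0, then set only the non-zero members.
     */
    memset(prefs, 0, sizeof(LZ4F_preferences_t));
    prefs->compressionLevel = level;
}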
Please review the patch, and let me know your comments.
Regards,
Jeevan Ladhe
lz4_compress_v5.patch
Description: Binary data
way gzip allows us to use our own alloc and free functions
by means of providing the function pointers for them. Unfortunately,
no, LZ4 does not have that kind of provision. Maybe that makes a
good proposal for the LZ4 library ;-).
I cannot think of another solution to it right away.
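For comparison, the zlib provision referred to above looks like this (a
minimal sketch, with malloc/free standing in for the custom allocator):

#include <stdlib.h>
#include <zlib.h>

/* custom hooks; zlib routes all of its internal allocations here */
static voidpf
my_zalloc(voidpf opaque, uInt items, uInt size)
{
    return malloc((size_t) items * size);
}

static void
my_zfree(voidpf opaque, voidpf address)
{
    free(address);
}

static int
init_deflate(z_stream *zs)
{
    zs->zalloc = my_zalloc;
    zs->zfree = my_zfree;
    zs->opaque = NULL;
    return deflateInit(zs, Z_DEFAULT_COMPRESSION);
}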
Regards,
Jeevan Ladhe
r them. Unfortunately,
> > no, LZ4 does not have that kind of provision. Maybe that makes a
> > good proposal for LZ4 library ;-).
> > I cannot think of another solution to it right away.
>
> OK. Will give it some thought.
I have started a thread[1] in the LZ4 community for this, but so far
there has been no reply.
Regards,
Jeevan Ladhe
[1]
https://groups.google.com/g/lz4c/c/WnJkKwBWlcM/m/zszrla2mBQAJ?utm_medium=email&utm_source=footer
lz4_compress_v6.patch
Description: Binary data
calling the cleanup() callback
from PG_CATCH instead of PG_FINALLY()? But even if we call it from
PG_CATCH, it will have a similar problem for gzip and other sinks that
may not need a custom cleanup() callback when an error occurs before
the backup can finish normally.
I have implemented the cleanup callback bbsink_lz4_cleanup() in the
attached patch.
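For clarity, the two control flows under discussion, as a sketch
(do_compression_work() and sink are placeholders):

PG_TRY();
{
    do_compression_work(sink);
}
PG_FINALLY();
{
    /*
     * PG_FINALLY runs this block on both the success and the error
     * path; PG_CATCH would run it only on error.
     */
    bbsink_lz4_cleanup(sink);
}
PG_END_TRY();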
Please have a look and let me know of any comments.
Regards,
Jeevan Ladhe
On Fri, Oct 29, 2021 at 6:54 PM Robert Haas wrote:
> On Fri, Oct 29, 2021 at 8:59 AM Jeevan Ladhe
> wrote:>
> >
internal buffer? But then we are calling LZ4F_freeCompressionContext()
immediately after the LZ4F_compressEnd() call. I might be missing
something; I will be happy to get more insights.
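To make the sequence concrete, a sketch (outbuf and outcap are
placeholders):

#include <lz4frame.h>

static size_t
finish_frame(LZ4F_cctx *ctx, void *outbuf, size_t outcap)
{
    /*
     * LZ4F_compressEnd() flushes whatever the context still buffers
     * internally and writes the frame footer, after which the context
     * holds no pending data and can be freed immediately.
     */
    size_t n = LZ4F_compressEnd(ctx, outbuf, outcap, NULL);

    LZ4F_freeCompressionContext(ctx);
    return n;   /* bytes written, or an LZ4F error code */
}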
Regards,
Jeevan Ladhe
On Fri, Nov 5, 2021 at 1:21 PM wrote:
>
>
> ‐‐‐ Original Message ‐‐‐
>
On Fri, Nov 19, 2021 at 7:37 AM Michael Paquier wrote:
> On Thu, Nov 18, 2021 at 07:54:37PM +0530, Jeevan Ladhe wrote:
> > In dir_open_for_write() I observe that we are writing the header
> > and then calling LZ4F_compressEnd() in case there is an error
> > while writing t
eed to do this. There's a thread on
> "Teach pg_receivewal to use lz4 compression" which concluded that
> supporting different compression levels was unnecessary."
-
Regards,
Jeevan Ladhe
On Wed, Nov 17, 2021 at 3:17 AM Robert Haas wrote:
> On Mon, Nov 15,
On Wed, Nov 24, 2021 at 10:55 AM Michael Paquier
wrote:
> On Mon, Nov 22, 2021 at 09:02:47AM -0500, Robert Haas wrote:
> > On Mon, Nov 22, 2021 at 12:46 AM Jeevan Ladhe
> > wrote:
> >> Fair enough. But, still I have a doubt in mind what benefit would that
> >>
Hi Daniel,
Was wondering if we had any barriers to getting this committed.
I believe it will be good to have this change, and it will also be more
in line with the other check functions.
Regards,
Jeevan
On Thu, Oct 21, 2021 at 3:51 PM Daniel Gustafsson wrote:
> > On 14 Jul 2021, at 07:27, Sura
of above:
-- check default partition overlap
INSERT INTO list_parted2 VALUES('X');
CREATE TABLE fail_part PARTITION OF list_parted2 FOR VALUES IN ('W', 'X',
'Y');
ERROR: updated partition constraint for default partition
"list_parted2_def" would be violated by some row
Regards,
Jeevan Ladhe
l; end; end loop; end;$$;
>
I ran your testcase and on my VM I get numbers like 3593.801 ms
without patch and 3593.801 with the patch, average of 5 runs each.
The runs were quite consistent.
Further, make check is also passing.
Regards,
Jeevan Ladhe
d
I also get similar numbers: 10136.839 ms with the patch vs. 12900.044 ms
without the patch.
Regards,
Jeevan Ladhe
Correction -
On Fri, Sep 27, 2019 at 3:11 PM Jeevan Ladhe
wrote:
> I ran your testcase and on my VM I get numbers like 3593.801 ms
> without patch and 3593.801 with the patch, average of 5 runs each.
> The runs were quite consistent.
>
3593.801 ms without patch and 3213.809 ms with the patch.
neID endtli)
Regards,
Jeevan Ladhe
On Wed, Oct 16, 2019 at 6:49 PM Asif Rehman wrote:
>
>
> On Mon, Oct 7, 2019 at 6:35 PM Asif Rehman wrote:
>
>>
>>
>> On Mon, Oct 7, 2019 at 6:05 PM Robert Haas wrote:
>>
>>> On Mon, Oct 7, 2019 at 8:48 AM Asif Rehman
c reason to do
> this?
>
+1.
The patch does not apply on master; it needs a rebase.
Also, I got some whitespace errors.
I think you can also refactor the function tryExecuteStatement() to call
your newly added function executeStatementExpect(), passing an
additional flag, something like "errorOK".
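The refactoring I have in mind is roughly the following (a sketch; the
errorOK flag is my suggestion, not existing code):

static bool
executeStatementExpect(PGconn *con, const char *sql,
                       const ExecStatusType expected, bool errorOK)
{
    PGresult   *res;
    bool        ok;

    res = PQexec(con, sql);
    ok = (PQresultStatus(res) == expected);
    if (!ok && !errorOK)
    {
        /* unexpected result is fatal unless the caller tolerates it */
        fprintf(stderr, "%s", PQerrorMessage(con));
        exit(1);
    }
    PQclear(res);
    return ok;
}

static void
tryExecuteStatement(PGconn *con, const char *sql)
{
    (void) executeStatementExpect(con, sql, PGRES_COMMAND_OK, true);
}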
Regards,
Jeevan Ladhe
ect(PGconn *con, const char *sql, const ExecStatusType
expected)
pgbench-buffer-1.patch:19: trailing whitespace.
{
pgbench-buffer-1.patch:20: trailing whitespace.
PGresult *res;
pgbench-buffer-1.patch:21: trailing whitespace.
error: patch failed: src/bin/pgbench/pgbench.c:599
error: src/bin/pgbench/pgbench.c: patch does not apply
$
Regards,
Jeevan Ladhe
The pgbench TAP tests are also running fine.
Regards,
Jeevan Ladhe
On Tue, Oct 22, 2019 at 8:57 PM Fabien COELHO wrote:
>
> >> The patch does not apply on master, needs rebase.
> >>
> >> Hmmm. "git apply pgbench-buffer-1.patch" works for me on current master
ken
will still be available via the WAL files and, as in the normal restore
process, will be replayed from the checkpoint onwards up to a consistent
point.
My two cents!
Regards,
Jeevan Ladhe
On Sat, Jul 20, 2019 at 11:22 PM vignesh C wrote:
> Hi Jeevan,
>
> The idea is very nice.
> When Inser
building
here for constructing the bitmap, where we scan all the files one by one.
Maybe for the initial version we can go with the current proposal that
Robert has suggested, and add this switch at a later point as an
enhancement.
- My thoughts.
Regards,
Jeevan Ladhe
.
I am not sure of the thought behind this and might be wrong in making
the above assumption. But I tried to look around a bit in the hackers
emails and could not locate any related discussion.
I have attached a patch (mark_lsn_0_invalid.patch) that makes the above
changes.
Thoughts?
Regards,
Jeevan Ladhe
function pg_lsn_in_internal(), so the function decode_lsn_internal()
from patch 0001 will go away and the dependent code needs to be
modified.
I shall review the rest of the patches, and post the comments.
Regards,
Jeevan Ladhe
[1]
https://www.postgresql.org/message-id
Hi Michael,
Thanks for your inputs, really appreciated.
On Tue, Jul 30, 2019 at 9:42 AM Michael Paquier wrote:
> On Mon, Jul 29, 2019 at 10:55:29PM +0530, Jeevan Ladhe wrote:
> > I am attaching a patch that makes sure that *have_error is set to false
> in
> > pg_lsn_in_interna
On Tue, Jul 30, 2019 at 6:06 PM Robert Haas wrote:
> On Tue, Jul 30, 2019 at 4:52 AM Jeevan Ladhe
> wrote:
> > My only concern was something that we internally treat as invalid, why do
> > we allow, that as a valid value for that type. While I am not trying to
> > reinv
any further commits have removed references to assignments from
float8in_internal_opt_error(), evaluate it, and set out a patch.
This is one of the reasons I was saying it can be taken as a good
practice to have a function that accepts an out parameter always set it
to some value.
Regards,
Jeevan Ladhe
ror() and
that
set the flag:
{code}
#define RETURN_ERROR(throw_error) \
do { \
    if (have_error) { \
        *have_error = true; \
        return 0.0; \
    } else { \
        throw_error; \
    } \
} while (0)
{code}
My patch is on the way, thanks.
Regards,
Jeevan Ladhe
the flag issue including
pg_lsn_in_internal() and others.
I have also fixed a couple of other functions,
numeric_div_opt_error() and numeric_mod_opt_error(), which are
basically callers of make_result_opt_error().
Kindly do let me know if you have any comments.
Regards,
Jeevan Ladhe
0001-Make-the-have_e
Hi Michael,
On Thu, Aug 1, 2019 at 1:51 PM Michael Paquier wrote:
> On Thu, Aug 01, 2019 at 12:39:26PM +0530, Jeevan Ladhe wrote:
> > Here is a patch that takes care of addressing the flag issue including
> > pg_lsn_in_internal() and others.
>
> Your original patch for pg_
Sure Michael, in the attached patch I have reverted the checks from
pg_lsn_in_internal() and added Assert() per my original patch.
Regards,
Jeevan Ladhe
0001-Make-have_error-initialization-more-defensive-v2.patch
Description: Binary data
> but not appear literally in the call is quite confusing.
>
Couldn't agree more. This is where I also got confused initially and
thought the flag was unused.
Good idea. This needs some changes only in float.c.
Please find the attached patch with the changes to RETURN_ERROR and its
references.
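For reference, the reworked macro would look something like this (the
attached patch is authoritative):

#define RETURN_ERROR(throw_error, have_error) \
do { \
    /* have_error is now an explicit parameter, visible at call sites */ \
    if (have_error) { \
        *have_error = true; \
        return 0.0; \
    } else { \
        throw_error; \
    } \
} while (0)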
>
> Thanks. Committed after applying some tweaks to it. I have noticed
> that you forgot numeric_int4_opt_error() in the set.
Oops. Thanks for the commit, Michael.
Regards,
Jeevan Ladhe
if I am missing something, but shouldn't it be just:
len = cnt;
As I said in my previous email, we no longer need
+decode_lsn_internal()
as it is already taken care of by the introduction of
pg_lsn_in_internal().
Regards,
Jeevan Ladhe
Hi Robert,
On Fri, Aug 9, 2019 at 6:40 PM Robert Haas wrote:
> On Thu, Aug 8, 2019 at 8:37 PM Jeevan Ladhe
> wrote:
> > + if (!XLogRecPtrIsInvalid(previous_lsn))
> > + appendStringInfo(labelfile, "PREVIOUS WAL LOCATION: %X/%X\n",
> > +
ents above the macro call at both places are not necessary, as the
macro name itself is self-explanatory.
--
+ /*
+ * If file is truncated, then we will hit
+ * end-of-file error in which case we don't
+ * want to error out, instead just pad it with
+ * zeros.
+ */
+ if (feof(fp))
The if block does not do the truncation right away, so I think the
comment above should be reworded to explain why we reset cnt.
Regards,
Jeevan Ladhe
the
pg_combinebackup, but I think that kills the purpose of having the
pg_combinebackup utility.
Thoughts?
Regards,
Jeevan Ladhe
>
> Fixed both comments in the attached patch.
>
Thanks, the patch looks good to me.
Regards,
Jeevan Ladhe
ovided
as input in the serial order the backups were taken. This is a bit
confusing unless clarified in the pg_combinebackup help output or
documentation. I think we should clarify it in both places.
-----
I think scan_directory() should rather be renamed to do_combinebackup().
Regards,
Jeevan
Hi Robert,
On Sat, Aug 31, 2019 at 8:29 AM Robert Haas wrote:
> On Thu, Aug 29, 2019 at 10:41 AM Jeevan Ladhe
> wrote:
> > Due to the inherent nature of pg_basebackup, the incremental backup also
> > allows taking backup in tar and compressed format. But, pg_combineb
helps keep track of
all the extension objects together, create them at once, and drop them
at once with the “DROP EXTENSION” command. Here[3] is the complete
documentation for extensions.
Regards,
Jeevan Ladhe
[1] https://www.postgresql.org/docs/current/extend-pgxs.html
[2]
https://www.postgresql.org/docs/current
in it) and throws an error.
Similarly, I see other places that free this global connection but do
not set it to NULL; all of those should be fixed. I have attached a
patch, ecpg_connection_ptr_issue_fix.patch, to fix these places.
Regards,
Jeevan Ladhe
ecpg_connection_ptr_issue_fix.patch
> Thanks for spotting and fixing. I will push the patch as soon as I'm
> online again.
>
Thanks Michael for taking care of this.
Regards,
Jeevan Ladhe.
+1 for the changes.
>1. Why is "wanted >=%u" any better than "wanted at least %u"? IMO, the
>wording as opposed to >= symbol in the user-facing messages works
>better.
I think I agree with Bharath on this: "wanted at least %u" sounds
better to us.
nting the message every time we create the context for
decoding a slot using pg_logical_slot_get_changes() seems excessive.
I am wondering: instead of LOG messages, should we mark these as
DEBUG1 in SnapBuildFindSnapshot() and CreateDecodingContext(),
respectively? I can produce a patch for this if we agree.
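Concretely, the change would be on the order of the following (a
sketch; the message text and slot variable are illustrative only):

ereport(DEBUG1,
        (errmsg("starting logical decoding for slot \"%s\"",
                NameStr(slot->data.name))));

so that repeatedly creating a decoding context no longer floods the
server log at the default level.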
Regards,
Jeevan Ladhe
ry time interval I read using
pg_logical_slot_peek_changes(), make sure to consume them to a consistent
state, and only then use pg_logical_slot_get_changes() to advance the slot.
Regards,
Jeevan Ladhe
.
[1]
https://fuchsia.googlesource.com/fuchsia/+/refs/heads/main/zircon/tools/zbi/zbi.cc
Regards,
Jeevan Ladhe
On Wed, Jan 5, 2022 at 10:24 PM tushar
wrote:
>
>
> On Tue, Dec 28, 2021 at 1:12 PM Jeevan Ladhe <
> jeevan.la...@enterprisedb.com> wrote:
>
>> Hi Tushar,
}. I have attached a cleanup patch for that as well.
Please review and let me know your thoughts.
Regards,
Jeevan Ladhe
0001-gzip-tap-test-remove-extra-variable.patch
Description: Binary data
v10-0002-Add-a-LZ4-compression-method-for-server-side-compres.patch
Description: Binary data
On Sat, Jan 29, 2022 at 1:20 AM Robert Haas wrote:
> On Fri, Jan 28, 2022 at 12:48 PM Jeevan Ladhe
> wrote:
> > I have attached the latest rebased version of the LZ4 server-side
> compression
> > patch on the recent commits. This patch also introduces the compression
>
the decompression to stdout. Now, I have removed the '-m',
added '-d' for decompression, and also added the target file explicitly in
the command.
Regards,
Jeevan Ladhe
v11-0001-Add-a-LZ4-compression-method-for-server-side-compres.patch
Description: Binary data
the level range check there, similar to gzip.
Regards,
Jeevan Ladhe
v12-0001-Add-a-LZ4-compression-method-for-server-side-compres.patch
Description: Binary data
imilar to bbstreamer_lz4_compressor_content(), in
bbstreamer_lz4_decompressor_content() we can change len to avail_in.
Regards,
Jeevan Ladhe
On Thu, 10 Feb 2022 at 18:11, Dipesh Pandit wrote:
> Hi,
>
> > On Mon, Jan 31, 2022 at 4:41 PM Jeevan Ladhe <
> jeevan.la...@enterprisedb
> Jeevan, your v12 patch does not apply on HEAD; it requires a rebase.
Sure, please find the rebased patch attached.
Regards,
Jeevan
On Fri, 11 Feb 2022 at 14:13, Dipesh Pandit wrote:
> Hi,
>
> Thanks for the feedback, I have incorporated the suggestions
> and updated a new patch. PFA v2 patch.
Thanks Robert for the brevity :-)
Regards,
Jeevan Ladhe
On Fri, 11 Feb 2022, 20:31 Robert Haas, wrote:
> On Fri, Feb 11, 2022 at 7:20 AM Dipesh Pandit
> wrote:
> > > Sure, please find the rebased patch attached.
> >
> > Thanks, I have validated v2 patch on top of r
r offline help for testing the patch.
[1]
https://www.postgresql.org/message-id/6c3f1558-1e56-9946-78a2-c59340da1dbf%40enterprisedb.com
Regards,
Jeevan Ladhe
On Mon, 14 Feb 2022 at 21:30, Robert Haas wrote:
> On Sat, Feb 12, 2022 at 1:01 AM Shinoda, Noriyoshi (PN Japan FSIP)
> wrote:
> &
bumping the version number.
Patch 0002 is the client-side compression patch.
Regards,
Jeevan Ladhe
On Tue, 15 Feb 2022 at 22:24, tushar wrote:
> On 2/15/22 6:48 PM, Jeevan Ladhe wrote:
> > Please find the attached updated version of patch for ZSTD server side
> Thanks, Jeevan, I again
bRisF-9ocqYDcMng6iSijGj1EZX99PgXA%3D3VVbWuahog%40mail.gmail.com
[2]
https://www.postgresql.org/message-id/20220215175944.GY31460%40telsasoft.com
Regards,
Jeevan Ladhe
On Wed, 16 Feb 2022 at 21:46, Robert Haas wrote:
> On Wed, Feb 16, 2022 at 11:11 AM Alvaro Herrera
> wrote:
> > This is hard to int
compression.
Regards,
Jeevan Ladhe
improve-with-lz4-install-documentation.patch
Description: Binary data
Z4 and ZSTD.
OK, we will see; either Dipesh or I will take care of it.
Regards,
Jeevan Ladhe
On Thu, 17 Feb 2022 at 02:37, Robert Haas wrote:
> On Wed, Feb 16, 2022 at 12:46 PM Jeevan Ladhe
> wrote:
> > So, I went ahead and have now also implemented client side decompression
> >
Thanks Amul, this looks pretty straightforward. LGTM.
I have also run the regression tests on master and it seems good.
Regards,
Jeevan Ladhe
EATE TABLE and ALTER TABLE paths, so do the same here, and just
* don't do anything if we're not authorized to skip validation.
*/
This was originally discussed in thread[1] and commit:
f27a6b15e6566fba7748d0d9a3fc5bcfd52c4a1b
[1]
https://www.postgresql.org/message-id/flat/1238779931.11913728.1449143089410.JavaMail.yahoo%40mail.yahoo.com#f2d8318b6beef37dfff06baa9a1538b7
Regards,
Jeevan Ladhe
th any b >= 100 OR
a > 200 and a < 300 with any non-null b
OR a=300 with any b<200
Your particular tuple (300, 50) fits in the last arm of the OR, i.e.
(a=300 with any b<200).
So, IMHO, the range partitioning is behaving as expected.
Similarly, for the case-1 you mention above:
create table r1 partition of r for values from (100,0) to (200,100);
create table r3 partition of r for values from (0,100) to (100,200);
here, (100, 0) of r1 would overlap with (100, 200) of r3.
[1] https://www.postgresql.org/docs/current/sql-createtable.html
Regards,
Jeevan Ladhe
d066d8c in BackendStartup (port=0x55779e358380) at
postmaster.c:4210
#25 0x55779d062f9b in ServerLoop () at postmaster.c:1742
#26 0x55779d062734 in PostmasterMain (argc=3, argv=0x55779e3308b0) at
postmaster.c:1414
#27 0x55779cf5805f in main (argc=3, argv=0x55779e3308b0) at main.c:209
Regards,
variable is appropriate here, but maybe we can reword the comment to
something like:
+ /*
+  * We already have the next anticipated log segment number and the
+  * timeline, check if this WAL file is ready to be archived. If yes,
+  * skip the directory scan.
+  */
Regards,
Jeevan Ladhe
ached patch, now I get the following error:
*"All non-template0 databases must allow connections, i.e. their
pg_database.datallowconn must be true; database "mydb" has datallowconn set
to false."*
Regards,
Jeevan Ladhe
0001-Improve-the-pg_upgrade-error-message.patch
Description: Binary data
ink->base.bbs_next->bbs_buffer: 16537
input size to be compressed: 512
estimated size for compressed buffer by LZ4F_compressBound(): 262667
actual compressed size: 16
"
I will really appreciate any inputs, comments, or suggestions here.
Regards,
Jeevan Ladhe
[1] https://fossies.org/linux/
Thanks, Robert, for your response.
On Thu, Sep 9, 2021 at 1:09 AM Robert Haas wrote:
> On Wed, Sep 8, 2021 at 2:14 PM Jeevan Ladhe
> wrote:
> > To give an example, I put some logging statements, and I can see in the
> log:
> > "
> > bytes remaining in mysink->
Thanks for the newer set of patches, Robert!
I was wondering if we should change the bbs_buffer_length in bbsink to
be size_t instead of int, because that's what most of the compression
libraries have their length variables defined as.
Regards,
Jeevan Ladhe
On Mon, Sep 13, 2021 at 9:
e enough, and I'm not quite sure how we solve
> that problem, but there is probably a way to do it.
>
Yes, you are right here, and I could verify this fact with an experiment.
When autoflush is 1, the file gets less compressed, i.e. the compressed
file is larger than the one generated when autoflush is set to 0.
But as of now, I couldn't think of a solution, as we need to actually
advance the bytes written to the output buffer so that we can keep
writing into it.
Regards,
Jeevan Ladhe
-t server:/tmp/data_lz4 -Xnone --server-compression=lz4
Please let me know your thoughts.
Regards,
Jeevan Ladhe
On Mon, Sep 13, 2021 at 9:42 PM Robert Haas wrote:
> On Mon, Sep 13, 2021 at 7:19 AM Dilip Kumar wrote:
> > Seems like nothing has been done about the issue repor
On Tue, Sep 21, 2021 at 10:27 PM Robert Haas wrote:
> On Tue, Sep 21, 2021 at 9:08 AM Jeevan Ladhe
> wrote:
> > Yes, you are right here, and I could verify this fact with an experiment.
> > When autoflush is 1, the file gets less compressed i.e. the compressed
> file
>
LZ4F_compressBound(), ask the next sink to process the data so that
> we can empty the buffer.
>
Thanks for your comments, Robert.
Here is the patch addressing the comments, except the one regarding the
autoFlush flag setting.
Kindly have a look.
Regards,
Jeevan Ladhe
lz4_compress_v3.patch
Description: Binary data
have.
I went through each of the steps, and it appears that I have done them
correctly. I am still trying to debug and figure out where it is going
wrong. I am going to try hooking pg_basebackup up with the lz4 source
and debugging both.
Regards,
Jeevan Ladhe
Hi Robert,
I have fixed the autoFlush issue. Basically, I was wrongly initializing
the lz4 preferences in bbsink_lz4_begin_archive() instead of
bbsink_lz4_begin_backup(). I have fixed the issue in the attached
patch, please have a look at it.
Regards,
Jeevan Ladhe
On Fri, Sep 24, 2021 at 6:27
.
With the patch, the command sent is now:
BASE_BACKUP ( LABEL 'pg_basebackup base backup', PROGRESS, MANIFEST
'yes', TABLESPACE_MAP, TARGET 'server', TARGET_DETAIL '/tmp/data_none')
Regards,
Jeevan Ladhe
On Tue, Sep 21, 2021 at 10:22 PM Robert Haas wrote:
e:
databases_with_datallowconn_false.txt
Failure, exiting
$ cat databases_with_datallowconn_false.txt
mydb
mydb1
mydb2
Regards,
Jeevan Ladhe
v2-0001-Improve-the-pg_upgrade-error-message.patch
Description: Binary data
ler for handling the timeline switch.
+ *
+ * A timeline switch has been notified; mark this event so that the next
+ * iteration of pgarch_ArchiverCopyLoop() archives the history file, and
+ * we set the timeline to the new one for the next anticipated log
+ * segment.
+ */
Regards,
Jeevan Ladhe
On
On Wed, Jun 24, 2020 at 6:09 PM ROS Didier wrote:
> Hi Bruce
>
> In the following link :
> https://www.enterprisedb.com/blog/connecting-hadoop-and-edb-postgres-shrink-big-data-challenges
> We can see :
> "Support for various authentication methods (i.e. Kerberos, NOSASL, etc.)"
>
> So HDF
ust to clarify and be sure: was there anything else running on any of
these 3 machines while the backup was in progress?
Regards,
Jeevan Ladhe
> Backup Duration(Min): 154.75 | 49.28 | 45.27 | 20.35
> (pgbench running with 100 parallel client simulating SELECT load)
>
to understand why the size is 0;
>> partitioned tables do not hold any data by themselves.
>>
>
> should be.
>
>
Yes, or maybe we can add that info in "Description".
> Some is missing still - there is not any total size across all partitions.
>
> maybe new command like
>
> \dtP+ .. show partitioned tables and their size
>
+1
Regards,
Jeevan Ladhe
t/5280E2AE.8070106%40usit.uio.no
[2] https://wiki.postgresql.org/wiki/Pg_dump_improvements
Regards,
Jeevan Ladhe
On Tue, Jun 26, 2018 at 12:12 PM, Prabhat Sahu <
prabhat.s...@enterprisedb.com> wrote:
> Hi,
>
> I have taken pg_dumpall in pg-master and after restoring the dump I
your patch. I created a couple of tables and subpartitions with a mix
of range and list partitions, and I see these 3 functions working as
documented.
Also, the patch does not have any 'make check' failures.
I will do further code review and post any comments.
Jeevan Ladhe