On 11/3/25 22:32, David Rowley wrote:
On Tue, 4 Nov 2025 at 09:20, Mats Kindahl <[email protected]> wrote:
This is using Coccinelle to transform the text based on a semantic patch (which
is included in the patch). Unfortunately it messes with the whitespace a bit, so
it is necessary to run pgindent afterwards. I'm surprised that there are
whitespace changes that should not be there, but I'll take a look.
pgindent likely won't put back the types of changes that were made
here. I think you're going to have to manually review anything that
comes out of any automated tool, at least certainly until there's some
confidence that the scripts are configured correctly. Loading extra
work onto reviewers or committers that you could have easily done
yourself isn't really a sustainable thing. We just lack bandwidth for
that, plus it's bad etiquette.
A few tips: It's fine to post rough patches to demonstrate ideas to
the list, but make that clear when doing so. If your intention is that
what you're sending gets committed, then please aim for as high a
quality as you can. There just won't be any long-term patience for
series of unvetted-by-human transformation patches from tools such as
Coccinelle around here, especially so if you expect the community to
vet them for you. I'm not saying that's what you're doing, but if you
are, please take this advice.
Hi David,
My apologies for not being clear. I understand the situation and thank
you for the advice. I'll keep that in mind in the future.
Here is an updated patch that I checked manually for unnecessary
whitespace changes.
I also checked that "destroyStringInfo(info)" is handled correctly: it is
replaced with a call to "pfree(info.data)", since the StringInfoData
structure is now on the stack but the data buffer still needs to be
released, which is what a call to destroyStringInfo() would have done.
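For illustration, with a hypothetical variable name, the transformation
for this case is roughly:

    StringInfo  info = makeStringInfo();
    ...
    destroyStringInfo(info);

becoming:

    StringInfoData info;

    initStringInfo(&info);
    ...
    pfree(info.data);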
Calls to "pfree(info)" are removed when "info" is changed to
"StringInfoData", since the variable is no longer dynamically allocated.
Note that in this case "info.data" is not freed, since the original code
did not do that either.
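Sketched with a placeholder name, that case is roughly:

    StringInfo  info = makeStringInfo();
    ...
    pfree(info);

becoming:

    StringInfoData info;

    initStringInfo(&info);
    ...
    /* pfree() removed; info.data is intentionally left unfreed, as before */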
There is one such case affected by this patch; there the buffer pointer
is copied to another variable and then returned. This could probably be
improved by returning the buffer pointer directly, without the
intermediate assignment to a variable, but I avoided that to make the
code easier to review. The change can easily be made either now or later
and should not affect the generated code except at very low optimization
levels.
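The case in question is build_backup_content() in xlogbackup.c (see the
patch below); after the change its tail is roughly:

    StringInfoData result;
    char       *data;

    initStringInfo(&result);
    ...
    data = result.data;

    return data;

where the last two statements could later be collapsed into
"return result.data;".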
I also added fixes manually inside check_publications_origin_tables,
check_publications, and fetch_relation_list. In
check_publications_origin_tables, three StringInfos were dynamically
allocated but never released. In check_publications and
fetch_relation_list there were additional uses of a dynamically
allocated StringInfo that were not necessary.
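For the leaked buffers in check_publications_origin_tables the fix is the
same for each of the three variables; sketched for one of them:

    StringInfoData pubnames;

    initStringInfo(&pubnames);
    ...
    pfree(pubnames.data);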
Best wishes,
Mats Kindahl
From d3a099536c3f9d47a0cb75691d5d694b0ba63ef3 Mon Sep 17 00:00:00 2001
From: Mats Kindahl <[email protected]>
Date: Tue, 28 Jan 2025 14:09:41 +0100
Subject: Use stack-allocated StringInfoData
This uses a semantic patch to replace uses of StringInfo with StringInfoData
where the info is dynamically allocated and (optionally) freed at the end of
the block. This avoids one dynamic allocation that would otherwise have to be
dealt with.
For example, this code:
StringInfo info = makeStringInfo();
...
appendStringInfo(info, ...);
...
return do_stuff(..., info->data, ...);
Can be replaced with:
StringInfoData info;
initStringInfo(&info);
...
appendStringInfo(&info, ...);
...
return do_stuff(..., info.data, ...);
It does not do a replacement in these cases:
- If the variable is assigned to an expression. In this case, the pointer can
"leak" outside the function either through a global variable or a parameter
assignment.
- If an assignment is done to the variable. This cannot leak the data, but
could mean a value assignment of a structure or assigning a StringInfo of
unknown source to the variable, which is complicated to deal with, so we
avoid this case.
- If the pointer is returned either using a normal return or any of the
PG_RETURN_X versions.
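For example, a function like this (hypothetical) one is left untouched,
since the pointer escapes through the return statement:

    StringInfo
    build_message(const char *str)
    {
        StringInfo  info = makeStringInfo();

        appendStringInfoString(info, str);
        return info;
    }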
The semantic patch does not handle the case where there are multiple
overlapping uses like the above in the same block. In that case it does not
do any replacement at all and instead prints an error.

This commit fixes three such cases manually, in check_publications_origin_tables,
check_publications, and fetch_relation_list respectively, all inside
subscriptioncmds.c, and also adds calls to pfree() for StringInfos that were
dynamically allocated but never freed.
---
cocci/use_stringinfodata.cocci | 203 ++++++++++++++++++++++++
contrib/postgres_fdw/postgres_fdw.c | 14 +-
contrib/tcn/tcn.c | 19 +--
src/backend/access/transam/xlogbackup.c | 31 ++--
src/backend/backup/basebackup.c | 10 +-
src/backend/commands/subscriptioncmds.c | 57 ++++---
src/backend/utils/adt/json.c | 53 ++++---
src/backend/utils/adt/jsonb.c | 9 +-
src/backend/utils/adt/jsonfuncs.c | 7 +-
src/backend/utils/adt/multirangetypes.c | 13 +-
src/backend/utils/adt/rangetypes.c | 18 ++-
src/backend/utils/adt/xml.c | 34 ++--
12 files changed, 344 insertions(+), 124 deletions(-)
create mode 100644 cocci/use_stringinfodata.cocci
diff --git a/cocci/use_stringinfodata.cocci b/cocci/use_stringinfodata.cocci
new file mode 100644
index 00000000000..d1b596741fe
--- /dev/null
+++ b/cocci/use_stringinfodata.cocci
@@ -0,0 +1,203 @@
+// Replace uses of StringInfo with StringInfoData where the info is
+// dynamically allocated but (optionally) freed at the end of the
+// block. This avoids one dynamic allocation that would otherwise
+// have to be dealt with.
+//
+// For example, this code:
+//
+// StringInfo info = makeStringInfo();
+// ...
+// appendStringInfo(info, ...);
+// ...
+// return do_stuff(..., info->data, ...);
+//
+// Can be replaced with:
+//
+// StringInfoData info;
+// initStringInfo(&info);
+// ...
+// appendStringInfo(&info, ...);
+// ...
+// return do_stuff(..., info.data, ...);
+//
+//
+
+virtual report
+virtual context
+virtual patch
+
+// This rule captures the position of the makeStringInfo() and bases
+// all changes around that. It matches the case that we should *not*
+// replace, that is, those that either (1) return the pointer or (2)
+// assign the pointer to a variable or (3) assign a variable to the
+// pointer.
+//
+// The first two cases are matched because they could potentially leak
+// the pointer outside the function, for some expressions, but the
+// last one avoids assigning a StringInfo pointer of unknown source to
+// the new StringInfoData variable.
+//
+// If we were to replace this, the result would be a value copy of a
+// structure, which might not be optimal, so we do not do a
+// replacement.
+@id1 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+identifier f;
+position pos;
+expression E;
+identifier PG_RETURN =~ "PG_RETURN_[A-Z0-9_]+";
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ return f(..., info, ...);
+|
+ PG_RETURN(info);
+|
+ info = E
+|
+ E = info
+)
+
+@r1 depends on !patch disable decl_init exists@
+identifier info, fld;
+position dpos, pos != id1.pos;
+@@
+(
+* StringInfo@dpos info;
+ ...
+* info@pos = makeStringInfo();
+|
+* StringInfo@dpos info@pos = makeStringInfo();
+)
+<...
+(
+* destroyStringInfo(info);
+|
+* pfree(info);
+|
+* info->fld
+|
+* *info
+|
+* info
+)
+...>
+
+@script:python depends on report@
+info << r1.info;
+dpos << r1.dpos;
+@@
+coccilib.report.print_report(dpos[0], f"Variable '{info}' of type StringInfo can be defined using StringInfoData")
+
+// This rule replaces the StringInfo variable with a StringInfoData
+// variable and switches from using pointer-syntax to object-syntax.
+//
+// If destroyStringInfo() is used it frees the StringInfo and the data
+// held by the structure. When using StringInfoData we need to free
+// the data but not the StringInfo since that is now a StringInfoData.
+//
+// A pfree() of the StringInfo is not needed any more since the
+// structure is no longer dynamically allocated. It is also wrong
+// since the structure is now on the stack and not a pointer variable.
+@depends on patch disable decl_init exists@
+identifier info, fld;
+position pos != id1.pos;
+@@
+- StringInfo info;
++ StringInfoData info;
+ ...
+- info@pos = makeStringInfo();
++ initStringInfo(&info);
+<...
+(
+- destroyStringInfo(info);
++ pfree(info.data);
+|
+- pfree(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+
+// Here we repeat the matching of the "bad case" above since we cannot
+// inherit over modifications.
+@id2 exists@
+typedef StringInfo;
+local idexpression StringInfo info;
+position pos;
+expression E;
+identifier f;
+identifier PG_RETURN =~ "PG_RETURN_[A-Z0-9_]+";
+@@
+ info@pos = makeStringInfo()
+ ...
+(
+ return info;
+|
+ return f(..., info, ...);
+|
+ PG_RETURN(info);
+|
+ info = E
+|
+ E = info
+)
+
+// This rule replaces the StringInfo variable with a StringInfoData
+// variable and switches from using pointer-syntax to object-syntax.
+//
+// It should be identical to the rule above with the exception that
+// this rule matches initializer syntax, so make sure they are
+// identical.
+//
+// If destroyStringInfo() is used it frees the StringInfo and the
+// buffer data held by the structure. When using StringInfoData we
+// need to free the data but not the StringInfo since that is now a
+// StringInfoData.
+//
+// A pfree() of the StringInfo is not needed any more since the
+// structure is no longer dynamically allocated. It is also wrong since
+// the structure is now on the stack and not a pointer variable.
+@depends on patch exists@
+identifier info, fld;
+position pos != id2.pos;
+statement S, S1;
+@@
+- StringInfo info@pos = makeStringInfo();
++ StringInfoData info;
+... when != S
+(
+<...
+(
+- destroyStringInfo(info);
++ pfree(info.data);
+|
+- pfree(info);
+|
+ info
+- ->fld
++ .fld
+|
+- *info
++ info
+|
+- info
++ &info
+)
+...>
+&
++ initStringInfo(&info);
+ S1
+)
diff --git a/contrib/postgres_fdw/postgres_fdw.c b/contrib/postgres_fdw/postgres_fdw.c
index 456b267f70b..06b52c65300 100644
--- a/contrib/postgres_fdw/postgres_fdw.c
+++ b/contrib/postgres_fdw/postgres_fdw.c
@@ -2841,7 +2841,7 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
*/
if (list_length(fdw_private) > FdwScanPrivateRelations)
{
- StringInfo relations;
+ StringInfoData relations;
char *rawrelations;
char *ptr;
int minrti,
@@ -2875,7 +2875,7 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
rtoffset = bms_next_member(plan->fs_base_relids, -1) - minrti;
/* Now we can translate the string */
- relations = makeStringInfo();
+ initStringInfo(&relations);
ptr = rawrelations;
while (*ptr)
{
@@ -2897,24 +2897,24 @@ postgresExplainForeignScan(ForeignScanState *node, ExplainState *es)
char *namespace;
namespace = get_namespace_name_or_temp(get_rel_namespace(rte->relid));
- appendStringInfo(relations, "%s.%s",
+ appendStringInfo(&relations, "%s.%s",
quote_identifier(namespace),
quote_identifier(relname));
}
else
- appendStringInfoString(relations,
+ appendStringInfoString(&relations,
quote_identifier(relname));
refname = (char *) list_nth(es->rtable_names, rti - 1);
if (refname == NULL)
refname = rte->eref->aliasname;
if (strcmp(refname, relname) != 0)
- appendStringInfo(relations, " %s",
+ appendStringInfo(&relations, " %s",
quote_identifier(refname));
}
else
- appendStringInfoChar(relations, *ptr++);
+ appendStringInfoChar(&relations, *ptr++);
}
- ExplainPropertyText("Relations", relations->data, es);
+ ExplainPropertyText("Relations", relations.data, es);
}
/*
diff --git a/contrib/tcn/tcn.c b/contrib/tcn/tcn.c
index 3158dee0f26..9ca47f17cac 100644
--- a/contrib/tcn/tcn.c
+++ b/contrib/tcn/tcn.c
@@ -66,12 +66,13 @@ triggered_change_notification(PG_FUNCTION_ARGS)
TupleDesc tupdesc;
char *channel;
char operation;
- StringInfo payload = makeStringInfo();
+ StringInfoData payload;
bool foundPK;
List *indexoidlist;
ListCell *indexoidscan;
+ initStringInfo(&payload);
/* make sure it's called as a trigger */
if (!CALLED_AS_TRIGGER(fcinfo))
ereport(ERROR,
@@ -149,22 +150,22 @@ triggered_change_notification(PG_FUNCTION_ARGS)
foundPK = true;
- strcpy_quoted(payload, RelationGetRelationName(rel), '"');
- appendStringInfoCharMacro(payload, ',');
- appendStringInfoCharMacro(payload, operation);
+ strcpy_quoted(&payload, RelationGetRelationName(rel), '"');
+ appendStringInfoCharMacro(&payload, ',');
+ appendStringInfoCharMacro(&payload, operation);
for (i = 0; i < indnkeyatts; i++)
{
int colno = index->indkey.values[i];
Form_pg_attribute attr = TupleDescAttr(tupdesc, colno - 1);
- appendStringInfoCharMacro(payload, ',');
- strcpy_quoted(payload, NameStr(attr->attname), '"');
- appendStringInfoCharMacro(payload, '=');
- strcpy_quoted(payload, SPI_getvalue(trigtuple, tupdesc, colno), '\'');
+ appendStringInfoCharMacro(&payload, ',');
+ strcpy_quoted(&payload, NameStr(attr->attname), '"');
+ appendStringInfoCharMacro(&payload, '=');
+ strcpy_quoted(&payload, SPI_getvalue(trigtuple, tupdesc, colno), '\'');
}
- Async_Notify(channel, payload->data);
+ Async_Notify(channel, payload.data);
}
ReleaseSysCache(indexTuple);
break;
diff --git a/src/backend/access/transam/xlogbackup.c b/src/backend/access/transam/xlogbackup.c
index cda4b38b7d6..d9b6dc63ae3 100644
--- a/src/backend/access/transam/xlogbackup.c
+++ b/src/backend/access/transam/xlogbackup.c
@@ -31,18 +31,20 @@ build_backup_content(BackupState *state, bool ishistoryfile)
char startstrbuf[128];
char startxlogfile[MAXFNAMELEN]; /* backup start WAL file */
XLogSegNo startsegno;
- StringInfo result = makeStringInfo();
+ StringInfoData result;
char *data;
Assert(state != NULL);
+ initStringInfo(&result);
+
/* Use the log timezone here, not the session timezone */
pg_strftime(startstrbuf, sizeof(startstrbuf), "%Y-%m-%d %H:%M:%S %Z",
pg_localtime(&state->starttime, log_timezone));
XLByteToSeg(state->startpoint, startsegno, wal_segment_size);
XLogFileName(startxlogfile, state->starttli, startsegno, wal_segment_size);
- appendStringInfo(result, "START WAL LOCATION: %X/%08X (file %s)\n",
+ appendStringInfo(&result, "START WAL LOCATION: %X/%08X (file %s)\n",
LSN_FORMAT_ARGS(state->startpoint), startxlogfile);
if (ishistoryfile)
@@ -52,18 +54,18 @@ build_backup_content(BackupState *state, bool ishistoryfile)
XLByteToSeg(state->stoppoint, stopsegno, wal_segment_size);
XLogFileName(stopxlogfile, state->stoptli, stopsegno, wal_segment_size);
- appendStringInfo(result, "STOP WAL LOCATION: %X/%08X (file %s)\n",
+ appendStringInfo(&result, "STOP WAL LOCATION: %X/%08X (file %s)\n",
LSN_FORMAT_ARGS(state->stoppoint), stopxlogfile);
}
- appendStringInfo(result, "CHECKPOINT LOCATION: %X/%08X\n",
+ appendStringInfo(&result, "CHECKPOINT LOCATION: %X/%08X\n",
LSN_FORMAT_ARGS(state->checkpointloc));
- appendStringInfoString(result, "BACKUP METHOD: streamed\n");
- appendStringInfo(result, "BACKUP FROM: %s\n",
+ appendStringInfoString(&result, "BACKUP METHOD: streamed\n");
+ appendStringInfo(&result, "BACKUP FROM: %s\n",
state->started_in_recovery ? "standby" : "primary");
- appendStringInfo(result, "START TIME: %s\n", startstrbuf);
- appendStringInfo(result, "LABEL: %s\n", state->name);
- appendStringInfo(result, "START TIMELINE: %u\n", state->starttli);
+ appendStringInfo(&result, "START TIME: %s\n", startstrbuf);
+ appendStringInfo(&result, "LABEL: %s\n", state->name);
+ appendStringInfo(&result, "START TIMELINE: %u\n", state->starttli);
if (ishistoryfile)
{
@@ -73,22 +75,21 @@ build_backup_content(BackupState *state, bool ishistoryfile)
pg_strftime(stopstrfbuf, sizeof(stopstrfbuf), "%Y-%m-%d %H:%M:%S %Z",
pg_localtime(&state->stoptime, log_timezone));
- appendStringInfo(result, "STOP TIME: %s\n", stopstrfbuf);
- appendStringInfo(result, "STOP TIMELINE: %u\n", state->stoptli);
+ appendStringInfo(&result, "STOP TIME: %s\n", stopstrfbuf);
+ appendStringInfo(&result, "STOP TIMELINE: %u\n", state->stoptli);
}
/* either both istartpoint and istarttli should be set, or neither */
Assert(XLogRecPtrIsInvalid(state->istartpoint) == (state->istarttli == 0));
if (!XLogRecPtrIsInvalid(state->istartpoint))
{
- appendStringInfo(result, "INCREMENTAL FROM LSN: %X/%08X\n",
+ appendStringInfo(&result, "INCREMENTAL FROM LSN: %X/%08X\n",
LSN_FORMAT_ARGS(state->istartpoint));
- appendStringInfo(result, "INCREMENTAL FROM TLI: %u\n",
+ appendStringInfo(&result, "INCREMENTAL FROM TLI: %u\n",
state->istarttli);
}
- data = result->data;
- pfree(result);
+ data = result.data;
return data;
}
diff --git a/src/backend/backup/basebackup.c b/src/backend/backup/basebackup.c
index bb7d90aa5d9..2be4e069816 100644
--- a/src/backend/backup/basebackup.c
+++ b/src/backend/backup/basebackup.c
@@ -239,7 +239,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
TimeLineID endtli;
backup_manifest_info manifest;
BackupState *backup_state;
- StringInfo tablespace_map;
+ StringInfoData tablespace_map;
/* Initial backup state, insofar as we know it now. */
state.tablespaces = NIL;
@@ -263,11 +263,11 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
/* Allocate backup related variables. */
backup_state = (BackupState *) palloc0(sizeof(BackupState));
- tablespace_map = makeStringInfo();
+ initStringInfo(&tablespace_map);
basebackup_progress_wait_checkpoint();
do_pg_backup_start(opt->label, opt->fastcheckpoint, &state.tablespaces,
- backup_state, tablespace_map);
+ backup_state, &tablespace_map);
state.startptr = backup_state->startpoint;
state.starttli = backup_state->starttli;
@@ -342,7 +342,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
if (opt->sendtblspcmapfile)
{
sendFileWithContent(sink, TABLESPACE_MAP,
- tablespace_map->data, -1, &manifest);
+ tablespace_map.data, -1, &manifest);
sendtblspclinks = false;
}
@@ -399,7 +399,7 @@ perform_base_backup(basebackup_options *opt, bbsink *sink,
endtli = backup_state->stoptli;
/* Deallocate backup-related variables. */
- destroyStringInfo(tablespace_map);
+ pfree(tablespace_map.data);
pfree(backup_state);
}
PG_END_ENSURE_ERROR_CLEANUP(do_pg_abort_backup, BoolGetDatum(false));
diff --git a/src/backend/commands/subscriptioncmds.c b/src/backend/commands/subscriptioncmds.c
index 1f45444b499..492f38bdf0c 100644
--- a/src/backend/commands/subscriptioncmds.c
+++ b/src/backend/commands/subscriptioncmds.c
@@ -491,20 +491,20 @@ static void
check_publications(WalReceiverConn *wrconn, List *publications)
{
WalRcvExecResult *res;
- StringInfo cmd;
+ StringInfoData cmd;
TupleTableSlot *slot;
List *publicationsCopy = NIL;
Oid tableRow[1] = {TEXTOID};
- cmd = makeStringInfo();
- appendStringInfoString(cmd, "SELECT t.pubname FROM\n"
+ initStringInfo(&cmd);
+ appendStringInfoString(&cmd, "SELECT t.pubname FROM\n"
" pg_catalog.pg_publication t WHERE\n"
" t.pubname IN (");
- GetPublicationsStr(publications, cmd, true);
- appendStringInfoChar(cmd, ')');
+ GetPublicationsStr(publications, &cmd, true);
+ appendStringInfoChar(&cmd, ')');
- res = walrcv_exec(wrconn, cmd->data, 1, tableRow);
- destroyStringInfo(cmd);
+ res = walrcv_exec(wrconn, cmd.data, 1, tableRow);
+ pfree(cmd.data);
if (res->status != WALRCV_OK_TUPLES)
ereport(ERROR,
@@ -2597,33 +2597,41 @@ check_publications_origin_tables(WalReceiverConn *wrconn, List *publications,
*/
if (publist)
{
- StringInfo pubnames = makeStringInfo();
- StringInfo err_msg = makeStringInfo();
- StringInfo err_hint = makeStringInfo();
+ StringInfoData pubnames;
+ StringInfoData err_msg;
+ StringInfoData err_hint;
+
+ initStringInfo(&pubnames);
+ initStringInfo(&err_msg);
+ initStringInfo(&err_hint);
/* Prepare the list of publication(s) for warning message. */
- GetPublicationsStr(publist, pubnames, false);
+ GetPublicationsStr(publist, &pubnames, false);
if (check_table_sync)
{
- appendStringInfo(err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
+ appendStringInfo(&err_msg, _("subscription \"%s\" requested copy_data with origin = NONE but might copy data that had a different origin"),
subname);
- appendStringInfoString(err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
+ appendStringInfoString(&err_hint, _("Verify that initial data copied from the publisher tables did not come from other origins."));
}
else
{
- appendStringInfo(err_msg, _("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins"),
+ appendStringInfo(&err_msg, _("subscription \"%s\" enabled retain_dead_tuples but might not reliably detect conflicts for changes from different origins"),
subname);
- appendStringInfoString(err_hint, _("Consider using origin = NONE or disabling retain_dead_tuples."));
+ appendStringInfoString(&err_hint, _("Consider using origin = NONE or disabling retain_dead_tuples."));
}
ereport(WARNING,
errcode(ERRCODE_OBJECT_NOT_IN_PREREQUISITE_STATE),
- errmsg_internal("%s", err_msg->data),
+ errmsg_internal("%s", err_msg.data),
errdetail_plural("The subscription subscribes to a publication (%s) that contains tables that are written to by other subscriptions.",
"The subscription subscribes to publications (%s) that contain tables that are written to by other subscriptions.",
- list_length(publist), pubnames->data),
- errhint_internal("%s", err_hint->data));
+ list_length(publist), pubnames.data),
+ errhint_internal("%s", err_hint.data));
+
+ pfree(pubnames.data);
+ pfree(err_msg.data);
+ pfree(err_hint.data);
}
ExecDropSingleTupleTableSlot(slot);
@@ -2885,12 +2893,13 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
int server_version = walrcv_server_version(wrconn);
bool check_columnlist = (server_version >= 150000);
int column_count = check_columnlist ? 4 : 3;
- StringInfo pub_names = makeStringInfo();
+ StringInfoData pub_names;
initStringInfo(&cmd);
+ initStringInfo(&pub_names);
/* Build the pub_names comma-separated string. */
- GetPublicationsStr(publications, pub_names, true);
+ GetPublicationsStr(publications, &pub_names, true);
/* Get the list of relations from the publisher */
if (server_version >= 160000)
@@ -2917,7 +2926,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
" FROM pg_publication\n"
" WHERE pubname IN ( %s )) AS gpt\n"
" ON gpt.relid = c.oid\n",
- pub_names->data);
+ pub_names.data);
/* From version 19, inclusion of sequences in the target is supported */
if (server_version >= 190000)
@@ -2926,7 +2935,7 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
" SELECT DISTINCT s.schemaname, s.sequencename, " CppAsString2(RELKIND_SEQUENCE) "::\"char\" AS relkind, NULL::int2vector AS attrs\n"
" FROM pg_catalog.pg_publication_sequences s\n"
" WHERE s.pubname IN ( %s )",
- pub_names->data);
+ pub_names.data);
}
else
{
@@ -2939,10 +2948,10 @@ fetch_relation_list(WalReceiverConn *wrconn, List *publications)
appendStringInfo(&cmd, "FROM pg_catalog.pg_publication_tables t\n"
" WHERE t.pubname IN ( %s )",
- pub_names->data);
+ pub_names.data);
}
- destroyStringInfo(pub_names);
+ pfree(pub_names.data);
res = walrcv_exec(wrconn, cmd.data, column_count, tableRow);
pfree(cmd.data);
diff --git a/src/backend/utils/adt/json.c b/src/backend/utils/adt/json.c
index 88a612b041d..06dd62f0008 100644
--- a/src/backend/utils/adt/json.c
+++ b/src/backend/utils/adt/json.c
@@ -631,13 +631,13 @@ Datum
array_to_json(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- array_to_json_internal(array, result, false);
+ array_to_json_internal(array, &result, false);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -648,13 +648,13 @@ array_to_json_pretty(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
bool use_line_feeds = PG_GETARG_BOOL(1);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- array_to_json_internal(array, result, use_line_feeds);
+ array_to_json_internal(array, &result, use_line_feeds);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -664,13 +664,13 @@ Datum
row_to_json(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- composite_to_json(array, result, false);
+ composite_to_json(array, &result, false);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -681,13 +681,13 @@ row_to_json_pretty(PG_FUNCTION_ARGS)
{
Datum array = PG_GETARG_DATUM(0);
bool use_line_feeds = PG_GETARG_BOOL(1);
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- composite_to_json(array, result, use_line_feeds);
+ composite_to_json(array, &result, use_line_feeds);
- PG_RETURN_TEXT_P(cstring_to_text_with_len(result->data, result->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -763,12 +763,13 @@ to_json(PG_FUNCTION_ARGS)
Datum
datum_to_json(Datum val, JsonTypeCategory tcategory, Oid outfuncoid)
{
- StringInfo result = makeStringInfo();
+ StringInfoData result;
- datum_to_json_internal(val, false, result, tcategory, outfuncoid,
+ initStringInfo(&result);
+ datum_to_json_internal(val, false, &result, tcategory, outfuncoid,
false);
- return PointerGetDatum(cstring_to_text_with_len(result->data, result->len));
+ return PointerGetDatum(cstring_to_text_with_len(result.data, result.len));
}
/*
@@ -1347,25 +1348,25 @@ json_build_array_worker(int nargs, const Datum *args, const bool *nulls, const O
{
int i;
const char *sep = "";
- StringInfo result;
+ StringInfoData result;
- result = makeStringInfo();
+ initStringInfo(&result);
- appendStringInfoChar(result, '[');
+ appendStringInfoChar(&result, '[');
for (i = 0; i < nargs; i++)
{
if (absent_on_null && nulls[i])
continue;
- appendStringInfoString(result, sep);
+ appendStringInfoString(&result, sep);
sep = ", ";
- add_json(args[i], nulls[i], result, types[i], false);
+ add_json(args[i], nulls[i], &result, types[i], false);
}
- appendStringInfoChar(result, ']');
+ appendStringInfoChar(&result, ']');
- return PointerGetDatum(cstring_to_text_with_len(result->data, result->len));
+ return PointerGetDatum(cstring_to_text_with_len(result.data, result.len));
}
/*
diff --git a/src/backend/utils/adt/jsonb.c b/src/backend/utils/adt/jsonb.c
index da94d424d61..9399cdb491a 100644
--- a/src/backend/utils/adt/jsonb.c
+++ b/src/backend/utils/adt/jsonb.c
@@ -125,15 +125,16 @@ jsonb_send(PG_FUNCTION_ARGS)
{
Jsonb *jb = PG_GETARG_JSONB_P(0);
StringInfoData buf;
- StringInfo jtext = makeStringInfo();
+ StringInfoData jtext;
int version = 1;
- (void) JsonbToCString(jtext, &jb->root, VARSIZE(jb));
+ initStringInfo(&jtext);
+ (void) JsonbToCString(&jtext, &jb->root, VARSIZE(jb));
pq_begintypsend(&buf);
pq_sendint8(&buf, version);
- pq_sendtext(&buf, jtext->data, jtext->len);
- destroyStringInfo(jtext);
+ pq_sendtext(&buf, jtext.data, jtext.len);
+ pfree(jtext.data);
PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
diff --git a/src/backend/utils/adt/jsonfuncs.c b/src/backend/utils/adt/jsonfuncs.c
index 41862872e8a..936cd9dcd9a 100644
--- a/src/backend/utils/adt/jsonfuncs.c
+++ b/src/backend/utils/adt/jsonfuncs.c
@@ -4607,11 +4607,12 @@ Datum
jsonb_pretty(PG_FUNCTION_ARGS)
{
Jsonb *jb = PG_GETARG_JSONB_P(0);
- StringInfo str = makeStringInfo();
+ StringInfoData str;
- JsonbToCStringIndent(str, &jb->root, VARSIZE(jb));
+ initStringInfo(&str);
+ JsonbToCStringIndent(&str, &jb->root, VARSIZE(jb));
- PG_RETURN_TEXT_P(cstring_to_text_with_len(str->data, str->len));
+ PG_RETURN_TEXT_P(cstring_to_text_with_len(str.data, str.len));
}
/*
diff --git a/src/backend/utils/adt/multirangetypes.c b/src/backend/utils/adt/multirangetypes.c
index 84733dc5019..95e9539591e 100644
--- a/src/backend/utils/adt/multirangetypes.c
+++ b/src/backend/utils/adt/multirangetypes.c
@@ -378,17 +378,18 @@ multirange_send(PG_FUNCTION_ARGS)
{
MultirangeType *multirange = PG_GETARG_MULTIRANGE_P(0);
Oid mltrngtypoid = MultirangeTypeGetOid(multirange);
- StringInfo buf = makeStringInfo();
+ StringInfoData buf;
RangeType **ranges;
int32 range_count;
MultirangeIOData *cache;
+ initStringInfo(&buf);
cache = get_multirange_io_data(fcinfo, mltrngtypoid, IOFunc_send);
/* construct output */
- pq_begintypsend(buf);
+ pq_begintypsend(&buf);
- pq_sendint32(buf, multirange->rangeCount);
+ pq_sendint32(&buf, multirange->rangeCount);
multirange_deserialize(cache->typcache->rngtype, multirange, &range_count, &ranges);
for (int i = 0; i < range_count; i++)
@@ -399,11 +400,11 @@ multirange_send(PG_FUNCTION_ARGS)
range = RangeTypePGetDatum(ranges[i]);
outputbytes = SendFunctionCall(&cache->typioproc, range);
- pq_sendint32(buf, VARSIZE(outputbytes) - VARHDRSZ);
- pq_sendbytes(buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ);
+ pq_sendint32(&buf, VARSIZE(outputbytes) - VARHDRSZ);
+ pq_sendbytes(&buf, VARDATA(outputbytes), VARSIZE(outputbytes) - VARHDRSZ);
}
- PG_RETURN_BYTEA_P(pq_endtypsend(buf));
+ PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
/*
diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c
index 0b2ad8b0975..0e451e4693b 100644
--- a/src/backend/utils/adt/rangetypes.c
+++ b/src/backend/utils/adt/rangetypes.c
@@ -264,7 +264,7 @@ Datum
range_send(PG_FUNCTION_ARGS)
{
RangeType *range = PG_GETARG_RANGE_P(0);
- StringInfo buf = makeStringInfo();
+ StringInfoData buf;
RangeIOData *cache;
char flags;
RangeBound lower;
@@ -273,6 +273,8 @@ range_send(PG_FUNCTION_ARGS)
check_stack_depth(); /* recurses when subtype is a range type */
+ initStringInfo(&buf);
+
cache = get_range_io_data(fcinfo, RangeTypeGetOid(range), IOFunc_send);
/* deserialize */
@@ -280,9 +282,9 @@ range_send(PG_FUNCTION_ARGS)
flags = range_get_flags(range);
/* construct output */
- pq_begintypsend(buf);
+ pq_begintypsend(&buf);
- pq_sendbyte(buf, flags);
+ pq_sendbyte(&buf, flags);
if (RANGE_HAS_LBOUND(flags))
{
@@ -290,8 +292,8 @@ range_send(PG_FUNCTION_ARGS)
uint32 bound_len = VARSIZE(bound) - VARHDRSZ;
char *bound_data = VARDATA(bound);
- pq_sendint32(buf, bound_len);
- pq_sendbytes(buf, bound_data, bound_len);
+ pq_sendint32(&buf, bound_len);
+ pq_sendbytes(&buf, bound_data, bound_len);
}
if (RANGE_HAS_UBOUND(flags))
@@ -300,11 +302,11 @@ range_send(PG_FUNCTION_ARGS)
uint32 bound_len = VARSIZE(bound) - VARHDRSZ;
char *bound_data = VARDATA(bound);
- pq_sendint32(buf, bound_len);
- pq_sendbytes(buf, bound_data, bound_len);
+ pq_sendint32(&buf, bound_len);
+ pq_sendbytes(&buf, bound_data, bound_len);
}
- PG_RETURN_BYTEA_P(pq_endtypsend(buf));
+ PG_RETURN_BYTEA_P(pq_endtypsend(&buf));
}
/*
diff --git a/src/backend/utils/adt/xml.c b/src/backend/utils/adt/xml.c
index 35c915573a1..41e775570ec 100644
--- a/src/backend/utils/adt/xml.c
+++ b/src/backend/utils/adt/xml.c
@@ -2133,7 +2133,7 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
node->type == XML_ELEMENT_NODE) ? node->name : NULL;
int domain = error->domain;
int level = error->level;
- StringInfo errorBuf;
+ StringInfoData errorBuf;
/*
* Defend against someone passing us a bogus context struct.
@@ -2210,16 +2210,16 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
}
/* Prepare error message in errorBuf */
- errorBuf = makeStringInfo();
+ initStringInfo(&errorBuf);
if (error->line > 0)
- appendStringInfo(errorBuf, "line %d: ", error->line);
+ appendStringInfo(&errorBuf, "line %d: ", error->line);
if (name != NULL)
- appendStringInfo(errorBuf, "element %s: ", name);
+ appendStringInfo(&errorBuf, "element %s: ", name);
if (error->message != NULL)
- appendStringInfoString(errorBuf, error->message);
+ appendStringInfoString(&errorBuf, error->message);
else
- appendStringInfoString(errorBuf, "(no message provided)");
+ appendStringInfoString(&errorBuf, "(no message provided)");
/*
* Append context information to errorBuf.
@@ -2237,11 +2237,11 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
xmlGenericErrorFunc errFuncSaved = xmlGenericError;
void *errCtxSaved = xmlGenericErrorContext;
- xmlSetGenericErrorFunc(errorBuf,
+ xmlSetGenericErrorFunc(&errorBuf,
(xmlGenericErrorFunc) appendStringInfo);
/* Add context information to errorBuf */
- appendStringInfoLineSeparator(errorBuf);
+ appendStringInfoLineSeparator(&errorBuf);
xmlParserPrintFileContext(input);
@@ -2250,7 +2250,7 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
}
/* Get rid of any trailing newlines in errorBuf */
- chopStringInfoNewlines(errorBuf);
+ chopStringInfoNewlines(&errorBuf);
/*
* Legacy error handling mode. err_occurred is never set, we just add the
@@ -2263,10 +2263,10 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
if (xmlerrcxt->strictness == PG_XML_STRICTNESS_LEGACY)
{
appendStringInfoLineSeparator(&xmlerrcxt->err_buf);
- appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf->data,
- errorBuf->len);
+ appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf.data,
+ errorBuf.len);
- destroyStringInfo(errorBuf);
+ pfree(errorBuf.data);
return;
}
@@ -2281,23 +2281,23 @@ xml_errorHandler(void *data, PgXmlErrorPtr error)
if (level >= XML_ERR_ERROR)
{
appendStringInfoLineSeparator(&xmlerrcxt->err_buf);
- appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf->data,
- errorBuf->len);
+ appendBinaryStringInfo(&xmlerrcxt->err_buf, errorBuf.data,
+ errorBuf.len);
xmlerrcxt->err_occurred = true;
}
else if (level >= XML_ERR_WARNING)
{
ereport(WARNING,
- (errmsg_internal("%s", errorBuf->data)));
+ (errmsg_internal("%s", errorBuf.data)));
}
else
{
ereport(NOTICE,
- (errmsg_internal("%s", errorBuf->data)));
+ (errmsg_internal("%s", errorBuf.data)));
}
- destroyStringInfo(errorBuf);
+ pfree(errorBuf.data);
}
--
2.43.0