On 21/12/2021, at 10:25 AM, Tom Lane wrote:
> Not quite like that. Look into nodeAgg.c, which solves a similar problem
> for the transvalues themselves with code like
>
> /* forget the old value, if any */
> if (!oldIsNull && !pertrans->inputtypeByVal)
>     pfree(DatumGetPointer(oldVal));
I have a question about trying to keep memory from growing too much in a C
aggregate function with pass-by-reference types. I am trying to keep track of a
last-seen value in my aggregate state, so I have code roughly doing this:
Datum current;
MemoryContext aggContext;
AggCheckCallContext(fcinfo, &aggContext);
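Putting Tom's nodeAgg.c pointer together with that, the shape I'm aiming for is roughly the sketch below. It is simplified and illustrative (the LastSeenState struct and function name are mine, not the extension's), and it assumes a varlena input type such as numeric: free the previous copy, then datumCopy the new value into the aggregate's memory context so it survives between calls.

#include "postgres.h"
#include "fmgr.h"
#include "utils/datum.h"

PG_MODULE_MAGIC;

/* illustrative state: just a copy of the last non-null input seen */
typedef struct LastSeenState
{
    Datum   last;           /* copy living in the aggregate context */
    bool    last_isnull;
} LastSeenState;

PG_FUNCTION_INFO_V1(last_seen_trans);

Datum
last_seen_trans(PG_FUNCTION_ARGS)
{
    MemoryContext aggContext;
    MemoryContext oldContext;
    LastSeenState *state;

    if (!AggCheckCallContext(fcinfo, &aggContext))
        elog(ERROR, "last_seen_trans called in non-aggregate context");

    if (PG_ARGISNULL(0))
    {
        /* first row: allocate the state in the long-lived aggregate context */
        state = (LastSeenState *)
            MemoryContextAllocZero(aggContext, sizeof(LastSeenState));
        state->last_isnull = true;
    }
    else
        state = (LastSeenState *) PG_GETARG_POINTER(0);

    if (!PG_ARGISNULL(1))
    {
        /* detoast in the per-call context, as usual */
        Datum   newval = PointerGetDatum(PG_DETOAST_DATUM(PG_GETARG_DATUM(1)));

        /* forget the old value, if any (the nodeAgg.c pattern above) */
        if (!state->last_isnull)
            pfree(DatumGetPointer(state->last));

        /* copy the new value into the aggregate context so it survives */
        oldContext = MemoryContextSwitchTo(aggContext);
        state->last = datumCopy(newval, false, -1); /* pass-by-ref varlena */
        MemoryContextSwitchTo(oldContext);
        state->last_isnull = false;
    }

    PG_RETURN_POINTER(state);
}

That way the state never accumulates one copy per row; each new value replaces the previous one in aggContext.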
On 16/12/2021, at 2:43 PM, Tom Lane wrote:
> Of course what this function is actually returning is numeric[].
> There is some code such as array_out that will look at the
> element type OID embedded in the array value, and do the right
> thing. But other code will believe the function's declared result type.
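If I am reading that right, the distinction is visible from C as well: the element type OID travels inside the array value itself, so code like array_out can recover it no matter what the function is declared to return. A generic sketch, not code from the extension:

#include "postgres.h"
#include "utils/array.h"
#include "utils/lsyscache.h"

/*
 * Report the element type of an array datum the way array_out discovers it:
 * from the OID embedded in the array header, not from the declared type of
 * whatever function produced it.
 */
static void
report_array_element_type(ArrayType *arr)
{
    Oid     elemtype = ARR_ELEMTYPE(arr);
    int16   elmlen;
    bool    elmbyval;
    char    elmalign;

    get_typlenbyvalalign(elemtype, &elmlen, &elmbyval, &elmalign);
    elog(NOTICE, "array element type OID %u (len %d, byval %d, align %c)",
         elemtype, elmlen, (int) elmbyval, elmalign);
}

Code that instead trusts the declared return type would treat those datums as the wrong element type, which I gather is the problem Tom is pointing at.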
On 15/12/2021, at 11:51 AM, Tom Lane wrote:
> You should
> probably palloc temp arrays right here and then use construct_md_array
> directly instead of dealing with an ArrayBuildState.
OK, I gave that a go [1], this time in a vec_agg_sum() aggregate that operates
much the same as the vec_agg_mean() one.
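The final function now follows the shape below. This is a simplified sketch (VecSumState and its field names are illustrative, not the extension's actual struct): palloc Datum and nulls arrays of exactly the needed length and hand them straight to construct_md_array for a one-dimensional numeric[] result.

#include "postgres.h"
#include "fmgr.h"
#include "catalog/pg_type.h"    /* NUMERICOID */
#include "utils/array.h"

/* illustrative transition state: one numeric sum per input array element */
typedef struct VecSumState
{
    int     nelems;
    Datum  *sums;
    bool   *isnull;
} VecSumState;

PG_FUNCTION_INFO_V1(vec_agg_sum_final_sketch);

Datum
vec_agg_sum_final_sketch(PG_FUNCTION_ARGS)
{
    VecSumState *state;
    Datum      *dvalues;
    bool       *dnulls;
    int         dims[1];
    int         lbs[1];
    int         i;

    if (PG_ARGISNULL(0))
        PG_RETURN_NULL();
    state = (VecSumState *) PG_GETARG_POINTER(0);

    /* temp arrays sized exactly for the result, in the per-call context */
    dvalues = (Datum *) palloc(state->nelems * sizeof(Datum));
    dnulls = (bool *) palloc(state->nelems * sizeof(bool));
    for (i = 0; i < state->nelems; i++)
    {
        dvalues[i] = state->sums[i];
        dnulls[i] = state->isnull[i];
    }

    dims[0] = state->nelems;
    lbs[0] = 1;

    /* build the numeric[] result directly, no ArrayBuildState involved */
    PG_RETURN_ARRAYTYPE_P(construct_md_array(dvalues, dnulls, 1, dims, lbs,
                                             NUMERICOID, -1, false, 'i'));
}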
On 15/12/2021, at 11:51 AM, Tom Lane wrote:
> Hmm, I think you're abusing the ArrayBuildState API. In particular,
> what guarantees that the astate->dvalues and astate->dnulls arrays
> are long enough for what you're stuffing into them?
The length is defined in the aggregate transition function
ork/aggs_for_vecs/blob/9e742cdc32a113268fd3c1f928c8ac724acec9f5/vec_agg_mean.c>
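For comparison, the way the API manages those arrays itself is accumArrayResult(), which reallocates dvalues/dnulls as elements are appended, so the caller never writes into them directly. A generic sketch rather than the extension's actual code:

#include "postgres.h"
#include "catalog/pg_type.h"    /* NUMERICOID */
#include "utils/array.h"

/*
 * Append nelems numeric datums to an ArrayBuildState and build the result.
 * accumArrayResult() enlarges the state's dvalues/dnulls arrays as needed.
 */
static Datum
build_numeric_array(Datum *values, bool *nulls, int nelems,
                    MemoryContext rcontext)
{
    ArrayBuildState *astate = NULL;
    int         i;

    for (i = 0; i < nelems; i++)
        astate = accumArrayResult(astate, values[i], nulls[i],
                                  NUMERICOID, rcontext);

    if (astate == NULL)     /* no elements: return an empty numeric[] */
        return PointerGetDatum(construct_empty_array(NUMERICOID));

    /* one-dimensional result with default lower bound 1 */
    return makeArrayResult(astate, rcontext);
}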
Cheers,
Matt Magoffin
On 5/12/2021, at 9:04 AM, Tom Lane wrote:
> So that probably means that you weren't careful about allocating your
> own state data in the long-lived context (agg_context), and so it
> got freed between calls.
It turns out I wasn’t careful about setting isnull on the passed-in state
argument. Aft
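In case it helps anyone else, the relevant pattern is checking the null-ness of the passed-in state argument on each call and only allocating fresh state in the aggregate context on the first call, roughly like this simplified fragment (MyAggState is an illustrative name, not the extension's real struct):

#include "postgres.h"
#include "fmgr.h"

/* illustrative state struct */
typedef struct MyAggState
{
    int64   count;
} MyAggState;

PG_FUNCTION_INFO_V1(my_agg_trans);

Datum
my_agg_trans(PG_FUNCTION_ARGS)
{
    MemoryContext aggContext;
    MyAggState *state;

    if (!AggCheckCallContext(fcinfo, &aggContext))
        elog(ERROR, "my_agg_trans called in non-aggregate context");

    if (PG_ARGISNULL(0))
    {
        /*
         * First call: the state argument is NULL and must not be
         * dereferenced; allocate fresh state in the aggregate context.
         */
        state = (MyAggState *)
            MemoryContextAllocZero(aggContext, sizeof(MyAggState));
    }
    else
        state = (MyAggState *) PG_GETARG_POINTER(0);

    if (!PG_ARGISNULL(1))
        state->count++;

    PG_RETURN_POINTER(state);
}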
On 5/12/2021, at 5:16 AM, Tom Lane wrote:
> Calling numeric_avg_accum in the agg_context is unnecessary, and possibly
> counterproductive (it might leak memory in that context, since like all
> other aggregates it assumes it's called in a short-lived context).
OK, thanks for that, I’ll remove t
terminology might be off.
Cheers,
Matt Magoffin
[1] https://github.com/pjungwir/aggs_for_vecs
[2]
https://github.com/SolarNetwork/aggs_for_vecs/blob/feature/numeric-stats-agg/vec_to_mean_numeric.c
[3]
https://github.com/SolarNetwork/aggs_for_vecs/blob/7c2a5aad35a814dca6d9f5a
> On 27/03/2020, at 5:26 AM, Adrian Klaver wrote:
>
> Well morning and coffee helped some, but not enough to offer blinding
> insight. Reviewing the function above, the TimescaleDB insert block function
> and the overview of the TimescaleDB hypertable architecture leads me to
> believe there
> On 23/03/2020, at 1:10 PM, Adrian Klaver wrote:
>
> So the query is in the function solardatum.store_datum()?
>
> If so what is it doing?
Yes. This function first performs the INSERT INTO the solardatum.da_datum table
that we’re discussing here; then it inserts into two different tables. I
> On 23/03/2020, at 9:44 AM, Adrian Klaver wrote:
> Is there a chance the BEFORE trigger functions are doing something that could
> be leading to the error?
>
> In the error log is there a line with the actual values that failed?
The error log does not show the literal values, no. Here is a li
> On 22/03/2020, at 8:11 AM, Adrian Klaver wrote:
>
>> I was thinking more about this:
>> "INSERT INTO solardatum.da_datum(ts, node_id, source_id, posted, jdata_i,
>> jdata_a, jdata_s, jdata_t)
>> VALUES (…) ..."
>> from your OP. Namely whether it was:
>> VALUES (), (), (), ...
>> and if so
> On 21/03/2020, at 8:10 AM, Adrian Klaver wrote:
>
>> The _hyper_1_1931_chunk_da_datum_x_acc_idx index has the same definition as
>> the da_datum_x_acc_idx above (it is defined on a child table). That is, they
>> are both essentially:
>> UNIQUE, btree (node_id, source_id, ts DESC, jdata_a) WH
> On 21/03/2020, at 4:00 AM, Adrian Klaver wrote:
>
> On 3/20/20 2:17 AM, Matt Magoffin wrote:
>> Hello,
>> Indexes:
>> "da_datum_pkey" UNIQUE, btree (node_id, ts, source_id) CLUSTER,
>> tablespace "solarindex"
>> "d
Hello,
I am experiencing a duplicate key violation in Postgres 9.6 on occasion for one
particular query, and I’m wondering where I’m going wrong. My table looks like
this:
Table "solardatum.da_datum"
Column | Type | Collation | Nullable | Default
-