Luca Ferrari writes:
> On Wed, Mar 12, 2025 at 12:54 PM Artur Zakirov wrote:
>> In your case `base/357283/365810` file is a new index file. For some
>> reason Postgres tries to read the new index. I suppose this is because
>> during reading the table `t` within the function `f_t` it tries to
On 3/12/25 14:31, Luca Ferrari wrote:
On Wed, Mar 12, 2025 at 12:54 PM Artur Zakirov wrote:
I can reproduce this with the table `t` on PG 15.10.
I didn't mention I'm running 16.6, but I'm pretty sure it is
reproducible on other versions too.
In your case `base/357283/365810` file is a new index file.
On Wed, Mar 12, 2025 at 12:54 PM Artur Zakirov wrote:
>
> I can reproduce this with the table `t` on PG 15.10.
I didn't mention I'm running 16.6, but I'm pretty sure it is
reproducible on other versions too.
>
> In your case `base/357283/365810` file is a new index file. For some
> reason Postgres tries to read the new index.
Hey,
On Wed, 12 Mar 2025 at 10:11, Luca Ferrari wrote:
> Now, according to the documentation, the function f_t is immutable
> since it is not modifying the database, so what is going on? And why
> is the same function working if the table does not have the constraint on
> the column?
I can reproduce this with the table `t` on PG 15.10.
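One thing worth spelling out here: per the PostgreSQL documentation, IMMUTABLE is a stronger promise than "does not modify the database". It also means the result may never depend on database contents, so a function that reads table `t` should be declared STABLE (or VOLATILE) instead, and CREATE INDEX would then reject it with "functions in index expression must be marked IMMUTABLE". A minimal way to see how the function is actually declared, assuming only the function name `f_t` from the thread:

SELECT proname, provolatile   -- 'i' = immutable, 's' = stable, 'v' = volatile
FROM pg_proc
WHERE proname = 'f_t';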
;
RETURN return_value;
END
$CODE$
LANGUAGE plpgsql
IMMUTABLE;
CREATE INDEX IF NOT EXISTS idx_tt ON tt( f_tt( pk ) );
CREATE INDEX IF NOT EXISTS idx_t ON t( f_t( pk ) );
The last index, created on table t, throws the error:
ERROR: could not read block 0 in file "base/357283/365810": read only 0 of 8192 bytes
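The excerpt above only shows the tail of the definitions, so the following is a hedged reconstruction rather than the poster's exact code: the object names (t, pk, f_t, idx_t, return_value, $CODE$) come from the thread, while the column, sample rows and function body are assumptions. It illustrates the pattern under discussion: a plpgsql function that reads its own table but is labelled IMMUTABLE, used in an expression index.

-- Hypothetical setup; column v and the sample rows are invented for the sketch.
CREATE TABLE t ( pk int PRIMARY KEY, v text );
INSERT INTO t SELECT i, 'row ' || i FROM generate_series( 1, 10 ) i;

-- Mislabelled: the function reads table t, so it is not truly IMMUTABLE.
CREATE FUNCTION f_t( i int )
RETURNS text
AS $CODE$
DECLARE
  return_value text;
BEGIN
  SELECT v
    INTO return_value
    FROM t
    WHERE pk = i
    LIMIT 1;
  RETURN return_value;
END
$CODE$
LANGUAGE plpgsql
IMMUTABLE;

-- Building the expression index evaluates f_t for every row; per the
-- explanation quoted above, the query inside f_t may then try to read the
-- very index that is being built, whose file is still empty, which matches
-- "could not read block 0 ... read only 0 of 8192 bytes". Whether it fails
-- appears to depend on the server version and on the constraint already
-- present on pk.
CREATE INDEX IF NOT EXISTS idx_t ON t( f_t( pk ) );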
>>> While we are looking for a suitable backup to recover from, I hope this
>>> community may have some other advice on forward steps in case we cannot
>>> restore.
>>>
>>> RCA: Unexpected shutdown due to critical power failure
>>>
>>> Current Issue: The file base/16509/17869 is zero bytes in size.
such as the following for each of the 5 records:
insert into ApprovalStageDefinition values (1, 'Stage One', 'Stage One');
The error message when running this query is:
ERROR: could not read block 0 in file "base/16509/17869": read only 0
of 8192 bytes
>> restore.
>>
>> RCA: Unexpected shutdown due to critical power failure
>>
>> Current Issue: The file base/16509/17869 is zero bytes in size.
>>
>> The error message when running this query is:
ERROR: could not read block 0 in file "base/16509/17869": read only 0 of 8192 bytes
such as the following for each of the 5 records:
> insert into ApprovalStageDefinition values (1, 'Stage One', 'Stage One');
>
> The error message when running this query is:
> ERROR: could not read block 0 in file "base/16509/17869": read only 0 of
> 8192 bytes
>
The error message when running this query is:
ERROR: could not read block 0 in file "base/16509/17869": read only 0 of 8192
bytes
The file does exist on the file system, with zero bytes, as do the associated
fsm and vm files.
Postgres does allow us to describe the table:
\d ApprovalStageDefinition
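(\d keeps working because psql only reads the system catalogs, not the damaged data file.) A small sketch, assuming the default tablespace and the object name from the report, for confirming which relation the zero-byte file base/16509/17869 actually belongs to and whether it is the table's heap or one of its indexes:

-- Map the filenode from the error message back to a relation
-- (first argument 0 = the database's default tablespace).
SELECT pg_filenode_relation( 0, 17869 );

-- Compare with the table's own file and with the files of its indexes.
SELECT pg_relation_filepath( 'ApprovalStageDefinition' );

SELECT indexrelid::regclass AS index_name,
       pg_relation_filepath( indexrelid ) AS index_file
FROM pg_index
WHERE indrelid = 'ApprovalStageDefinition'::regclass;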
Hi,
On 2018-08-02 13:00:16 -0700, Peter Geoghegan wrote:
> On Tue, Jul 31, 2018 at 9:00 PM, Andres Freund wrote:
> > I don't think that's particularly relevant. We should always register an
> > invalidation before the relevant CommandCounterIncrement(), because that
> > is what makes catalog changes visible
On Tue, Jul 31, 2018 at 9:00 PM, Andres Freund wrote:
> I don't think that's particularly relevant. We should always register an
> invalidation before the relevant CommandCounterIncrement(), because that
> is what makes catalog changes visible, and therefore requires
> registering invalidations fo
Hi,
On 2018-07-31 19:29:37 -0700, Peter Geoghegan wrote:
> On Tue, Jul 31, 2018 at 7:02 PM, Andres Freund wrote:
> > Not a fan of this comment. It doesn't really explain that well why it's
> > needed here, but then goes on to a relatively general explanation of why
> > cache invalidation is necessary
On Tue, Jul 31, 2018 at 7:02 PM, Andres Freund wrote:
> Maybe expand a bit on this by saying that it's more likely "because
> plan_create_index_workers() triggers a relcache entry to be (re-)built,
> which previously did only happen in edge cases" or such?
Okay.
> Not a fan of this comment. It doesn't really explain that well why it's
> needed here, but then goes on to a relatively general explanation of why
> cache invalidation is necessary
On 2018-07-31 18:48:23 -0700, Peter Geoghegan wrote:
> On Mon, Jul 9, 2018 at 11:32 AM, Andres Freund wrote:
> > I assume we'll have to backpatch this issue, so I think it'd probably a
> > good idea to put a specific CacheInvalidateHeapTuple() in there
> > explicitly in the back branches, and do the larger fix in 12.
On Mon, Jul 9, 2018 at 11:32 AM, Andres Freund wrote:
> I assume we'll have to backpatch this issue, so I think it'd probably a
> good idea to put a specific CacheInvalidateHeapTuple() in there
> explicitly in the back branches, and do the larger fix in 12. ISTM
> there's some risks that it'd caus
On 2018-07-25 19:27:47 -0400, Tom Lane wrote:
> Andres Freund writes:
> > On 2018-06-28 08:02:10 -0700, Andres Freund wrote:
> >> I wonder why we don't just generally trigger invalidations to an
> >> indexes' "owning" relation in CacheInvalidateHeapTuple()?
>
> > Tom, do you have any comments about the above?
On Wed, Jul 25, 2018 at 4:03 PM, Andres Freund wrote:
> Peter, given that your patch made this more likely, and that you're a
> committer these days, I'm opening an open items entry, and assign it to
> you. Does that sound ok?
I intend to follow through on this soon. I have been distracted by
pro
Andres Freund writes:
> On 2018-06-28 08:02:10 -0700, Andres Freund wrote:
>> I wonder why we don't just generally trigger invalidations to an
>> indexes' "owning" relation in CacheInvalidateHeapTuple()?
> Tom, do you have any comments about the above?
It seems like an ugly and fragile hack, off
Hi,
On 2018-06-28 08:02:10 -0700, Andres Freund wrote:
> I believe this happens because there's currently no relcache
> invalidation registered for the main relation, until *after* the index
> is built. Normally it'd be the CacheInvalidateRelcacheByTuple(tuple) in
> index_update_stats(), which is
Hi,
On 2018-07-09 12:06:21 -0700, Peter Geoghegan wrote:
> > I assume we'll have to backpatch this issue, so I think it'd probably a
> > good idea to put a specific CacheInvalidateHeapTuple() in there
> > explicitly in the back branches, and do the larger fix in 12. ISTM
> > there's some risks tha
On Mon, Jul 9, 2018 at 11:32 AM, Andres Freund wrote:
>> Note that there is a kludge within plan_create_index_workers() that
>> has us treat the heap relation as an inheritance parent, just to get a
>> RelOptInfo for the heap relation without running into similar trouble
>> with the index in get_r
Hi,
On 2018-07-09 09:59:58 -0700, Peter Geoghegan wrote:
> On Thu, Jun 28, 2018 at 8:02 AM, Andres Freund wrote:
> > I believe this happens because there's currently no relcache
> > invalidation registered for the main relation, until *after* the index
> > is built. Normally it'd be the CacheInvalidateRelcacheByTuple(tuple) in
> > index_update_stats()
On Thu, Jun 28, 2018 at 8:02 AM, Andres Freund wrote:
> Peter, looks like you might be involved specifically.
Seems that way.
> This however seems wrong. Clearly the relation's index list is out of
> date.
>
> I believe this happens because there's currently no relcache
> invalidation registered for the main relation, until *after* the index
> is built.
;;
> end
> $body$
> language plpgsql immutable;
> CREATE FUNCTION
> testdb=> create index idx_fake on t ( f_fake( pk ) );
> CREATE INDEX
> testdb=> drop index idx_fake;
> DROP INDEX
>
> testdb=> create index idx_fake on t ( f_fake( pk ) );
> 2018-06-28 1
testdb=> create index idx_fake on t ( f_fake( pk ) );
CREATE INDEX
testdb=> drop index idx_fake;
DROP INDEX
testdb=> create index idx_fake on t ( f_fake( pk ) );
2018-06-28 10:23:18.275 CEST [892] ERROR: could not read block 0 in
file "base/16392/16538": read only 0 of 8192 bytes
On Wed, Jun 27, 2018 at 10:44 PM Andres Freund wrote:
> But I also can't reproduce it either on 10.4, 10-current, master. Did
> you build from source? Packages? Any extensions? Is there anything
> missing from the above instruction to reproduce this?
Somehow today I cannot reproduce it by myself
t; language plpgsql immutable;
>
> Of course, f_fake is not immutable.
> When on 10.4 or 11 beta 1 I try to create an index on this nasty
> crappy function:
>
> create index idx_fake on t ( f_fake( pk ) );
>
> ERROR: could not read block 0 in file "base/16392/16444": read only 0
> of 8192 bytes
On Wed, Jun 27, 2018 at 11:35 AM, Luca Ferrari wrote:
> If I then disconnect and reconnect I'm able to issue the select and
> get back the results. But if I issue a reindex I got the same error
> and the table "becomes unreadable" for the whole session.
> On 10.3 the table is never locked for the
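Put as a rough psql sequence (object names as reported elsewhere in the thread; the exact statements are assumptions), the behaviour described above would look like this:

\c testdb                  -- a fresh connection: the select works again
SELECT * FROM t LIMIT 1;   -- assumed form of "the select" from the report
REINDEX TABLE t;           -- reported to raise the same "could not read block 0"
                           -- error and leave the table unreadable for the
                           -- rest of the session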
function:
create index idx_fake on t ( f_fake( pk ) );
ERROR: could not read block 0 in file "base/16392/16444": read only 0
of 8192 bytes
CONTEXT: SQL statement "select tfrom t where pk =
i limit 1"
PL/pgSQL function f_fake(integer) line 5 at SQL statement