Rambabu V wrote:
> While taking pg_dump we are getting the error message "cache lookup
> failed for function 7418447". Trying "select * from pg_proc where
> oid = 7418447" returns zero rows. Please help us with this.
That means that some catalog data are corrupted.
If possible, restore from a backup.
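Before restoring, it can be worth locating what still references the vanished
function. A minimal sketch of that hunt, assuming the dangling OID is 7418447
as reported (the catalogs queried here are standard, but which one holds the
stale reference varies from case to case):

    -- Look for dependency records that point at the missing pg_proc row:
    SELECT classid::regclass, objid, refclassid::regclass, refobjid
    FROM pg_depend
    WHERE objid = 7418447 OR refobjid = 7418447;

    -- Common places a function OID can be referenced directly:
    SELECT aggfnoid FROM pg_aggregate WHERE aggfnoid = 7418447::oid;  -- aggregates
    SELECT oid, tgname FROM pg_trigger WHERE tgfoid = 7418447;        -- triggers
    SELECT oid, oprname FROM pg_operator
    WHERE oprcode = 7418447::oid
       OR oprrest = 7418447::oid
       OR oprjoin = 7418447::oid;                                     -- operators

Anything that turns up identifies the object whose definition still names the
missing function; that is usually the thing that has to be dropped or recreated
before pg_dump can get through.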
Hi Tom,
Thanks for your reply; that's very helpful and informative.

Although there's no way to have any useful pg_statistic stats if you won't
do an ANALYZE, the planner nonetheless can see the table's current
physical size, and what it normally does is to multiply the last-reported
tuple density (reltuples/relpages) by the current number of pages.
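The inputs the planner scales from are visible in pg_class; tuple density is
just reltuples divided by relpages. A quick look, with a hypothetical table
name:

    -- reltuples and relpages are left behind by the last VACUUM/ANALYZE;
    -- the planner multiplies this density by the table's current page count.
    SELECT relname, reltuples, relpages,
           reltuples / NULLIF(relpages, 0) AS tuples_per_page
    FROM pg_class
    WHERE relname = 'queue';   -- hypothetical table name

If relpages is zero because the table has never been vacuumed or analyzed,
there is no density to scale, and the planner has to fall back on default
assumptions.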
David Wheeler writes:
> I'm having performance trouble with a particular set of queries. It goes a
> bit like this:
> 1) queue table is initially empty, and very narrow (1 bigint column)
> 2) we insert ~30 million rows into queue table
> 3) we do a join with queue table to delete from another table (delete from
> a using ...)
On Wed, Jun 27, 2018 at 03:45:26AM +0000, David Wheeler wrote:
> Hi All,
>
> I’m having performance trouble with a particular set of queries. It goes a
> bit like this:
>
> 1) queue table is initially empty, and very narrow (1 bigint column)
> 2) we insert ~30 million rows into queue table
> 3) we do a join with queue table to delete from another table (delete from
> a using ...)
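The standard remedy for this pattern is to refresh statistics between the bulk
insert and the join, so the planner is not still working from the stats of an
empty table. A minimal sketch (the table name comes from the description
above):

    -- After the ~30 million row insert, before the big delete join:
    ANALYZE queue;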
An OID is a transient identifier that is only guaranteed to be consistent
within a single query execution.
https://www.postgresql.org/docs/current/static/datatype-oid.html
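A small demonstration of the point, using a hypothetical throwaway function:
recreating an object assigns it a new OID, so a remembered OID can dangle.

    CREATE FUNCTION f() RETURNS int LANGUAGE sql AS 'SELECT 1';
    SELECT 'f()'::regprocedure::oid;   -- the OID assigned at creation
    DROP FUNCTION f();
    CREATE FUNCTION f() RETURNS int LANGUAGE sql AS 'SELECT 1';
    SELECT 'f()'::regprocedure::oid;   -- a different, freshly assigned OID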
On Thu, Jun 28, 2018 at 12:50 AM, Steve Crawford
<scrawf...@pinpointresearch.com> wrote:
>
> On Wed, Jun 27, 2018 at 8:31 AM Rambabu V wrote:
>
>> Hi Team,
>>
On Wed, Jun 27, 2018 at 8:31 AM Rambabu V wrote:
> Hi Team,
>
> While taking pg_dump we are getting the error message "cache lookup
> failed for function 7418447". Trying "select * from pg_proc where
> oid = 7418447" returns zero rows. Please help us with this.
>
Searching on that error message yields ...
Hi Team,
While taking pg_dump we are getting the error message "cache lookup failed
for function 7418447". Trying "select * from pg_proc where oid = 7418447"
returns zero rows. Please help us with this.
Hi All,
I’m having performance trouble with a particular set of queries. It goes a
bit like this:
1) queue table is initially empty, and very narrow (1 bigint column)
2) we insert ~30 million rows into queue table
3) we do a join with queue table to delete from another table (delete from a
using ...)
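For reference, the delete-via-join step in the fragment above has this shape
(a sketch: "a" is the target table named in the quote, while the queue column
and join key are assumptions):

    DELETE FROM a
    USING queue q
    WHERE a.id = q.id;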
Hi Laurenz,
You’re right about the table being bloated; the videos.description column is
large. I thought about moving it to a separate table, but having an index only
on the columns used in the query seems to have compensated for that already.
Thank you.
> On Jun 27, 2018, at 10:19 AM, Laurenz Albe wrote: ...
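The compensating index being described would look something like this: a
narrow index over just the queried columns can serve index-only scans, so the
wide, bloated heap rows are never read (the column names are assumptions,
since the actual query isn't shown here):

    CREATE INDEX videos_channel_id_id_idx ON videos (channel_id, id);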
Roman Kushnir wrote:
> The following basic inner join is taking too much time for me. (I’m using
> count(videos.id) instead of count(*) because my actual query looks
> different, but I simplified it here to the essence.)
> I’ve tried following random people's suggestions and adjusting the
> random_page_cost ...
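The query being described is presumably of roughly this shape (a hedged
reconstruction; the channels table and the join column are assumptions based
on the message):

    SELECT count(videos.id)
    FROM videos
    JOIN channels ON channels.id = videos.channel_id;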