The first query timed out, but the second one returned this:

0 "623140"
1 "53"
2 "12"
3 "10"
4 "1"
5 "1"
7 "1"
[null] "162"

I'm not quite sure what that means, but if there is just a small number of
overly-large records, I might be able to delete them, if I can find them.
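
Maybe something like this would turn them up. This is just a sketch: it
assumes docfile has a primary key, which I'm calling "id" here purely
for illustration.

  -- Hypothetical query to list the largest rows; "id" stands in for
  -- whatever primary key column docfile actually has.
  SELECT id, octet_length(docfilecontents) AS size_bytes
  FROM docfile
  ORDER BY octet_length(docfilecontents) DESC NULLS LAST
  LIMIT 10;

If that confirms there are only a few offenders, I could presumably
delete them by that key, e.g. DELETE FROM docfile WHERE id = ...;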

On Mon, Nov 5, 2018 at 12:54 PM Daniel Verite <dan...@manitou-mail.org>
wrote:

>         Charles Martin wrote:
>
> > SELECT max(length(docfilecontents::text)) FROM docfile;
> > and after a very long time, got:
> > ERROR: invalid memory alloc request size 1636085512 SQL state: XX000
>
> It would mean that at least one row has a "docfilecontents"
> close to 0.5GB in size. Or that the size fields in certain rows
> are corrupted, although that's less plausible if you have
> no reason to suspect hardware errors.
>
> Does the following query work:
>
>  SELECT max(octet_length(docfilecontents)) FROM docfile;
>
> or maybe a histogram by size in hundreds of megabytes:
>
>  SELECT octet_length(docfilecontents)/(1024*1024*100),
>         count(*)
>  FROM docfile
>  GROUP BY octet_length(docfilecontents)/(1024*1024*100);
>
> Note that the error message above does not say that there's not enough
> free memory, it says that it won't even try to allocate that much, because
> 1636085512 is over the "varlena limit" of 1GB.
> AFAICS I'm afraid that this table, as it is now, cannot be exported
> by pg_dump even if you had enough free memory, because no individual
> row in COPY output can exceed 1GB in text format.
>
>
> Best regards,
> --
> Daniel Vérité
> PostgreSQL-powered mailer: http://www.manitou-mail.org
> Twitter: @DanielVerite
>
