Hello

I have worked with 80K-element float arrays without any problem.

There are a few possible issues:

* detoasting needs a lot of memory - this can be a problem with many
parallel queries
* there is a risk of repeated detoasting - some unfortunate usage
patterns in plpgsql can be slow - it is solvable, but you have to
identify the issue first
* any update of a large array is slow - so these arrays are best for
write-once data
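To illustrate the repeated-detoast point, here is a minimal PL/pgSQL sketch (the table `samples` and its `vals` column are hypothetical, just for illustration):

```sql
-- Hypothetical table holding one large float8[] per row:
-- CREATE TABLE samples (id int PRIMARY KEY, vals float8[]);

CREATE OR REPLACE FUNCTION sum_slow(p_id int) RETURNS float8 AS $$
DECLARE
  s float8 := 0;
  n int;
  i int;
BEGIN
  SELECT array_length(vals, 1) INTO n FROM samples WHERE id = p_id;
  -- Each per-element lookup below re-reads the row, so the whole
  -- array can be detoasted again on every iteration - O(n^2) work
  -- for a large array.
  FOR i IN 1 .. n LOOP
    s := s + (SELECT vals[i] FROM samples WHERE id = p_id);
  END LOOP;
  RETURN s;
END;
$$ LANGUAGE plpgsql;

CREATE OR REPLACE FUNCTION sum_fast(p_id int) RETURNS float8 AS $$
DECLARE
  a float8[];
BEGIN
  -- Detoast once into a local variable, then work on the copy.
  SELECT vals INTO a FROM samples WHERE id = p_id;
  RETURN (SELECT sum(x) FROM unnest(a) AS x);
END;
$$ LANGUAGE plpgsql;
```

The general rule is the same in any plpgsql code: pull the large value into a local variable (or use a set-returning function like unnest) instead of subscripting the stored column repeatedly.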

Regards

Pavel


2014-02-14 23:07 GMT+01:00 lup <robjsarg...@gmail.com>:

> Would 10K elements of float[3] make any difference in terms of read/write
> performance?
> Or 240K byte array?
>
> Or are these all functionally the same issue for the server? If so,
> intriguing possibilities abound. :)
>
>
>
>
>
> --
> View this message in context:
> http://postgresql.1045698.n5.nabble.com/Is-it-reasonable-to-store-double-arrays-of-30K-elements-tp5790562p5792099.html
> Sent from the PostgreSQL - general mailing list archive at Nabble.com.
>
>
