My table currently uses 62 GB of storage and has 450M rows. This
narrow table has a PK on (ParentID, ChildNumber), with between 20K and
50K child rows per parent.
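For concreteness, a minimal sketch of such a table (the payload column is my assumption; only the key is described above):

    CREATE TABLE child (
        ParentID    integer NOT NULL,
        ChildNumber integer NOT NULL,
        payload     double precision,  -- stand-in for the real columns
        PRIMARY KEY (ParentID, ChildNumber)
    );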
The data is inserted daily, rarely modified, never deleted. The performance
of modifications is not an issue. The only
Bill,
Regarding "SELECT performance improve nearly linerally to the number of
partitions," - can you elaborate why? If I split my table into several
partitions, even the index depth may stay the same, because the PK is
narrow, it only consists of 2 4-byte integers.
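With keys this narrow, an 8 KB index page holds a few hundred entries, so roughly 450M rows give a B-tree about four levels deep, and a partition of a few million rows may be only one level shallower. The actual depth can be checked with the pageinspect extension (the index name below is a guess):

    CREATE EXTENSION IF NOT EXISTS pageinspect;
    -- "level" is the height of the B-tree above its leaf level
    SELECT level FROM bt_metap('child_pkey');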
My selects are distributed more
Kevin,
For now, all the data fits in the cache: the box has 384 GB of RAM. But I
want to be ready for later, when we have more data. It is easier to refactor
my table now, when it is still smallish.
Children are only added to recently added parents, and they are all
added/updated/deleted at once.
Kevin,
What would be the advantages of partitioning on ranges of ParentID? Each
query will touch at most one partition. I may or may not get PK indexes
that are one level shallower.
I understand that I will CLUSTER these smaller tables and benefit from that.
Other than clustering, what are other
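For reference, a minimal sketch of one range partition using the inheritance scheme (the table name and range boundary are made up):

    -- one partition holding a slice of ParentID values
    CREATE TABLE child_p01 (
        CHECK (ParentID >= 0 AND ParentID < 100000)
    ) INHERITS (child);
    ALTER TABLE child_p01 ADD PRIMARY KEY (ParentID, ChildNumber);

    -- with constraint_exclusion = partition (the default), a query
    -- filtered on ParentID scans only the matching partition
    SELECT * FROM child WHERE ParentID = 42;

    -- each partition is clustered on its own PK independently
    CLUSTER child_p01 USING child_p01_pkey;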
To deploy my changes, I am using apgdiff. For that, I am invoking the
following command:
pg_dump --host=my_dev_server --username=myself --no-password --schema-only
--file=C:\Temp\mydb_old.sql my_test_db
Two objects are present in my test database, but not in the dump file. I can
invoke them from
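One common cause is that the objects belong to an extension (pg_dump emits extension members only as part of CREATE EXTENSION) or live in a schema that is being filtered out. Assuming the two objects are functions, a catalog query like this shows where they live (the names are placeholders):

    SELECT n.nspname, p.proname
    FROM pg_proc p
    JOIN pg_namespace n ON n.oid = p.pronamespace
    WHERE p.proname IN ('missing_func_1', 'missing_func_2');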
Every row of my table has a double[] array of approximately 30K numbers. I
have run a few tests, and so far everything looks good.
I am not pushing the limits here, right? It should be perfectly fine to
store arrays of 30K double numbers, correct?
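For scale: 30,000 float8 values at 8 bytes each come to about 240 KB per array, well past the ~2 KB TOAST threshold, so each array is stored out of line and compressed by default. The stored size of one value is easy to check (table and column names assumed):

    -- compressed, out-of-line (TOASTed) size of one stored array
    SELECT pg_column_size(vals) FROM my_table LIMIT 1;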
No large deletes, just inserts/updates/selects. What are the potential
problems with deletes?
I will always be reading/writing the whole array. The table is about 40 GB. It
replaces two tables, parent and child, that together used about 160 GB.
I would like to give my users the ability to invoke read-only functions and
select statements, so that they can easily see the data. Both the users and
I have experience mostly with SQL Server, where anyone can keep around 30
connections open without much thinking.
Since too many open connections seems to
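A minimal sketch of a locked-down read-only role for this (role, schema, and user names are made up):

    CREATE ROLE readers NOLOGIN;
    GRANT USAGE ON SCHEMA public TO readers;
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO readers;
    GRANT readers TO alice;  -- each user inherits read-only rights

    -- cap per-user connections if pooling is not in place
    ALTER ROLE alice CONNECTION LIMIT 5;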
Hi Pavel,
1. I believe we have lots of memory. How much is needed to read one array
of 30K float numbers?
2. What do we need to do to avoid possible repeated detoasting, and what is it?
3. We are not going to update individual elements of the arrays. We might
occasionally replace the whole thing. When we ben
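For reference on question 1, back of the envelope: one 30K float8 array is 30,000 x 8 = 240,000 bytes, about 240 KB uncompressed, and detoasting needs the whole value in memory at once ("detoast" is the server reading a value stored out of line and decompressing it if needed). This can be verified without a table:

    -- uncompressed in-memory size of a 30K float8 array
    SELECT pg_column_size(array_agg(x::float8))
    FROM generate_series(1, 30000) AS x;
    -- ~240 KB: 240,000 bytes of data plus a small array header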