Hi all,

I am curious whether there is a significant speedup from partitioning like
this when most of the queries run against the table are going to be
table-wide. I won't drop the data, and the data won't really grow. Do I get
better performance with one large table and large indexes, or with many
small tables and many small indexes?
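
For concreteness, the layout I have in mind follows the inheritance scheme
from the chapter linked below; a minimal sketch, with all table names,
columns, and range boundaries purely illustrative:

CREATE TABLE measurements (
    id      integer NOT NULL,
    payload text
);

-- Each child holds one slice of the key space; the CHECK constraint
-- lets the planner skip irrelevant children when constraint_exclusion
-- is enabled.
CREATE TABLE measurements_p0 (
    CHECK (id >= 0 AND id < 1000000)
) INHERITS (measurements);

CREATE TABLE measurements_p1 (
    CHECK (id >= 1000000 AND id < 2000000)
) INHERITS (measurements);

-- Route inserts on the parent to the matching child.
CREATE OR REPLACE FUNCTION measurements_insert() RETURNS trigger AS $$
BEGIN
    IF NEW.id < 1000000 THEN
        INSERT INTO measurements_p0 VALUES (NEW.*);
    ELSE
        INSERT INTO measurements_p1 VALUES (NEW.*);
    END IF;
    RETURN NULL;  -- the row is stored in the child, not the parent
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER measurements_insert_trg
    BEFORE INSERT ON measurements
    FOR EACH ROW EXECUTE PROCEDURE measurements_insert();

My understanding is that with constraint_exclusion = partition, a query
like SELECT * FROM measurements WHERE id = 42 should touch only
measurements_p0, whereas a table-wide query still has to scan every child.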

Thanks,
~Ben

On Sat, Jun 16, 2012 at 2:13 AM, Gabriele Bartolini <
gabriele.bartol...@2ndquadrant.it> wrote:

> Hi Benedict,
>
> Il 15/06/12 20:58, Benedict Holland ha scritto:
>
>> The tables would have to be specified with a primary key constraint
>> falling within a given range. A view would then be created to manage all
>> of the small tables, with triggers handling insert and update operations.
>> Selects would have to be view-specific, but that is really cheap compared
>> to updates. That should have the additional benefit of only hitting a
>> specific table (or tables) with an update.
>>
>> Basically, I don't see how this particular configuration breaks, and I
>> wonder whether PostgreSQL already has this ability, as it seems very
>> useful for managing very large data sets.
>>
>
> What you are looking for is called 'partitioning' (horizontal
> partitioning). I suggest that you read this chapter:
> http://www.postgresql.org/docs/9.1/static/ddl-partitioning.html
>
> Cheers,
> Gabriele
>
> --
>  Gabriele Bartolini - 2ndQuadrant Italia
>  PostgreSQL Training, Services and Support
>  gabriele.bartolini@2ndQuadrant.it | www.2ndQuadrant.it
>
>
