Oh, sorry, I overlooked that part.
Maybe refreshing the stats with VACUUM FULL?
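In case it helps, a minimal sketch of the two variants, using a hypothetical table name big_table (note that VACUUM FULL rewrites the whole table under an ACCESS EXCLUSIVE lock, so it blocks reads and writes while it runs):

  -- Rewrite the table and its indexes, returning freed space to the OS;
  -- holds an ACCESS EXCLUSIVE lock for the duration.
  VACUUM FULL big_table;

  -- Lighter alternative: mark dead rows as reusable and refresh the planner
  -- statistics without the exclusive lock (the file itself is not shrunk).
  VACUUM ANALYZE big_table;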
2013/5/17 Robert Emery
> Hi Sékine,
>
> Unfortunately I'm not trying to empty the table completely, just
> delete about 10-15% of the data in it.
>
> Thanks,
>
> On 17 May 2013 14:11, Sékine Coulibaly wrote:
> > Rob,
>
Hi All,
We've got three quite large tables that, due to an unexpected surge in
usage (!), have grown to about 10GB each, with 72, 32 and 31 million
rows respectively. I've been tasked with cleaning out about half of the
data in them; the problem is that even deleting the first 1,000,000 rows
seems to take an unreasonably long time.
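One pattern that often comes up for this kind of cleanup is to delete in small batches, so each transaction stays short and the dead rows can be vacuumed between rounds. A minimal sketch, assuming a hypothetical table big_table with a primary key id and an indexed created_at column standing in for whatever predicate actually selects the rows to drop:

  -- Remove up to 10,000 matching rows per statement; repeat until it
  -- reports "DELETE 0", vacuuming between rounds so the dead rows
  -- become reusable instead of accumulating.
  DELETE FROM big_table
  WHERE id IN (
      SELECT id
      FROM big_table
      WHERE created_at < DATE '2013-01-01'
      LIMIT 10000
  );

  VACUUM big_table;

When most of a table is going, copying the rows you want to keep into a new table and swapping it in is usually far cheaper than a huge DELETE, though plain TRUNCATE is ruled out here since only part of the data is being removed.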
On Thu, May 16, 2013 at 7:46 AM, Cuong Hoang wrote:
> For our application, a few seconds of data loss is acceptable.
If a few seconds of data loss is acceptable, I would seriously look at
the synchronous_commit setting and think about turning that off rather
than risk silent corruption with non-enterprise SSDs.
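For reference, synchronous_commit is a user-settable parameter, so it can be changed cluster-wide in postgresql.conf (picked up with a reload, no restart needed) or scoped to a session or transaction. A minimal sketch; with it off, a crash can lose the last few commits (on the order of a few times wal_writer_delay) but, unlike fsync = off, it cannot corrupt the database:

  # postgresql.conf
  synchronous_commit = off    # report commit before the WAL is flushed to disk

  -- or per session / per transaction:
  SET synchronous_commit = off;
  SET LOCAL synchronous_commit = off;   -- only for the current transaction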