On Sun, May 19, 2013 at 8:44 PM, Greg Smith wrote:
> On 5/13/13 6:36 PM, Mike McCann wrote:
>>
>> stoqs_march2013_s=# explain analyze select * from
>> stoqs_measuredparameter order by datavalue;
>>
>> QUERY PLAN
>> …
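The quoted ORDER BY over the whole table forces a sort of every row. A minimal
sketch of the usual remedies, assuming the table and column from the query
above (the index name is made up):

  -- An index on the sort column lets the planner return rows already
  -- ordered via an index scan instead of an explicit sort:
  CREATE INDEX stoqs_measuredparameter_datavalue_idx
      ON stoqs_measuredparameter (datavalue);

  -- If the sort has to happen anyway, more sort memory avoids an on-disk
  -- merge (session-level; the value is illustrative):
  SET work_mem = '256MB';

  EXPLAIN ANALYZE
  SELECT * FROM stoqs_measuredparameter ORDER BY datavalue;

Whether the index scan actually wins depends on how much of the table is
cached; for a full-table read a sequential scan plus sort can still be cheaper.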
On 5/16/13 7:52 PM, Cuong Hoang wrote:
The standby host will be disk-based, so it will be less vulnerable to power loss.
If it can keep up with replay from the faster master, that sounds like a
decent backup. Make sure you set up all write caches very carefully on
that system, because it's going…
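A quick way to sanity-check the durability settings being referred to, from
psql on the standby (a sketch; the comments give the conservative values, and
OS- and controller-level caches still have to be checked outside the database):

  SHOW fsync;               -- should be 'on'
  SHOW synchronous_commit;  -- 'on' unless the durability trade-off is deliberate
  SHOW full_page_writes;    -- 'on'
  SHOW wal_sync_method;     -- platform-specific; confirm it really flushes to disk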
On 5/16/13 8:06 PM, Tomas Vondra wrote:
Have you considered using a UPS? That would make the SSDs about as
reliable as SATA/SAS drives - the UPS may fail, but so may a BBU unit on
the SAS controller.
That's not true at all. Any decent RAID controller will have an option
to stop write-back caching…
On 5/17/13 7:26 AM, Rob Emery wrote:
I can keep decreasing the size of
the window I'm deleting but I feel I must be doing something either
fundamentally wrong or over-complicating this enormously.
I've had jobs like this where we ended up making the batch size cover
only 4 hours at a time. On…
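A sketch of such a windowed delete, assuming a hypothetical big_table with a
created_at timestamp column:

  -- One 4-hour window per transaction; committing between batches keeps
  -- locks short-lived and lets vacuum reclaim the dead rows.
  DELETE FROM big_table
  WHERE created_at >= '2013-01-01 00:00'
    AND created_at <  '2013-01-01 04:00';

An external script then advances the window, optionally sleeping between
batches to cap the I/O impact.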
Thanks for the suggestion, Tomas. We're about to set up WAL backup to Amazon S3.
I think this should cover all of our bases. At least for the moment, the
SAS-based standby seems to keep up with the master because that's its sole
purpose. We're not sending queries to the hot standby. We're also considering
switching…
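For reference, WAL archiving comes down to three postgresql.conf settings,
shown here via psql for illustration (wal-e is one tool commonly used for the
S3 leg; the exact command line is an assumption):

  SHOW wal_level;        -- must be at least 'archive'
  SHOW archive_mode;     -- 'on'
  SHOW archive_command;  -- e.g. 'envdir /etc/wal-e.d/env wal-e wal-push %p'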
Do you really need a running standby for fast failover? What about doing
plain WAL archiving? I'd definitely consider that, because even if you
set up a SAS-based replica, you can't use it in production as it won't
handle the load.
I think you could set up WAL archiving and, in case of a crash, just…
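The recovery path being described, sketched with placeholder paths: restore
the latest base backup, point recovery.conf at the archive, and let the
server replay the WAL:

  -- recovery.conf on the rebuilt server (config file, not SQL):
  --   restore_command = 'cp /mnt/wal_archive/%f "%p"'
  -- After startup, replay progress can be watched with:
  SELECT pg_is_in_recovery();
  SELECT pg_last_xlog_replay_location();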
On 17.5.2013 03:34, Mark Kirkwood wrote:
> On 17/05/13 12:06, Tomas Vondra wrote:
>> Hi,
>>
>> On 16.5.2013 16:46, Cuong Hoang wrote:
>
>>> Pro for the master server. I'm aware of the write cache issue on SSDs in
>>> case of power loss. However, our hosting provider doesn't offer any
>>> other choices…
Rob,
I'm going to make half of the list cringe at this suggestion, though I have
used it successfully.
If you can guarantee that the table will not be vacuumed during this cleanup
and that the rows you want deleted will not be updated, I would suggest using
the ctid column to facilitate the delete. Using the simple transaction…
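A sketch of the ctid-based batch delete being suggested, with a hypothetical
table and predicate; it is only safe under the no-vacuum / no-update guarantee
stated above, because vacuum and updates both move ctids:

  DELETE FROM big_table
  WHERE ctid = ANY (ARRAY(
      SELECT ctid
      FROM big_table
      WHERE created_at < '2013-01-01'   -- hypothetical "delete me" predicate
      LIMIT 10000));

Repeating this until it deletes zero rows works through the table in small,
TID-scan-driven batches.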
On Fri, May 17, 2013 at 4:26 AM, Rob Emery wrote:
> Hi All,
>
> We've got 3 quite large tables that, due to an unexpected surge in
> usage (!), have grown to about 10GB each, with 72, 32 and 31 million
> rows in them. I've been tasked with cleaning out about half of them; the
> problem I've got is that…
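When roughly half of a table has to go, rewriting it is often cheaper than
deleting row by row. A sketch under assumed names (big_table, a hypothetical
keep_condition); it needs an outage window, and indexes and foreign keys must
be recreated afterwards:

  BEGIN;
  CREATE TABLE big_table_new (LIKE big_table INCLUDING DEFAULTS);
  INSERT INTO big_table_new SELECT * FROM big_table WHERE keep_condition;
  ALTER TABLE big_table RENAME TO big_table_old;
  ALTER TABLE big_table_new RENAME TO big_table;
  COMMIT;
  -- Recreate indexes, re-point foreign keys, then DROP TABLE big_table_old;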