On 13 January 2017 at 18:17, Ivan Voras wrote:
> On 13 January 2017 at 12:00, Stuart Bishop
> wrote:
>
>> On 7 January 2017 at 02:33, Ivan Voras wrote:
>>>
>>> I forgot to add one more information, the da
se a filesystem
snapshot from before you promoted the replica to a primary. You do need to
ensure that the real primary keeps enough WAL logs to cover the period your
reporting database has been broken out.
Personally though, I'd take the opportunity to set up WAL shipping and
point-in-time recovery on your primary, and rebuild your reporting database
regularly from these backups. You get a fresh reporting database on
demand without overloading the primary, and you regularly test your backups.
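As a rough sketch of what that involves (assuming a 9.x-era primary; the
archive path and the recovery target below are placeholders, not taken from
your setup), on the primary:

    # postgresql.conf
    wal_level = replica          # 'archive' or 'hot_standby' on 9.5 and older
    archive_mode = on
    archive_command = 'cp %p /wal_archive/%f'   # ship each segment somewhere safe

and on the box where you rebuild the reporting database, restore a base
backup (pg_basebackup) and drop in:

    # recovery.conf
    restore_command = 'cp /wal_archive/%f %p'
    recovery_target_time = '2017-01-13 00:00:00'   # hypothetical cut-off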
--
Stuart Bishop
http://www.stuartbishop.net/
, can calculate exactly which store(s) need to
be hit, and returns the rows; to PostgreSQL it looks like 1 big table
with 1.3 trillion rows. And if it doesn't do that in 30ms you get to blame
yourself :)
--
Stuart Bishop
http://www.stuartbishop.net/
custom storage. A single table at the PG
level, you can shard the data yourself into 8 bazillion separate stores, in
whatever structure suits your read and write operations (maybe reusing an
embedded db engine, ordered flat file+log+index, whatever).
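One way to get that single-table illusion is a foreign data wrapper (just one
possible approach); a minimal sketch, where bigstore_fdw and everything about
it is entirely made up:

    CREATE EXTENSION bigstore_fdw;    -- hypothetical custom wrapper
    CREATE SERVER bigstore FOREIGN DATA WRAPPER bigstore_fdw
        OPTIONS (root_path '/data/shards');
    CREATE FOREIGN TABLE events (
        id       bigint,
        created  timestamptz,
        payload  text
    ) SERVER bigstore;

    -- the planner sees one table; the wrapper decides which shard(s) to read
    SELECT * FROM events WHERE id BETWEEN 1000000 AND 2000000;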
--
Stuart Bishop
http://www.stuartbishop.net/
tic,
or is pessimism a dial?
--
Stuart Bishop
http://www.stuartbishop.net/
the generic OS algorithms. And the size of the hot set,
since this gets pinned in shared_buffers. Urgh, so many variables.
--
Stuart Bishop
http://www.stuartbishop.net/
eem to be very high, but so far have not
posed a problem and may well be correct. I'm trusting pgtune here
rather than my outdated guesses.
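For reference, the sort of thing pgtune emits (illustrative values for a
hypothetical 32GB box, not the actual settings in question):

    shared_buffers = 8GB
    effective_cache_size = 24GB
    work_mem = 32MB
    maintenance_work_mem = 2GB
    checkpoint_completion_target = 0.9
    default_statistics_target = 100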
--
Stuart Bishop
http://www.stuartbishop.net/
the slave
having trouble, for example). Lowering the number of concurrent
connections in your pgbouncer connection pool could help here.
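The relevant knobs live in pgbouncer.ini; hypothetical values, purely to show
which settings are involved:

    [databases]
    mydb = host=127.0.0.1 port=5432 dbname=mydb

    [pgbouncer]
    pool_mode = transaction
    ; fewer simultaneous server connections; extra clients queue up instead
    default_pool_size = 20
    max_client_conn = 500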
--
Stuart Bishop
http://www.stuartbishop.net/
ith shared_buffers=5GB also seems
fine. Old load graphs show the load is comparable to when it was
running Ubuntu 10.04.
My big systems are still all on Ubuntu 10.04 (cut over in January, I expect).
--
Stuart Bishop
http://www.stuartbishop.net/
ial index help? CREATE UNIQUE INDEX foo_idx ON mytab(id)
WHERE id = EffectiveId
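(Purely as a hypothetical illustration: the planner can only use that partial
index when the query's WHERE clause implies the index predicate, e.g.

    SELECT * FROM mytab WHERE id = EffectiveId AND id = 42;

whereas a plain WHERE id = 42 can't use it, since nothing guarantees
id = EffectiveId for that row.)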
--
Stuart Bishop
http://www.stuartbishop.net/
can measure, with a guesstimate of
the disk cache hit rate. It would be lovely if these two variables
were separate. It would be even lovelier if the disk cache hit rate
could be probed at run time and didn't need setting at all, but I
suspect that isn't possible on some platforms.
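(The usual rule of thumb, rather than a measurement: set it to shared_buffers
plus however much the OS holds in its page cache, e.g. the "cached" column of
free(1) on Linux. A hypothetical 16GB box might end up with something like

    shared_buffers = 4GB
    effective_cache_size = 12GB

but those numbers are illustrative, not a recommendation.)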
--
PG on Opteron?
With PG 8.2 and 8.3, is it still pretty much limited to 8 cores, making 2 of
the quad-core Xeons redundant or detrimental?
I expect we will be running this hardware for 8.2, 8.3 and 8.4. Anyone aware
of anything that might change the landscape for 8.4?
--
Stuart Bishop <[EMAIL PROTECTED]>
Stuart Bishop wrote:
> I would like to understand what causes some of my indexes to be slower to
> use than others with PostgreSQL 8.1. On a particular table, I have an int4
> primary key, an indexed unique text 'name' column and a functional index of
> type text. The fun
Tom Lane wrote:
> Stuart Bishop <[EMAIL PROTECTED]> writes:
>> I would like to understand what causes some of my indexes to be slower to
>> use than others with PostgreSQL 8.1.
>
> I was about to opine that it was all about different levels of
> correlation betwee
=1999.456..1999.724 rows=50 loops=1)
   ->  Index Scan using person_sort_key_idx on person  (cost=0.00..41129.52 rows=527773 width=553) (actual time=0.079..1274.952 rows=527050 loops=1)
Total runtime: 1999.858 ms
(3 rows)
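(The planner's notion of that correlation is visible per column in pg_stats,
e.g.

    SELECT attname, correlation FROM pg_stats WHERE tablename = 'person';

values near +/-1 mean the on-disk row order closely follows that column's
order; values near 0 mean it doesn't.)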
--
Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
etter
performance by mounting the database from a RAM disk, or if I would be
better off keeping that RAM free and increasing the effective_cache_size
appropriately.
I'd also be interested in knowing if this is dependent on whether I am
running 7.4, 8.0 or 8.1.
--
Stuart Bishop <[EMAIL PROTECTED]>