On 12 February 2012 22:28, Peter van Hardenberg wrote:
> Yes, I think if we could normalize, anonymize, and randomly EXPLAIN
> ANALYZE 0.1% of all queries that run on our platform we could look for
> bad choices by the planner. I think the potential here could be quite
> remarkable.
Tom Lane sugg…
On Sun, Feb 12, 2012 at 12:01 PM, Joshua Berkus wrote:
> You'd pretty much need to do large-scale log harvesting combined with samples
> of query concurrency taken several times per minute. Even that won't
> "normalize" things the way you want, though, since all queries are not equal
> in terms of …
> Is there an easy and unintrusive way to get such a metric as the
> aggregated query times? And to normalize it for how much work the
> system happened to be doing at the time?
You'd pretty much need to do large-scale log harvesting combined with samples
of query concurrency taken several times per minute. …
On Thu, Feb 9, 2012 at 5:29 PM, Scott Marlowe wrote:
> On Thu, Feb 9, 2012 at 3:41 PM, Peter van Hardenberg wrote:
>> Hmm, perhaps we could usefully aggregate auto_explain output.
>
> How about something where you run a site at random_page_cost of x,
> then y, then z and you do some aggregating of query times in each. A
> scatter plot should tell you lots.
On Sat, Feb 11, 2012 at 8:26 AM, Jeff Janes wrote:
> By the time you realize the query is running long, it is too late to
> start analyzing it. And without analyzing it, you probably can't get
> the information you need.
>
> Maybe with the timing = off feature, it might make sense to just …
On Thu, Feb 9, 2012 at 2:41 PM, Peter van Hardenberg wrote:
> Hmm, perhaps we could usefully aggregate auto_explain output.
By the time you realize the query is running long, it is too late to
start analyzing it. And without analyzing it, you probably can't get
the information you need.
Maybe with the timing = off feature, it might make sense to just …
On Fri, Feb 10, 2012 at 5:40 PM, Josh Berkus wrote:
> Peter,
>
>> We've funded some work by Peter Geoghegan to make pg_stat_statements
>> more useful, but the patch is currently sitting in the commitfest in
>> need of a champion. I'd very much like to see it landed.
>
> Ok, let me review it then ...
Peter,
> We've funded some work by Peter Geoghegan to make pg_stat_statements
> more useful, but the patch is currently sitting in the commitfest in
> need of a champion. I'd very much like to see it landed.
Ok, let me review it then ...
> Between that work, 9.2, and Dimitri's extension whitelisting …
On Fri, Feb 10, 2012 at 11:32 AM, Josh Berkus wrote:
> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> So one thought would be to add in pg_stat_statements to your platform
> ... something I'd like to see Heroku do anyway. Then you can sample
> this across dozens (or hundreds) of user databases, …
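For reference, the kind of periodic sampling this suggests might look like the following sketch against pg_stat_statements (a sketch only: it assumes the extension is loaded and created in each database, and uses the pre-9.2 column names, which later releases rename and extend):

```sql
-- Sketch: surface the most expensive normalized statements, to be
-- sampled periodically across many user databases.
SELECT query,
       calls,
       total_time,                        -- cumulative execution time
       total_time / NULLIF(calls, 0) AS avg_time_per_call
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 20;
```

Collected across a fleet, the top entries would show where planner choices cost the most in aggregate.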
On Friday, 10 February 2012 at 20:32:50, Josh Berkus wrote:
> On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> > Hmm, perhaps we could usefully aggregate auto_explain output.
>
> The other option is to take a statistical approach. After all, what you
> want to do is optimize average response times across all your users'
> databases, not optimize for a few specific queries.
On 2/9/12 2:41 PM, Peter van Hardenberg wrote:
> Hmm, perhaps we could usefully aggregate auto_explain output.
The other option is to take a statistical approach. After all, what you
want to do is optimize average response times across all your users'
databases, not optimize for a few specific queries.
On Thu, Feb 9, 2012 at 3:41 PM, Peter van Hardenberg wrote:
> Hmm, perhaps we could usefully aggregate auto_explain output.
How about something where you run a site at random_page_cost of x,
then y, then z and you do some aggregating of query times in each. A
scatter plot should tell you lots.
Hmm, perhaps we could usefully aggregate auto_explain output.
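As a point of reference, a minimal auto_explain setup whose log output could be harvested and aggregated might look like this postgresql.conf fragment (the threshold and format are illustrative choices, not anything specified in the thread):

```
shared_preload_libraries = 'auto_explain'   # requires a server restart
auto_explain.log_min_duration = '250ms'     # log plans slower than this (example value)
auto_explain.log_analyze = off              # on = real row counts/timings, at a cost
auto_explain.log_format = 'json'            # structured output is easier to aggregate
```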
On Thu, Feb 9, 2012 at 7:32 AM, Jeff Janes wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>>> Having read the thread, I don't really see how I could study what a
>>> more principled value would be.
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>> Having read the thread, I don't really see how I could study what a
>> more principled value would be.
>
> Agreed. Just pointing out more research needs to be done.
>
>> That said, I have access to a very large fleet from which I can collect
>> data, so I'm all ears for suggestions about how to measure, and would
>> gladly share the results with the list.
On 09/02/12 00:09, Greg Sabino Mullane wrote:
You can execute several queries with the three different values provided
by Scott and Josh.
- SET random_page_cost = 2.0
First execution of the queries with EXPLAIN ANALYZE
- SET random_page_cost = 1.4
Second execution of the queries with EXPLAIN ANALYZE …
> You can execute several queries with the three different values provided
> by Scott and Josh.
> - SET random_page_cost = 2.0
> First execution of the queries with EXPLAIN ANALYZE
> - SET random_page_cost = 1.4
> Second execution of the queries with EXPLAIN ANALYZE …
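Spelled out as a psql session, the procedure described above is roughly the following sketch (the query is a placeholder, and the message truncates before any third cost value, so only the two values visible in the thread are shown):

```sql
-- Run the same representative query under each candidate setting and
-- compare the plans and timings. SET is session-local, so this is safe
-- to try without affecting other connections.
SET random_page_cost = 2.0;
EXPLAIN ANALYZE SELECT ...;   -- placeholder for a representative query
SET random_page_cost = 1.4;
EXPLAIN ANALYZE SELECT ...;   -- the same query again
RESET random_page_cost;       -- back to the configured default
```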
On Wed, Feb 8, 2012 at 7:54 PM, Peter van Hardenberg wrote:
> On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg wrote:
>> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe
>> wrote:
>>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
That said, I have access to a very large fleet from which I can collect
data, so I'm all ears for suggestions about how to measure, and would
gladly share the results with the list.
On Wed, Feb 8, 2012 at 6:47 PM, Peter van Hardenberg wrote:
> On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
>> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>>> That said, I have access to a very large fleet from which I can collect
>>> data, so I'm all ears for suggestions about how to measure, and would
>>> gladly share the results with the list.
On Wed, Feb 8, 2012 at 6:28 PM, Scott Marlowe wrote:
> On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
>> That said, I have access to a very large fleet from which I can collect
>> data, so I'm all ears for suggestions about how to measure, and would
>> gladly share the results with the list.
On 08/02/12 21:15, Peter van Hardenberg wrote:
Having read the thread, I don't really see how I could study what a
more principled value would be.
That said, I have access to a very large fleet from which I can collect
data, so I'm all ears for suggestions about how to measure, and would
gladly share the results with the list.
On Wed, Feb 8, 2012 at 6:45 PM, Peter van Hardenberg wrote:
> Having read the thread, I don't really see how I could study what a
> more principled value would be.
Agreed. Just pointing out more research needs to be done.
> That said, I have access to a very large fleet from which I can collect …
Having read the thread, I don't really see how I could study what a
more principled value would be.
That said, I have access to a very large fleet from which I can collect
data, so I'm all ears for suggestions about how to measure, and would
gladly share the results with the list.
Peter
On Wed, Feb …
On Wed, Feb 8, 2012 at 5:50 PM, Josh Berkus wrote:
> On 2/7/12 4:59 PM, Peter van Hardenberg wrote:
>> Per the thread from last month, I've updated the default
>> random_page_cost on Heroku Postgres to reduce the expected cost of a
>> random_page on all new databases.
>
> This is because Heroku uses AWS storage, which has fast seeks but poor
> throughput compared to …
On 2/7/12 4:59 PM, Peter van Hardenberg wrote:
> Per the thread from last month, I've updated the default
> random_page_cost on Heroku Postgres to reduce the expected cost of a
> random_page on all new databases.
This is because Heroku uses AWS storage, which has fast seeks but poor
throughput compared to …
Per the thread from last month, I've updated the default
random_page_cost on Heroku Postgres to reduce the expected cost of a
random_page on all new databases.
Thanks to everyone who helped come to this conclusion!
Peter
--
Peter van Hardenberg
San Francisco, California
"Everything was beautifu