On 08/09/2015 18:00, Alvaro Herrera wrote:
> Julien Rouhaud wrote:
>> Hi,
>>
>> Please find attached a v2 of the patch. See below for changes.
>
> Pushed after smallish tweaks. Please test to verify I didn't break
> anything.
>
I just tried with all the cases I could think of, everything works fine.
Julien Rouhaud wrote:
> Hi,
>
> Please find attached a v2 of the patch. See below for changes.
Pushed after smallish tweaks. Please test to verify I didn't break
anything.
(It's a pity that we can't add a regression test with a value other than
0.)
--
Álvaro Herrera    http://www.
Hi,
Please find attached a v2 of the patch. See below for changes.
On 02/09/2015 15:53, Andres Freund wrote:
>
> Hi,
>
> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:
>> I didn't know that the thread must exist on -hackers to be able to add
>> a commitfest entry, so I transfer the thread here.
On 2015-09-04 17:21:38 +0100, Greg Stark wrote:
> Wouldn't SSDs need much *less* aggressive prefetching? There's still
> latency and there are multiple I/O channels so they will still need
> some. But spinning media gives latencies measured in milliseconds. You
> can process a lot of tuples in mill
On Thu, Sep 3, 2015 at 2:13 PM, Merlin Moncure wrote:
> I find this talk of platters and spindles to be somewhat
> baroque; for a 200$ part I have to work pretty hard to max out the
> drive when reading and I'm still not completely sure if it's the drive
> itself, postgres, cpu, or sata interface
On Wed, Sep 2, 2015 at 5:38 PM, Andres Freund wrote:
> If you additionally take into account hardware realities where you have
> multiple platters, multiple spindles, command queueing etc, that's even
> more true. A single rotation of a single platter with command queuing
> can often read several
That doesn't match any of the empirical tests I did at the time. I posted
graphs of the throughput for varying numbers of spindles with varying
amounts of prefetch. In every case more prefetching increases throughput up to
N times the single-platter throughput, where N was the number of spindles.
There
On 2015-09-03 01:59:13 +0200, Tomas Vondra wrote:
> That's a bit surprising, especially considering that e_i_c=30 means ~100
> pages to prefetch if I'm doing the math right.
>
> AFAIK queue depth for SATA drives generally is 32 (so prefetching 100 pages
> should not make a difference), 256 for SAS
On 09/03/2015 12:23 AM, Andres Freund wrote:
> On 2015-09-02 14:31:35 -0700, Josh Berkus wrote:
>> On 09/02/2015 02:25 PM, Tomas Vondra wrote:
>>> As I explained, spindles have very little to do with it - you need
>>> multiple I/O requests per device, to get the benefit. Sure, the DBAs
>>> should know how many spindles they have and should
On 2015-09-02 19:49:13 +0100, Greg Stark wrote:
> I can take the blame for this formula.
>
> It's called the "Coupon Collector Problem". If you get a random
> coupon from a set of n possible coupons, how many random coupons would
> you have to collect before you expect to have at least one of each?
On 2015-09-02 14:31:35 -0700, Josh Berkus wrote:
> On 09/02/2015 02:25 PM, Tomas Vondra wrote:
> >
> > As I explained, spindles have very little to do with it - you need
> > multiple I/O requests per device, to get the benefit. Sure, the DBAs
> > should know how many spindles they have and should
On 09/02/2015 02:25 PM, Tomas Vondra wrote:
>
> As I explained, spindles have very little to do with it - you need
> multiple I/O requests per device, to get the benefit. Sure, the DBAs
> should know how many spindles they have and should be able to determine
> optimal IO depth. But we actually sa
Hi,
On 09/02/2015 08:49 PM, Greg Stark wrote:
> On 2 Sep 2015 14:54, "Andres Freund" wrote:
>>> + /*--
>>> + * The user-visible GUC parameter is the number of drives (spindles),
>>> + * which we need to translate to a number-of-pages-to-prefetch target.
>>> + * The target value is
On 02/09/2015 15:53, Andres Freund wrote:
> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:
>
> You also didn't touch
> /*
>  * How many buffers PrefetchBuffer callers should try to stay ahead of
>  * their ReadBuffer calls by. This is maintained by
Hi,
On 02/09/2015 18:06, Tomas Vondra wrote:
> Hi
>
> On 09/02/2015 03:53 PM, Andres Freund wrote:
>>
>> Hi,
>>
>> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:
>>> I didn't know that the thread must exist on -hackers to be
>>> able to add a commitfest entry
On 2 Sep 2015 14:54, "Andres Freund" wrote:
>
>
> > + /*--
> > + * The user-visible GUC parameter is the number of drives (spindles),
> > + * which we need to translate to a number-of-pages-to-prefetch target.
> > + * The target value is stashed in *extra and then assign
On 2015-09-02 18:06:54 +0200, Tomas Vondra wrote:
> Maybe the best thing we can do is just completely abandon the "number of
> spindles" idea, and just say "number of I/O requests to prefetch". Possibly
> with an explanation of how to estimate it (devices * queue length).
I think that'd be a lot better
Hi
On 09/02/2015 03:53 PM, Andres Freund wrote:
> Hi,
> On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:
>> I didn't know that the thread must exist on -hackers to be able to add
>> a commitfest entry, so I transfer the thread here.
> Please, in the future, also update the title of the thread to something
> fitting.
Hi,
On 2015-07-18 12:17:39 +0200, Julien Rouhaud wrote:
> I didn't know that the thread must exist on -hackers to be able to add
> a commitfest entry, so I transfer the thread here.
Please, in the future, also update the title of the thread to something
fitting.
> @@ -539,6 +541,9 @@ ExecInitB