On Tue, Aug 10, 2010 at 2:00 PM, Christopher Browne wrote:
> On Tue, Aug 10, 2010 at 3:52 PM, Scott Marlowe wrote:
>> My point being, no matter how terrible an idea a certain storage medium
>> is, there's always a use case for it. Even if it's very narrow.
>
> The trouble is, if extra subscrib
On Tue, Aug 10, 2010 at 3:52 PM, Scott Marlowe wrote:
> My point being, no matter how terrible an idea a certain storage medium
> is, there's always a use case for it. Even if it's very narrow.
The trouble is, if extra subscribers induce load on the "master,"
which they presumably will, then that
On Tue, Aug 10, 2010 at 12:38 PM, Karl Denninger wrote:
> Scott Marlowe wrote:
>
> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
>
>
> ANY disk that says "write is complete" when it really is not is entirely
> unsuitable for ANY real database use. It is simply a matter of time
>
>
> Wh
On 8/10/2010 3:28 PM, Karl Denninger wrote:
Brad Nicholson wrote:
On 8/10/2010 2:38 PM, Karl Denninger wrote:
Scott Marlowe wrote:
On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
ANY disk that says "write is complete" when it really is not is entirely
unsuitable for ANY real dat
Brad Nicholson wrote:
> On 8/10/2010 2:38 PM, Karl Denninger wrote:
>> Scott Marlowe wrote:
>>> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
>>>
ANY disk that says "write is complete" when it really is not is entirely
unsuitable for ANY real database use. It is simply a ma
On 8/10/2010 2:38 PM, Karl Denninger wrote:
Scott Marlowe wrote:
On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
ANY disk that says "write is complete" when it really is not is entirely
unsuitable for ANY real database use. It is simply a matter of time
What about read on
On 8/10/2010 2:28 PM, Greg Smith wrote:
Brad Nicholson wrote:
What about putting indexes on them? If the drive fails and drops
writes on those, they could be rebuilt - assuming your system can
function without the index(es) temporarily.
Dumping indexes on SSD is one of the better uses for t
Scott Marlowe wrote:
> On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
>
>> ANY disk that says "write is complete" when it really is not is entirely
>> unsuitable for ANY real database use. It is simply a matter of time
>>
>
> What about read only slaves where there's a master wit
Brad Nicholson wrote:
What about putting indexes on them? If the drive fails and drops
writes on those, they could be rebuilt - assuming your system can
function without the index(es) temporarily.
Dumping indexes on SSD is one of the better uses for them, presuming you
can survive what is li
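A minimal sketch of that idea, assuming a hypothetical SSD mount point /ssd and a placeholder table "orders" (neither comes from the thread): keep only rebuildable objects such as indexes in a tablespace on the flash device, so losing the drive costs a rebuild rather than data.
-- Tablespace on the SSD; only objects that can be rebuilt go here.
CREATE TABLESPACE ssd_space LOCATION '/ssd/pg_indexes';
-- The table stays on trustworthy storage; only the index lives on flash.
CREATE INDEX orders_created_idx ON orders (created_at) TABLESPACE ssd_space;
-- If the SSD dies, drop the index and rebuild it on the default tablespace:
--   DROP INDEX orders_created_idx;
--   CREATE INDEX orders_created_idx ON orders (created_at);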
On Tue, Aug 10, 2010 at 12:13 PM, Karl Denninger wrote:
> ANY disk that says "write is complete" when it really is not is entirely
> unsuitable for ANY real database use. It is simply a matter of time
What about read-only slaves where there's a master with 100+ spinning
hard drives "getting it r
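A sketch of that read-only slave idea in configuration form, assuming PostgreSQL 9.0 streaming replication (brand new at the time of this thread) and a placeholder host name; an 8.4 warm standby built with archive shipping would fill the same role.
# On the master (postgresql.conf): allow WAL streaming to standbys.
wal_level = hot_standby
max_wal_senders = 3
# On the SSD-backed standby (postgresql.conf): accept read-only queries.
hot_standby = on
# On the standby (recovery.conf): follow the master.
standby_mode = 'on'
primary_conninfo = 'host=master.example.com port=5432 user=replication'
If the standby's SSD lies about a write and corrupts itself, the replica can simply be rebuilt from a fresh base backup of the master.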
Brad Nicholson wrote:
> On 8/10/2010 12:21 PM, Greg Smith wrote:
>> Scott Carey wrote:
>>> Also, the amount of data at risk in a power loss varies between
>>> drives. For Intel's drives, it's a small chunk of data (< 256K).
>>> For some other drives, the cache can be over 30MB of outstanding
>>>
On 8/10/2010 12:21 PM, Greg Smith wrote:
Scott Carey wrote:
Also, the amount of data at risk in a power loss varies between
drives. For Intel's drives, it's a small chunk of data (< 256K).
For some other drives, the cache can be over 30MB of outstanding writes.
For some workloads this is ac
Another couple of possible ways:
Select groupfield, value
From tbl x1
Where number = (select max(number) from tbl x2 where x2.groupfield = x1.groupfield)
Select groupfield, value
From tbl x1
Where (groupfield, number) in (select groupfield, max(number) from tbl group by groupfield)
Which is qui
Matthew Wakeling wrote:
> I'm trying to eke a little bit more performance out of an
> application
In addition to the suggestion from Thomas Kellerer, it would be
interesting to try the following and see how performance compares
using real data.
select group, value from tbl x
where not exis
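The query above is cut off; a hedged guess at how the NOT EXISTS version of "the row holding the highest number in each group" continues, using the thread's placeholder identifiers ("group" quoted because it is a reserved word):
SELECT x."group", x.value
FROM tbl x
WHERE NOT EXISTS (
    SELECT 1
    FROM tbl y
    WHERE y."group" = x."group"
      AND y.number  > x.number   -- no other row in the group has a larger number
);
Like the MAX() subquery variants, this returns every row that ties for the highest number in its group.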
On 10 August 2010 17:06, Thom Brown wrote:
> On 10 August 2010 17:03, Matthew Wakeling wrote:
>> On Tue, 10 Aug 2010, Thomas Kellerer wrote:
>>>
>>> No. It's built in (8.4) and it's called Windowing functions:
>>> http://www.postgresql.org/docs/8.4/static/tutorial-window.html
>>> http://www.postg
Jeff Davis wrote:
Depending on which 256K you lose, you might as well lose your entire
database.
Let's be nice and assume that you only lose one 8K block because of the
SSD write cache; that's not so bad, right? Guess what--you could easily
be the next lucky person who discovers the block
Matthew Wakeling wrote on 10.08.2010 18:03:
On Tue, 10 Aug 2010, Thomas Kellerer wrote:
No. It's built in (8.4) and it's called Windowing functions:
http://www.postgresql.org/docs/8.4/static/tutorial-window.html
http://www.postgresql.org/docs/8.4/static/functions-window.html
SELECT group, last_
Scott Carey wrote:
Also, the amount of data at risk in a power loss varies between
drives. For Intel's drives, it's a small chunk of data (< 256K). For
some other drives, the cache can be over 30MB of outstanding writes.
For some workloads this is acceptable
No, it isn't ever acceptable. Y
On Tue, Aug 10, 2010 at 04:40:16PM +0100, Matthew Wakeling wrote:
>
> I'm trying to eke a little bit more performance out of an
> application, and I was wondering if there was a better way to do the
> following:
>
> I am trying to retrieve, for many sets of rows grouped on a couple
> of fields, t
On 10 August 2010 17:03, Matthew Wakeling wrote:
> On Tue, 10 Aug 2010, Thomas Kellerer wrote:
>>
>> No. It's built in (8.4) and it's called Windowing functions:
>> http://www.postgresql.org/docs/8.4/static/tutorial-window.html
>> http://www.postgresql.org/docs/8.4/static/functions-window.html
>>
On Tue, 10 Aug 2010, Thomas Kellerer wrote:
No. It's built in (8.4) and it's called Windowing functions:
http://www.postgresql.org/docs/8.4/static/tutorial-window.html
http://www.postgresql.org/docs/8.4/static/functions-window.html
SELECT group, last_value(value) over(ORDER BY number)
FROM table
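As quoted, the window call has no PARTITION BY, so it does not yet split the result by group; a fuller sketch of the 8.4 window-function approach, again with the thread's placeholder identifiers:
-- first_value() over a descending sort picks the value at the highest number;
-- it avoids the default-frame surprise that makes last_value() return the
-- current row's value. DISTINCT collapses the per-row duplicates.
SELECT DISTINCT
       "group",
       first_value(value) OVER (PARTITION BY "group" ORDER BY number DESC) AS value
FROM tbl;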
Matthew Wakeling wrote on 10.08.2010 17:40:
Currently, I do this in my application by ordering by the number and
only using the last value. I imagine that this is something that can be
done in the new Postgres 9, with a sorted group by - something like this:
SELECT group, LAST(value, ORDER BY nu
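The ordered-aggregate syntax that actually shipped in PostgreSQL 9.0 puts the ORDER BY inside the aggregate call; there is no built-in LAST() aggregate, so a hedged equivalent uses array_agg (placeholder identifiers as above):
-- Sort within the aggregate, then take the first element of the array,
-- i.e. the value belonging to the highest number in each group.
SELECT "group",
       (array_agg(value ORDER BY number DESC))[1] AS value
FROM tbl
GROUP BY "group";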
I'm trying to eke a little bit more performance out of an application, and
I was wondering if there was a better way to do the following:
I am trying to retrieve, for many sets of rows grouped on a couple of
fields, the value of an ungrouped field where the row has the highest
value in anoth
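One idiom not shown in the previews above is PostgreSQL's DISTINCT ON, which answers exactly this "value at the highest number per group" question; a sketch with the same placeholder identifiers:
-- Keep only the first row of each group when sorted by number DESC.
SELECT DISTINCT ON ("group")
       "group", value
FROM tbl
ORDER BY "group", number DESC;
Unlike the MAX() and NOT EXISTS variants, DISTINCT ON returns exactly one row per group even when several rows tie for the highest number.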