On Fri, Apr 5, 2013 at 11:40 AM, Merlin Moncure wrote:
>
On Fri, Apr 5, 2013 at 11:54 AM, Dave Johansen wrote:
> On Sat, Feb 26, 2011 at 2:38 PM, Dave Johansen wrote:
>>
>> Unfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff
>> wasn't added til 8.4.
>> Dave
>>
>> On Feb 26, 2011 2:06 PM, "Josh Berkus" wrote:
>> > Dave,
>> >
>> > Why not test the windowing version I posted?
We finally have moved
Florian Weimer wrote:
> Kevin Grittner:
>> Well, unless you use timestamp WITH time zone, you might not be
>> able to do that at all. There are very few places where timestamp
>> WITHOUT time zone actually makes sense.
>
> I don't think PostgreSQL keeps track of actual time zone values,
True --
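Kevin's and Florian's points can be illustrated with a short sketch (PostgreSQL; the literals and session settings are illustrative, not from the thread): timestamptz stores an absolute instant, normalized internally to UTC, and renders it in the session's TimeZone; the original zone is not retained, only the point in time. Plain timestamp is just a wall-clock reading with no reference instant.

```sql
-- timestamptz: one absolute instant, displayed per session TimeZone.
SET TimeZone = 'UTC';
SELECT '2011-02-26 14:00:00-07'::timestamptz;  -- rendered in UTC

SET TimeZone = 'America/Phoenix';
SELECT '2011-02-26 14:00:00-07'::timestamptz;  -- same instant, rendered at -07

-- timestamp (without time zone): a bare wall-clock value; comparisons
-- across machines in different zones are meaningless.
SELECT '2011-02-26 14:00:00'::timestamp;
```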
Unfortunately, I'm running 8.3.3 and to my knowledge the windowing stuff
wasn't added til 8.4.
Dave
On Feb 26, 2011 2:06 PM, "Josh Berkus" wrote:
> Dave,
>
> Why not test the windowing version I posted?
>
> --
> -- Josh Berkus
> PostgreSQL Experts Inc.
> http://www.pgexperts.com
Dave,
Why not test the windowing version I posted?
--
-- Josh Berkus
PostgreSQL Experts Inc.
http://www.pgexperts.com
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
On Fri, Feb 25, 2011 at 1:45 PM, Dave Crooke wrote:
Hi Dave

Yes, 100% the best solution. I did the same thing a while back: I just
have a separate copy of the data in a "latest" table, and the Java code
just runs a second SQL statement to update it when writing a new record
(I've never been a trigger fan).
I found myself looking at the "find th
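The trigger-free "latest" table approach described above can be sketched in 8.3-era SQL (no ON CONFLICT before 9.5). The table and literal values here are hypothetical; the thread's author issues the second statement from Java:

```sql
-- After writing the history row, refresh the one-row-per-key copy:
INSERT INTO data (id_key, time_stamp, value) VALUES (42, now(), 3.14);

UPDATE latest
   SET time_stamp = now(), value = 3.14
 WHERE id_key = 42;

-- Only needed the first time a given id_key appears:
INSERT INTO latest (id_key, time_stamp, value)
SELECT 42, now(), 3.14
 WHERE NOT EXISTS (SELECT 1 FROM latest WHERE id_key = 42);
```

Under concurrent writers this update-then-insert pair can race; wrapping it in a retry loop (or a trigger, which the author avoids) is the usual mitigation.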
> Why did you use row_number instead of rank?

Because I assumed he only wanted one row in the event of ties. Hmmm,
although with that schema, there won't be ties. So it's pretty much
arbitrary then.

> I am now curious how the speed compares though. I still think the
> DISTINCT ON will be faster
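The row_number()-versus-rank() distinction above can be sketched as follows (8.4+ window functions; illustrative only, since the thread's unique index on (id_key, time_stamp) rules out ties). If two rows shared the latest time_stamp, rank() would assign both rank 1, so filtering on rank = 1 could return several rows per key, while row_number() always yields exactly one:

```sql
-- rank(): ties share a rank; row_number(): every row gets a distinct number.
SELECT id_key, time_stamp, value,
       rank()       OVER w AS rnk,
       row_number() OVER w AS rn
FROM data
WINDOW w AS (PARTITION BY id_key ORDER BY time_stamp DESC);
```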
On 02/24/2011 06:20 PM, Josh Berkus wrote:

SELECT id_key, time_stamp, value
FROM (
    SELECT id_key, time_stamp, value,
           row_number() OVER (PARTITION BY id_key
                              ORDER BY time_stamp DESC) AS ranking
    FROM thetable
) AS ranked
WHERE ranking = 1;
Friday, February 25, 2011, 12:53:08 AM you wrote:
> select distinct on (a,b,c)
> a, b c, time_stamp,value
Without the comma, you declare 'b AS c'
> from data
> order by a, b, c, time_stamp desc;
> The output produced is the same as this query:
> select distinct on (a,b)
> a
On 2/24/11 3:38 PM, Dave Crooke wrote:
P.S. I noticed inadvertently (by making a typo ;-) that not all of the
columns in the DISTINCT ON are required to be part of the output, in which
case it appears to reduce the DISTINCT ON to the columns that are
represented. In my real world situation, "id_key" is actually composed
of 3 columns.
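The P.S. observation can be sketched like this (column names a, b, c borrowed from the multi-column example elsewhere in the thread): the DISTINCT ON expressions must be the leading ORDER BY keys, but they do not have to appear in the select list.

```sql
-- DISTINCT ON over a 3-column key, with the key columns omitted
-- from the output (names illustrative):
SELECT DISTINCT ON (a, b, c)
       time_stamp, value
FROM data
ORDER BY a, b, c, time_stamp DESC;
```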
Thanks to all. I had a tickling feeling at the back of my mind that
there was a neater answer here. For the record, times (all from in-memory
cached data, averaged over a bunch of runs):
Dependent subquery = 117.9 seconds
Join to temp table = 2.7 sec
DISTINCT ON = 2.7 sec
So the DISTINCT ON m
On Thu, Feb 24, 2011 at 2:18 PM, Kevin Grittner wrote:
Michael Glaesemann wrote:
> SELECT DISTINCT ON (data.id_key)
>     data.id_key, data.time_stamp, data.value
> FROM data
> ORDER BY data.id_key, data.time_stamp DESC;
Dang! I forgot the DESC in my post! Thanks for showing the
*correct* version.
-Kevin
On Feb 24, 2011, at 14:55, Dave Crooke wrote:
> Is there a more elegant way to write this, perhaps using PG-specific
> extensions?
SELECT DISTINCT ON (data.id_key)
data.id_key, data.time_stamp, data.value
FROM data
ORDER BY data.id_key, data.time_stamp DESC;
Michael Glaesemann
Dave Crooke wrote:

> create table data
>     (id_key int,
>      time_stamp timestamp without time zone,
>      value double precision);
>
> create unique index data_idx on data (id_key, time_stamp);
>
> I need to find the most recent value for each distinct value of
> id_key.

Well, unless you use timestamp WITH time zone, you might not be
able to do that at all. There are very few places where timestamp
WITHOUT time zone actually makes sense.
On Thu, Feb 24, 2011 at 1:55 PM, Dave Crooke wrote:
Hi folks

This is an old chestnut which I've found a number of online threads for, and
never seen a clever answer to. It seems a common enough idiom that there
might be some slicker way to do it, so I thought I might inquire with this
august group if such a clever answer exists.

Consider the following:
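The original post breaks off before showing its query. The "dependent subquery" timed at 117.9 seconds later in the thread presumably resembled the classic greatest-per-group correlated form sketched below; this is a reconstruction, not the author's exact SQL:

```sql
-- Naive correlated-subquery formulation: for each row, re-scan the
-- table to check whether its time_stamp is the maximum for its id_key.
SELECT d.id_key, d.time_stamp, d.value
FROM data d
WHERE d.time_stamp = (SELECT max(d2.time_stamp)
                      FROM data d2
                      WHERE d2.id_key = d.id_key);
```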