Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Israel Brewster
> On Mar 17, 2015, at 1:41 PM, Marc Mamin wrote: > > >>> On Tue, Mar 17, 2015, at 11:30 AM, Adrian Klaver wrote: On 03/17/2015 10:57 AM, Israel Brewster wrote: > > >> On Mar 17, 2015, at 9:30 AM, Paul Jungwirth >> wrote: >> >> So next question: how do I get the

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Marc Mamin
>>On Tue, Mar 17, 2015, at 11:30 AM, Adrian Klaver wrote: >>> On 03/17/2015 10:57 AM, Israel Brewster wrote: >>> > >>> > >>> >> On Mar 17, 2015, at 9:30 AM, Paul Jungwirth >>> >> wrote: >>> >> >>> >> So next question: how do I get the "active" time per hour from this? >>> >> >>> >> I think you j

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Marc Mamin
> > >-- > Adrian Klaver > adrian.kla...@aklaver.com > >On Tue, Mar 17, 2015, at 11:30 AM, Adrian Klaver wrote: >> On 03/17/2015 10:57 AM, Israel Brewster wrote: >> > >> > >> >> On Mar 17, 2015, at 9:30 AM, Paul Jungwirth >> >> wrote: >> >> >> >> So next question: how do I get the "active" tim

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Adrian Klaver
-- Adrian Klaver adrian.kla...@aklaver.com On Tue, Mar 17, 2015, at 11:30 AM, Adrian Klaver wrote: > On 03/17/2015 10:57 AM, Israel Brewster wrote: > > > > > >> On Mar 17, 2015, at 9:30 AM, Paul Jungwirth > >> wrote: > >> > >> So next question: how do I get the "active" time per hour from

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Paul Jungwirth
Some weird edge cases to be careful about: activities that cross midnight. >> Activities that last more than one full day, >> e.g. start 3/15 and end 3/17. Right. And I will run into some of those (at least the crossing midnight), > so I'll keep an eye out. If you are running the report on mor

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Adrian Klaver
On 03/17/2015 10:57 AM, Israel Brewster wrote: On Mar 17, 2015, at 9:30 AM, Paul Jungwirth wrote: So next question: how do I get the "active" time per hour from this? I think you just SUM() over the intersection between each hourly window and each event, right? This might be easiest using

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Israel Brewster
> On Mar 17, 2015, at 9:30 AM, Paul Jungwirth > wrote: > > So next question: how do I get the "active" time per hour from this? > > I think you just SUM() over the intersection between each hourly window and > each event, right? This might be easiest using tsrange, something like this: Soun

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread David G. Johnston
On Tuesday, March 17, 2015, Israel Brewster wrote: > > > > On Mar 17, 2015, at 8:09 AM, Paul Jungwirth > wrote: > > > >>> test=> select h, count(*) from start_end, generate_series(0, 23) as > s(h) where h between extract(hour from start_time) and extract(hour from > end_time) group by h order by

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Paul Jungwirth
So next question: how do I get the "active" time per hour from this? I think you just SUM() over the intersection between each hourly window and each event, right? This might be easiest using tsrange, something like this: SUM(extract(minutes from (tsrange(start_time, end_time) && tsrange(
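A runnable sketch of that idea, assuming the start_end(start_time, end_time) table used elsewhere in the thread, timestamps without time zone, and events that do not cross midnight (the quoted message is truncated, so the exact query here is a reconstruction, not Paul's original):

-- Total "active" minutes per hour of day, via tsrange intersection (*).
SELECT s.h,
       SUM(EXTRACT(EPOCH FROM upper(ev * win) - lower(ev * win))) / 60 AS active_minutes
FROM start_end,
     LATERAL (SELECT tsrange(start_time, end_time)) AS e(ev),
     generate_series(0, 23) AS s(h),
     LATERAL (SELECT tsrange(date_trunc('day', start_time) + s.h * interval '1 hour',
                             date_trunc('day', start_time) + (s.h + 1) * interval '1 hour')) AS w(win)
WHERE ev && win           -- keep only the hourly windows the event overlaps
GROUP BY s.h
ORDER BY s.h;

Use tstzrange instead of tsrange if the columns are timestamptz.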

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Israel Brewster
On Mar 17, 2015, at 9:05 AM, David G. Johnston wrote: > > On Tuesday, March 17, 2015, Israel Brewster > wrote: > > > > On Mar 17, 2015, at 8:09 AM, Paul Jungwirth > > wrote: > > > >>> test=> select h, count(*) from start_end, generate_series(0, 23) as s(h) > >>

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Israel Brewster
> On Mar 17, 2015, at 8:09 AM, Paul Jungwirth > wrote: > >>> test=> select h, count(*) from start_end, generate_series(0, 23) as s(h) >>> where h between extract(hour from start_time) and extract(hour from >>> end_time) group by h order by h; >>> >>> h | count >>> +--- >>> 8 |

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Paul Jungwirth
test=> select h, count(*) from start_end, generate_series(0, 23) as s(h) where h between extract(hour from start_time) and extract(hour from end_time) group by h order by h;

 h  | count
----+-------
  8 |     2
  9 |     3
 10 |     2
 11 |     2

Note if you always want all 24 rows with a count
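The note is cut off; the usual way to keep all 24 rows is to drive the query from generate_series and LEFT JOIN the events. A sketch, not necessarily the wording of the truncated message:

SELECT s.h, count(se.start_time) AS count
FROM generate_series(0, 23) AS s(h)
LEFT JOIN start_end AS se
       ON s.h BETWEEN extract(hour FROM se.start_time)
                  AND extract(hour FROM se.end_time)
GROUP BY s.h
ORDER BY s.h;
-- count(se.start_time) skips the NULLs produced by the outer join,
-- so hours with no events report 0 instead of disappearing.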

Re: [GENERAL] Group by range in hour of day

2015-03-17 Thread Israel Brewster
BEGIN:VCARD VERSION:3.0 N:Brewster;Israel;;; FN:Israel Brewster ORG:Frontier Flying Service;MIS TITLE:PC Support Tech II EMAIL;type=INTERNET;type=WORK;type=pref:isr...@frontierflying.com TEL;type=WORK;type=pref:907-450-7293 item1.ADR;type=WORK;type=pref:;;5245 Airport Industrial Wy;Fairbanks;AK;9

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread David G. Johnston
On Mon, Mar 16, 2015 at 4:16 PM, Israel Brewster wrote: > On Mar 16, 2015, at 2:22 PM, David G. Johnston > wrote: > > > On Mon, Mar 16, 2015 at 3:13 PM, Adrian Klaver > wrote: > >> On 03/16/2015 02:57 PM, Israel Brewster wrote: >> >>> I have a table with two timestamp columns for the start time

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread Adrian Klaver
On 03/16/2015 04:16 PM, Israel Brewster wrote: On Mar 16, 2015, at 2:22 PM, David G. Johnston mailto:david.g.johns...@gmail.com>> wrote: On Mon, Mar 16, 2015 at 3:13 PM, Adrian Klaver mailto:adrian.kla...@aklaver.com>>wrote: On 03/16/2015 02:57 PM, Israel Brewster wrote: I have a

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread Israel Brewster
On Mar 16, 2015, at 2:22 PM, David G. Johnston wrote: > > On Mon, Mar 16, 2015 at 3:13 PM, Adrian Klaver > wrote: > On 03/16/2015 02:57 PM, Israel Brewster wrote: > I have a table with two timestamp columns for the start time and end > time of each record (call

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread John W Higgins
Assuming 3 things: Table name - test; Column names - start_time, end_time; Added an id column (int) to distinguish each record in the table. You can go with this (my apologies for formatting issues): with slots as ( select * from generate_series(0,1439) as s(slot) ), slots_hours as (
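The CTE is cut off. One plausible completion of the minute-slot idea (my reconstruction, not John's original query) counts how many event-minutes land in each hour of the day:

WITH slots AS (
    SELECT generate_series(0, 1439) AS slot        -- one row per minute of the day
), slots_hours AS (
    SELECT slot, slot / 60 AS hour FROM slots      -- integer division gives 0..23
)
SELECT sh.hour, count(*) AS active_minutes
FROM slots_hours sh
JOIN test se
  ON sh.slot BETWEEN extract(hour FROM se.start_time) * 60 + extract(minute FROM se.start_time)
                 AND extract(hour FROM se.end_time)   * 60 + extract(minute FROM se.end_time)
GROUP BY sh.hour
ORDER BY sh.hour;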

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread Israel Brewster
On Mar 16, 2015, at 2:16 PM, Paul Jungwirth wrote: > >> I have a table with two timestamp columns for the start time and end >> time of each record (call them start and end).I'm trying to figure out >> if there is a way to group these records by "hour of day", > > I think you can do this by sel

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread Israel Brewster
On Mar 16, 2015, at 2:13 PM, Adrian Klaver wrote: > > On 03/16/2015 02:57 PM, Israel Brewster wrote: >> I have a table with two timestamp columns for the start time and end >> time of each record (call them start and end).I'm trying to figure out >> if there is a way to group these records by "ho

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread David G. Johnston
On Mon, Mar 16, 2015 at 3:13 PM, Adrian Klaver wrote: > On 03/16/2015 02:57 PM, Israel Brewster wrote: > >> I have a table with two timestamp columns for the start time and end >> time of each record (call them start and end).I'm trying to figure out >> if there is a way to group these records by

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread Paul Jungwirth
I have a table with two timestamp columns for the start time and end time of each record (call them start and end).I'm trying to figure out if there is a way to group these records by "hour of day", I think you can do this by selecting `FROM generate_series(0, 23) s(h)` and then joining to your

Re: [GENERAL] Group by range in hour of day

2015-03-16 Thread Adrian Klaver
On 03/16/2015 02:57 PM, Israel Brewster wrote: I have a table with two timestamp columns for the start time and end time of each record (call them start and end).I'm trying to figure out if there is a way to group these records by "hour of day", that is the record should be included in the group

Re: [GENERAL] group by of multi columns

2015-01-04 Thread Flyingfox Lee
Thank you, this works like a charm. On Sun, Jan 4, 2015, 20:01 Andreas Kretschmer wrote: > Flyingfox Lee wrote: > > > I am doing a `group by` on a table with ~ 3 million rows, the code is > simply > `select A, B, C, D, E, count(1) from t group by A, B, C, D, E order by > 6`, it > takes ~ 3 minutes

Re: [GENERAL] group by of multi columns

2015-01-04 Thread Andreas Kretschmer
Flyingfox Lee wrote: > I am doing a `group by` on a table with ~ 3 million rows, the code is simply > `select A, B, C, D, E, count(1) from t group by A, B, C, D, E order by 6`, it > takes ~ 3 minutes for this operation and there are ~ 500 rows returned. So, to > speed this up, should I add a compo
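The reply is truncated. Things typically checked in this situation are the actual plan and whether a composite index over the grouping columns helps; an illustrative sketch, not necessarily what Andreas recommended:

EXPLAIN (ANALYZE, BUFFERS)
SELECT a, b, c, d, e, count(1) FROM t GROUP BY a, b, c, d, e ORDER BY 6;

CREATE INDEX t_grouping_idx ON t (a, b, c, d, e);  -- covering index over the GROUP BY columns
VACUUM ANALYZE t;  -- keep statistics and the visibility map fresh for index-only scans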

Re: [GENERAL] group by query plan on already clustered index

2013-07-25 Thread Pavel Stehule
Hello 2013/7/25 Sandeep Gupta : > On a table T with two fields, f1 and f2, the sql command > > select count(f2), f1 > from T > group by f1 > > result is seq scan followed by a sort on f1 (see the query plan below): > >GroupAggregate (cost=21566127.88..22326004.09 rows=987621 width=8) >->
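Pavel's reply is cut short. Generic diagnostics that usually come up for this plan shape (a sketch of things to try, not the thread's actual resolution):

EXPLAIN ANALYZE SELECT count(f2), f1 FROM t GROUP BY f1;

SET work_mem = '256MB';   -- lets the planner consider HashAggregate instead of sort plus group
SET enable_sort = off;    -- diagnosis only: check whether an index path exists at all
EXPLAIN ANALYZE SELECT count(f2), f1 FROM t GROUP BY f1;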

Re: [GENERAL] Group by -- precedence question

2013-03-22 Thread Jov
jov On Mar 23, 2013 9:26 AM, "Joe Van Dyk" wrote: > > begin; > create table f (v numeric); > insert into f values (1), (0.8); > select ceil(v) as v from f group by v; > > -- sorta expected the result to be grouped by the column alias, > -- not by the in the table > > v > ─── > 1 > 1 > > This is
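The reply is cut off. If the intent is to group by the computed value rather than the underlying column, the usual workarounds are to repeat the expression or rename it in a subquery, as in this sketch:

select ceil(v) as v from f group by ceil(v);                 -- group by the expression: one row
select v from (select ceil(v) as v from f) sub group by v;   -- alias becomes a real column: one row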

Re: [GENERAL] Group by bug?

2012-12-27 Thread wd
Oh, I see, thanks for your quick reply. On Fri, Dec 28, 2012 at 3:47 PM, Jov wrote: > > > 2012/12/28 wd > >> hi, >> >> wd_test=# \d t1 >> Table "public.t1" >> Column | Type |Modifiers >> +-+---

Re: [GENERAL] Group by bug?

2012-12-27 Thread Jov
2012/12/28 wd > hi, > > wd_test=# \d t1 > Table "public.t1" > Column | Type |Modifiers > +-+- > id | integer | not null default nextval('t1_id_seq'::regclass) > tag| text

Re: [GENERAL] Group by bug?

2012-12-27 Thread wd
Sorry, forget to say, PostgreSQL 9.2.2 on x86_64-unknown-linux-gnu, compiled by gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3), 64-bit psql (9.2.2) On Fri, Dec 28, 2012 at 3:24 PM, wd wrote: > hi, > > wd_test=# \d t1 > Table "public.t1" > Column | Type |

Re: [GENERAL] GROUP BY or alternative means to group

2012-04-09 Thread Bruno Wolff III
On Mon, Apr 09, 2012 at 13:55:04 -0400, Michael Gould wrote: Thanks, that is a help. It would be nice if any key could be used, as those are normally the things I would do group by's on. This is what the 9.1 documentation says: "When GROUP BY is present, it is not valid for the SELECT list express

Re: [GENERAL] GROUP BY or alternative means to group

2012-04-09 Thread Michael Gould
Thanks, that is a help. It would be nice if any key could be used, as those are normally the things I would do group by's on. Regards Mike Gould From my Samsung Android tablet on T-Mobile. The first nationwide 4G network. Bruno Wolff III wrote: On Mon, Mar 12, 2012 at 16:18:05 -0400,    Michael Gou

Re: [GENERAL] GROUP BY or alternative means to group

2012-04-09 Thread Bruno Wolff III
On Mon, Mar 12, 2012 at 16:18:05 -0400, Michael Gould wrote: You need to include all columns that are not aggregate columns in the group by. Even though that is the standard, it is a pain to list all columns even if you don't need them. In later versions of postgres this is relaxed a bit

Re: [GENERAL] GROUP BY or alternative means to group

2012-04-07 Thread Michael Gould
You need to include all columns that are not aggregate columns in the group by. Even though that is the standard, it is a pain to list all columns even if you don't need them. Best Regards Michael Gould Sent from Samsung mobile Alexander Reichstadt wrote: >Hi, > >the following statement w

Re: [GENERAL] group by does not show error

2012-03-24 Thread Thomas Kellerer
AI Rumman wrote on 24.03.2012 09:06: I am using Postgresql 9.1.0. I found that the following GROUP BY query works in my DB:

\d t1
   Table "public.t1"
 Column |  Type   | Modifiers
--------+---------+-----------
 i      | integer | not null
 nam    | text    |
Indexes:
    "t1_pkey" PRIMAR
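The preview is cut off; the behaviour being asked about is almost certainly the functional-dependence rule added in 9.1: once a table's primary key is in the GROUP BY, its other columns may be selected un-aggregated. A sketch against the t1 table shown above:

SELECT i, nam FROM t1 GROUP BY i;     -- legal in 9.1+: nam is functionally dependent on the pk i
SELECT i, nam FROM t1 GROUP BY nam;   -- still an error: i must be grouped or aggregated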

Re: [GENERAL] GROUP BY or alternative means to group

2012-03-12 Thread Scott Marlowe
On Mon, Mar 12, 2012 at 3:19 PM, Alexander Reichstadt wrote: > But where would I insert the max(address) piece? > Just put max() or min() around any field in the select list that's not in the group by clause -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes t

Re: [GENERAL] GROUP BY or alternative means to group

2012-03-12 Thread Alexander Reichstadt
I guess I lack the knowledge to integrate your answer in my query. Actually, I'd prefer to always see the first address entered unless there is a where-clause added. Not sure how this works out then and haven't tested. But given the initial query extended by distinct on it would be like so: >

Re: [GENERAL] GROUP BY or alternative means to group

2012-03-12 Thread Scott Marlowe
On Mon, Mar 12, 2012 at 1:35 PM, Alexander Reichstadt wrote: > Hi, > > the following statement worked on mysql but gives me an error on postgres: > > column "addresses.address1" must appear in the GROUP BY clause or be used in > an aggregate function > > I guess I am doing something wrong. I read

Re: [GENERAL] GROUP BY or alternative means to group

2012-03-12 Thread Alexander Reichstadt
Thanks, I just posted my response to my own question for the archives. I take it also that group by is faster than distinct on. If it is a substantial performance gain I have to work on this some more. A subquery I would expect would be much of a drag, so for all keystroke-updated list-tables th

Re: [GENERAL] GROUP BY or alternative means to group

2012-03-12 Thread Kiriakos Georgiou
Instead of the joins you can use a subquery to get the first address. Or you can do the joins without the group by and use row_number() over(partition by companies.id) on the select list to label each company address with a number starting at 1. You can just keep rows that have row_number = 1.
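A sketch of that row_number() approach; only companies.id and addresses.address1 appear in the thread, so the join column and the tie-break ordering here are assumptions:

SELECT *
FROM (
    SELECT c.*, a.address1,
           row_number() OVER (PARTITION BY c.id ORDER BY a.id) AS rn
    FROM companies c
    JOIN addresses a ON a.company_id = c.id   -- assumed join column
) sub
WHERE rn = 1;   -- keep one address per company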

Solved [Re: [GENERAL] GROUP BY or alternative means to group]

2012-03-12 Thread Alexander Reichstadt
So the mysql way for group by seems to be non-standard. What works for postgres is the DISTINCT ON (fieldname) approach. Thanks Am 12.03.2012 um 20:35 schrieb Alexander Reichstadt: > Hi, > > the following statement worked on mysql but gives me an error on postgres: > > column "addresses.add

Re: [GENERAL] GROUP BY or alternative means to group

2012-03-12 Thread Bartosz Dmytrak
Hi, You can use one of windowing function: http://www.postgresql.org/docs/9.1/static/tutorial-window.html http://www.postgresql.org/docs/9.1/static/functions-window.html this could be rank() in subquery or first_value(vale any), but there could be performance issue another solution could be boolea

Re: [GENERAL] GROUP BY Wildcard Syntax Thought

2011-05-04 Thread Tom Lane
"David Johnston" writes: > When specifying columns in a GROUP BY clause would it be possible to use a > wildcard to specify all columns coming from a given relation? I think the need for this will be largely superseded by the SQL-standard behavior that grouping by a primary key is sufficient (whi

Re: [GENERAL] Group by with insensitive order

2011-01-19 Thread Aram Fingal
> Easy way is something like > > SELECT LEAST(drug1, drug2), GREATEST(drug1, drug2), AVG(response) > FROM data > GROUP BY 1, 2 > > though it'd be a PITA to scale that to more than 2 drugs. > > regards, tom lane Thanks, Tom and Hubert, who said the same thing. For the for
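For reference, a self-contained version of the LEAST/GREATEST trick quoted above (table and column names as in the thread):

SELECT LEAST(drug1, drug2)    AS drug_a,
       GREATEST(drug1, drug2) AS drug_b,
       AVG(response)          AS avg_response
FROM data
GROUP BY 1, 2
ORDER BY 1, 2;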

Re: [GENERAL] Group by with insensitive order

2011-01-19 Thread Tom Lane
Aram Fingal writes: > Suppose I'm doing a group by query like the following: > SELECT drug1, drug2, AVG(response) > FROM data > GROUP BY drug1, drug2 > The problem is that the same drug may appear sometimes as drug1 and sometimes > as drug2. So, for example, the combination "aspirin, acetaminop

Re: [GENERAL] Group by with insensitive order

2011-01-19 Thread hubert depesz lubaczewski
On Wed, Jan 19, 2011 at 03:37:58PM -0500, Aram Fingal wrote: > Suppose I'm doing a group by query like the following: > > SELECT drug1, drug2, AVG(response) > FROM data > GROUP BY drug1, drug2 > > The problem is that the same drug may appear sometimes as drug1 and > sometimes as drug2. So, for e

Re: [GENERAL] Group by and lmit

2010-11-03 Thread Filip Rembiałkowski
2010/11/2 Bill Reynolds : >    I’m using postgres 8.3.7. that's a pity because in 8.4 we have window functions which make this possible in one query: select * from ( select x, y, count(*) as counter, row_number() over(partition by x order by count(*)) rn from mytable group by x, y order b
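A hedged completion of the truncated query above; the outer filter is my addition, implied by the goal of keeping the first 500 rows per x:

SELECT x, y, counter
FROM (
    SELECT x, y, count(*) AS counter,
           row_number() OVER (PARTITION BY x ORDER BY count(*)) AS rn
    FROM mytable
    GROUP BY x, y
) AS ranked
WHERE rn <= 500
ORDER BY x, rn;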

Re: [GENERAL] Group by and limit

2010-11-03 Thread Dimitri Fontaine
Reid Thompson writes: > I only want the first 500 for each x. > Any tips or tricks someone might know would be appreciated. > I’m using postgres 8.3.7. http://troels.arvin.dk/db/rdbms/#select-top-n Consider using a more recent version of PostgreSQL, equipped with window functions! Regards, --

Re: [GENERAL] Group By Question

2010-10-05 Thread Chris Velevitch
That's getting too complicated. It can be done simply as: SELECT DISTINCT ON (test.people.id) test.people.id, test.people.name, test.likes.ref FROM test.people LEFT JOIN test.likes ON test.people.id = test.likes.ref ORDER BY test.people.id, test.likes.date DESC Assuming people.id is unique, the

Re: [GENERAL] Group By Question

2010-10-02 Thread Christian Ullrich
* Andrew E. Tegenkamp wrote: I have two tables and want to attach and return the most recent data from the second table. Table 1 has a counter ID and name. Table 2 has a counter ID, Reference (to Table 1 ID), Date, and Like. I want to do a query that gets each name and their most recent like. I

Re: [GENERAL] Group By Question

2010-10-01 Thread Darren Duncan
Andrew E. Tegenkamp wrote: I have two tables and want to attach and return the most recent data from the second table. Table 1 has a counter ID and name. Table 2 has a counter ID, Reference (to Table 1 ID), Date, and Like. I want to do a query that gets each name and their most recent like. I ha

Re: [GENERAL] GROUP BY column alias?

2010-02-19 Thread Tom Lane
David Fetter writes: > On Fri, Feb 19, 2010 at 12:07:42AM -0500, Tom Lane wrote: >> SQL:1999 and later use a slightly different definition which is not >> entirely upward compatible with SQL-92. In most cases, however, >> PostgreSQL will interpret an ORDER BY or GROUP BY expression the >> same way

Re: [GENERAL] GROUP BY column alias?

2010-02-19 Thread David Fetter
On Fri, Feb 19, 2010 at 12:07:42AM -0500, Tom Lane wrote: > Lew writes: > > Eric B. Ridge wrote: > >> That explains it. Thanks. Breaks the rule of least surprise, > >> but it is SQL. > > SQL:1999 and later use a slightly different definition which is not > entirely upward compatible with SQL-92

Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Tom Lane
Lew writes: > Eric B. Ridge wrote: >> That explains it. Thanks. Breaks the rule of least surprise, but it is SQL. > I don't think it does break the rule of least surprise. How would one > expect the column or the alias to have precedence without knowledge of > the rule from documentation? The

Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Lew
Scott Bailey wrote: SQL name resolution rules are that column names have higher precedence than aliases and variables. So it will always bind to the column not the alias. Eric B. Ridge wrote: That explains it. Thanks. Breaks the rule of least surprise, but it is SQL. I don't think it doe

Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Eric B. Ridge
On Feb 18, 2010, at 5:52 PM, Scott Bailey wrote: > SQL name resolution rules are that column names have higher precedence than > aliases and variables. So it will always bind to the column not the alias. That explains it. Thanks. Breaks the rule of least surprise, but it is SQL. eric -- Sent

Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Scott Bailey
Eric B. Ridge wrote: On Feb 18, 2010, at 4:31 PM, Scott Bailey wrote: I'm not sure why you would be surprised by that behavior. You are grouping by a timestamp, so any microsecond difference will be a new group. I get that. ;) Hence the ::date. This is what doesn't make sense: Expected:

Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Eric B. Ridge
On Feb 18, 2010, at 4:31 PM, Scott Bailey wrote: > I'm not sure why you would be surprised by that behavior. You are grouping by > a timestamp, so any microsecond difference will be a new group. I get that. ;) Hence the ::date. This is what doesn't make sense: Expected: select day::date as
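The test case is cut off; here is a reconstruction of the gotcha (the table name and data are assumptions). Because the bare name binds to the column, microsecond differences survive the GROUP BY unless you group by the cast expression:

CREATE TEMP TABLE tc (day timestamp);
INSERT INTO tc VALUES (now()), (now() + interval '1 microsecond');

SELECT day::date AS day FROM tc GROUP BY day;        -- binds to the column: 2 rows
SELECT day::date AS day FROM tc GROUP BY day::date;  -- binds to the expression: 1 row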

Re: [GENERAL] GROUP BY column alias?

2010-02-18 Thread Scott Bailey
Eric B. Ridge wrote: Maybe I'm getting too old to SQL anymore, but I ran across something yesterday in a machine generated query that took me over an hour to figure out. Here's a little testcase. Maybe somebody can explain why the last "Not Expected" case does what it does. select version()

Re: [GENERAL] Group by problem!

2009-11-04 Thread Sam Mason
On Wed, Nov 04, 2009 at 12:41:25PM +0330, shahrzad khorrami wrote: > Column | Type | > id | integer | not null default > f1 | character varying(32) | > f3 | character varying(32) | > f4 | character varying(32) | > f5 | character varying(32) |

Re: [GENERAL] Group by problem!

2009-11-04 Thread Grzegorz Jaśkiewicz
On Wed, Nov 4, 2009 at 9:11 AM, shahrzad khorrami < shahrzad.khorr...@gmail.com> wrote: > > hi all, > > > > > Column | Type | > Modifiers > > +---+--- > id | integer | not null default >

Re: [GENERAL] Group by on %like%

2009-07-04 Thread Jennifer Trey
Sorry for taking so long to respond. The prefix thingy is definitely attractive for future development and I had already discovered them to be a challenge. However, I have noticed all kinds of ways people tend to write their number, including omitting the + or 00 .. so at this time, for this task, I

Re: [GENERAL] Group by on %like%

2009-07-03 Thread nha
Hello, On 3/07/09 12:53, Dimitri Fontaine wrote: Hi, On 3 Jul 09 at 11:44, Jennifer Trey wrote: I would like to run a query and group several rows based on a phone number. However, the same phone number might have a prefix on occasion, example : name | phone_number

Re: [GENERAL] Group by on %like%

2009-07-03 Thread Dimitri Fontaine
Hi, On 3 Jul 09 at 11:44, Jennifer Trey wrote: I would like to run a query and group several rows based on a phone number. However, the same phone number might have a prefix on occasion, example :

name  | phone_number
------+-------------
james | 123456
james | 00441234556

as you can s

Re: [GENERAL] Group by on %like%

2009-07-03 Thread Guy Flaherty
On Fri, Jul 3, 2009 at 8:32 PM, Guy Flaherty wrote: > > > On Fri, Jul 3, 2009 at 7:44 PM, Jennifer Trey wrote: > >> Hi, >> >> I would like to run a query and group several rows based on a phone >> number. >> >> However, the same phone number might have a prefix on occasion, example : >> >> name |

Re: [GENERAL] Group by on %like%

2009-07-03 Thread Guy Flaherty
On Fri, Jul 3, 2009 at 7:44 PM, Jennifer Trey wrote: > Hi, > > I would like to run a query and group several rows based on a phone number. > > However, the same phone number might have a prefix on occasion, example : > > name | phone_number > -- > james | 123456 > james | 00441

Re: [GENERAL] Group by clause creating "ERROR: wrong record type supplied in RETURN NEXT" (version 8.1.11 -- grr...)

2008-11-26 Thread Klint Gore
Webb Sprague wrote:

select * from mkn.query_table_data ('select sum(p087001) as pop from datatable_00040 group by substr(geo_id, 13, 6) order by pop desc limit 10') as FOO (pop integer);

ERROR: wrong record type supplied in RETURN NEXT
CONTEXT: PL/pgSQL function "query_table_data"
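Klint's reply is truncated. One classic cause of this error is that sum(integer) returns bigint, so the column definition list in the call has to match; this is an educated guess at the fix, not necessarily what the reply said:

SELECT * FROM mkn.query_table_data(
    'select sum(p087001) as pop from datatable_00040
     group by substr(geo_id, 13, 6) order by pop desc limit 10'
) AS foo (pop bigint);   -- bigint, not integer, to match what sum() returns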

Re: [GENERAL] Group BY and Chart of Accounts

2008-10-30 Thread WaGathoni
Works like a charm. Thank you very much Justin. On Thu, Oct 30, 2008 at 3:49 AM, justin <[EMAIL PROTECTED]> wrote: > There was a number of code mistakes in my examples as i was just doing it > off the top of my head, just went through it and got it all working. > I had to change the function ar

Re: [GENERAL] Group BY and Chart of Accounts

2008-10-29 Thread justin
There were a number of code mistakes in my examples, as I was just doing it off the top of my head; I went through it and got it all working. I had to change the function around, as it was double-dipping accounts. Just run this and it does work.

Re: [GENERAL] Group BY

2008-10-29 Thread Scott Marlowe
On Wed, Oct 29, 2008 at 2:59 PM, WaGathoni <[EMAIL PROTECTED]> wrote: > failing with an error to the effect that coa.doaIhaveChildren and coa.coa_id must be included in the GROUP BY > clause, and what is the recommended course of action. Generally the solution in postgresql i

Re: [GENERAL] GROUP BY does not follow SQL standard

2008-10-10 Thread Scott Marlowe
On Thu, Oct 9, 2008 at 8:01 AM, Tony Marston <[EMAIL PROTECTED]> wrote: > The Postgresql implementation of GROUP BY does not conform to either the > 1999 or 2003 SQL standard. The documentation states that every field in the > SELECT list which is not aggregated must be specified in the GROUP BY >

Re: [GENERAL] group by error message?

2008-09-25 Thread Louis-David Mitterrand
On Thu, Sep 25, 2008 at 11:01:08AM -0400, Tom Lane wrote: > Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > > critik=# select distinct pt.type_fr,sum(e2.id_event) from person_type > > pt natural join person_to_event join event e using (id_event) LEFT JOIN > > event e2 ON e.id_event = e2.

Re: [GENERAL] group by error message?

2008-09-25 Thread Tom Lane
Louis-David Mitterrand <[EMAIL PROTECTED]> writes: > critik=# select distinct pt.type_fr,sum(e2.id_event) from person_type > pt natural join person_to_event join event e using (id_event) LEFT JOIN event > e2 ON e.id_event = e2.id_event AND e2.id_event=219 join event_type et ON > e.id_event

Re: [GENERAL] GROUP BY hour

2008-08-01 Thread Nathan Thatcher
Brilliant! On Fri, Aug 1, 2008 at 12:18 PM, Steve Crawford <[EMAIL PROTECTED]> wrote: > Nathan Thatcher wrote: >> >> I have, what I imagine to be, a fairly simple question. I have a query >> that produces output for a line graph. Each row represents an interval >> on the graph. >> >> SELECT COUNT(

Re: [GENERAL] GROUP BY hour

2008-08-01 Thread Osvaldo Rosario Kussama
Nathan Thatcher wrote: I have, what I imagine to be, a fairly simple question. I have a query that produces output for a line graph. Each row represents an interval on the graph. SELECT COUNT(call_id) AS value, EXTRACT(hour from start_time) AS hour FROM c_call WHERE start_time >= '2008-08-01

Re: [GENERAL] GROUP BY hour

2008-08-01 Thread Steve Crawford
Nathan Thatcher wrote: I have, what I imagine to be, a fairly simple question. I have a query that produces output for a line graph. Each row represents an interval on the graph. SELECT COUNT(call_id) AS value, EXTRACT(hour from start_time) AS hour FROM c_call WHERE start_time >= '2008-08-01 00
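If the graph needs one point per clock hour rather than one bucket per hour of day pooled across days, date_trunc is the usual tool. A sketch against the c_call table from the question, not necessarily what the truncated reply proposed:

SELECT date_trunc('hour', start_time) AS hour, count(call_id) AS value
FROM c_call
WHERE start_time >= '2008-08-01 00:00'
GROUP BY 1
ORDER BY 1;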

Re: [GENERAL] GROUP BY, ORDER & LIMIT ?

2008-05-06 Thread hubert depesz lubaczewski
On Tue, May 06, 2008 at 01:22:30PM -0400, Kynn Jones wrote: > Suppose table X has two columns: class (TEXT) and size (INT). I want a > listing showing the (up to) 5 largest values of "size" for each value of > "class" (for some values of "class" the total number of available records > may be less

Re: [GENERAL] GROUP BY, ORDER & LIMIT ?

2008-05-06 Thread David Wilson
select class, size from X t1 where size in (select size from X t2 where t2.class=t1.class order by size desc limit 5); On Tue, May 6, 2008 at 1:22 PM, Kynn Jones <[EMAIL PROTECTED]> wrote: > Suppose table X has two columns: class (TEXT) and size (INT). I want a > listing showing the (up to) 5 la
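For comparison, the window-function form of the same "top 5 sizes per class" requirement (8.4 or later); unlike the IN subquery above, it is not thrown off by duplicate size values:

SELECT class, size
FROM (
    SELECT class, size,
           row_number() OVER (PARTITION BY class ORDER BY size DESC) AS rn
    FROM X
) ranked
WHERE rn <= 5;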

Re: [GENERAL] group by and count(*) behaviour in 8.3

2008-01-02 Thread Pavel Stehule
> > > can you send structure and execution plan? > Thank you for your request, the execution plan is the one from > "explain" (I think) but what is the "structure plan"? no, only structure :) table and fields. > The problem was a bug in my import into the new database! > > To avoid future error of t

Re: [GENERAL] group by and count(*) behaviour in 8.3

2008-01-02 Thread Edoardo Panfili
Pavel Stehule wrote: Hello it works for me: postgres=# create table c1(n varchar, e integer); CREATE TABLE postgres=# create table c2(n2 varchar, e integer); CREATE TABLE postgres=# insert into c1 values('aa',1),('bb',2),('aa',3); INSERT 0 3 postgres=# insert into c2 values('aa',1),('bb',2)

Re: [GENERAL] group by and count(*) behaviour in 8.3

2008-01-02 Thread Pavel Stehule
Hello it works for me: postgres=# create table c1(n varchar, e integer); CREATE TABLE postgres=# create table c2(n2 varchar, e integer); CREATE TABLE postgres=# insert into c1 values('aa',1),('bb',2),('aa',3); INSERT 0 3 postgres=# insert into c2 values('aa',1),('bb',2),('aa',3); INSERT 0 3 postgr

Re: [GENERAL] Group By question

2007-10-18 Thread brian
Jeff Lanzarotta wrote: Sam Mason <[EMAIL PROTECTED]> wrote: On Tue, Oct 16, 2007 at 07:46:34AM -0700, Jeff Lanzarotta wrote: Hello, I have a table that looks something like this: SKU Dept Col1 Col2 Col3 --- -- --- --- -- 1112 3

Re: [GENERAL] Group By question

2007-10-18 Thread Jeff Lanzarotta
Okay, actually the query is something like:

select dept, sum((col1 + col2) * col3) from table group by dept

So, the output would look something like:

Dept  Total
----  -----
   1     26
   2     18
   3      9

Sam Mason <[EMAIL PROTECTED]> wrote: On Tue, Oct 16, 2007 at 07:46:34AM -0700,

Re: [GENERAL] Group By question

2007-10-16 Thread Scott Marlowe
On 10/16/07, Jeff Lanzarotta <[EMAIL PROTECTED]> wrote: > Hello, > > I have a table that looks something like this: > > SKU Dept Col1 Col2 Col3 > --- -- --- --- -- > 1112 3 > 2123 4 > 321

Re: [GENERAL] Group By question

2007-10-16 Thread Sam Mason
On Tue, Oct 16, 2007 at 07:46:34AM -0700, Jeff Lanzarotta wrote: > Hello, > > I have a table that looks something like this: > > SKU Dept Col1 Col2 Col3 > --- -- --- --- -- > 1112 3 > 2123 4 > 32

Re: [GENERAL] group by and aggregate functions on regular expressions

2007-03-08 Thread Omar Eljumaily
select count(*), address where address ~* 'magil' or address ~* 'whitewater' etc group by address would that work? Rhys Stewart wrote: Hi all, i have a table with an address column. I wanted to count the number of rows with a given regex match. so i ended up with the following very verbose
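The quoted suggestion is missing its FROM clause and would also group every distinct address separately. A hedged variant that groups by which pattern matched (the table name addresses_tbl is an assumption):

SELECT CASE
         WHEN address ~* 'magil'      THEN 'magil'
         WHEN address ~* 'whitewater' THEN 'whitewater'
       END AS matched_pattern,
       count(*) AS n
FROM addresses_tbl
WHERE address ~* 'magil' OR address ~* 'whitewater'
GROUP BY 1;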

Re: [GENERAL] Group By?

2005-11-28 Thread Bob Pawley
Thank you I'll give it a try. Bob - Original Message - From: "Jim C. Nasby" <[EMAIL PROTECTED]> To: "Bob Pawley" <[EMAIL PROTECTED]> Cc: "Guy Rouillier" <[EMAIL PROTECTED]>; "PostgreSQL General" Sent: Monday, Novembe

Re: [GENERAL] Group By?

2005-11-28 Thread Jim C. Nasby
onitor int4, valve int4) > >> > >>Row 1 ( 1, 1, 4, ) > >> > >>Row 2 (2, 2, 3, 5) > >> > >> > >> > >>Once this is done the devices will be organized into loops and each > >>device > >>in the loop will have a direct link to other pa

Re: [GENERAL] Group By?

2005-11-28 Thread Bob Pawley
eSQL General" Sent: Monday, November 28, 2005 2:18 PM Subject: Re: [GENERAL] Group By? So are you trying to get a list of all 'mon's and 'valve's for each given association? On Wed, Nov 23, 2005 at 04:31:04PM -0800, Bob Pawley wrote: Guy Sorry about the chart. It h

Re: [GENERAL] Group By?

2005-11-28 Thread Jim C. Nasby
; auto_control table as the information is being entered or would the control > table need to be fully completed? > > > > Hope this is finally clear. > > > > Thanks for you help. > > > > Bob > > - Original Message - > From: &quo

Re: [GENERAL] Group By?

2005-11-23 Thread Bob Pawley
p. Bob - Original Message ----- From: "Guy Rouillier" <[EMAIL PROTECTED]> To: "PostgreSQL General" Sent: Wednesday, November 23, 2005 2:17 PM Subject: Re: [GENERAL] Group By? Bob Pawley wrote: Bruno The table I previously sent came through distorted and probable

Re: [GENERAL] Group By?

2005-11-23 Thread Guy Rouillier
t;> could also have a report app do it for you. In the report app method, >> you would be best to return rows ordered by association and then >> device_ID and have the app check for when the association value >> changes. >> >>> >>> Is this best acco

Re: [GENERAL] Group By?

2005-11-22 Thread Bob Pawley
- Original Message - From: "Guy Rouillier" <[EMAIL PROTECTED]> To: "Postgre General" Sent: Monday, November 21, 2005 4:25 PM Subject: Re: [GENERAL] Group By? Converted your message to plain text as preferred on most mailing lists. Bob Pawley wrote: >I want to ta

Re: [GENERAL] Group By?

2005-11-21 Thread Bruno Wolff III
; serial varchar int4 > > 1 mon 1 > 2 valve 2 > 3 valve 1 > 4 mon 2 > 5 valve 1 > > > Auto_control > > loop_id mon valve valve > serial int4 int4 int4 > 1 1 3 5 > 2 2 4 > > > > - Or

Re: [GENERAL] Group By?

2005-11-21 Thread Bob Pawley
valve 1 4 mon 2 5 valve 1 Auto_control loop_id mon valve valve serial int4 int4 int4 1 1 3 5 2 2 4 - Original Message - From: "Guy Rouillier" <[EMAIL PROTECTED]> To: "Postgre General" Sent: Monday, November 21, 2005 4:25 PM Subje

Re: [GENERAL] Group By?

2005-11-21 Thread Bruno Wolff III
On Mon, Nov 21, 2005 at 15:53:15 -0800, Bob Pawley <[EMAIL PROTECTED]> wrote: > I want to take the serial ID of several values in different rows in one table > and insert them into a single row of another table. > > Would the 'group by' command be the best way to do this? >From your descriptio

Re: [GENERAL] Group By?

2005-11-21 Thread Guy Rouillier
Converted your message to plain text as preferred on most mailing lists. Bob Pawley wrote: > I want to take the serial ID of several values in different rows in > one table and insert them into a single row of another table. > > Would the 'group by' command be the best way to do this? Could you

Re: [GENERAL] GROUP BY requirement

2005-08-30 Thread Scott Marlowe
On Fri, 2005-08-26 at 14:39, Bill Moseley wrote: > I'm wondering if adding a GROUP BY (as required by Postgres) will > change the results of a select on a view. > > I have the following view which joins a "class" with a teacher. A > teacher is a "person" and I have an "instructors" link table. >

Re: [GENERAL] Group By and wildcards...

2005-02-20 Thread Sim Zacks
Even if the function did a select from d, it could still have plenty of duplicates. To remove that possibility you would have to use the distinct clause, which is also generally less efficient than a group by. "Bruno Wolff III" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED] > On Sat, F

Re: [GENERAL] Group By and wildcards...

2005-02-19 Thread Oisin Glynn
quot; <[EMAIL PROTECTED]>; Sent: Saturday, February 19, 2005 13:36 Subject: Re: [GENERAL] Group By and wildcards... > On Sat, Feb 19, 2005 at 15:59:52 -0200, > Jon Lapham <[EMAIL PROTECTED]> wrote: > > > > Since I do not want to have to re-write all my aggregate fun

Re: [GENERAL] Group By and wildcards...

2005-02-19 Thread Greg Stark
Bruno Wolff III <[EMAIL PROTECTED]> writes: > On Sat, Feb 19, 2005 at 12:07:12 -0200, > Jon Lapham <[EMAIL PROTECTED]> wrote: > > > > SELECT a.*, b.*, c.*, SUM(d.blah) > > FROM a, b, c, d > > WHERE > > GROUP BY a.*, b.*, c.* > > > > Instead of having to expand the "GROUP BY a.*, b.*, c.*" usi
