Postgres 11 procedures and result sets
Hello list,

I have a question regarding the new stored procedures in PostgreSQL 11 (I tested beta 4): I'd like to know whether it is somehow possible to get one (or possibly more) result sets from SELECTs within the stored procedure, as is possible in MariaDB, MySQL, or SQL Server.

What I have found so far is to use INOUT parameters. I then have to define all columns of the SELECT as INOUT parameters, but I only get one row back, regardless of how many rows (> 0) have been selected.

I could also define an INOUT refcursor parameter and parse that one, but that would make the new stored procedure identical to the well-known functions returning a refcursor. And with that approach I have an additional step in Qt code, for example, since I have to execute the procedure, parse the refcursor result, execute the FETCH, and then parse the cursor output I'm actually interested in.

Did I miss something? Thanks in advance!

--
Kind regards
Jan
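For reference, the refcursor variant I was describing looks roughly like this (a minimal sketch; the items table and the procedure name are made up for illustration):

create procedure get_items(inout result refcursor)
language plpgsql
as $$
begin
    -- open the named cursor over the result set we actually want
    open result for select id, name from items;
end;
$$;

-- the extra round trips: call the procedure, then fetch from the cursor,
-- all inside one transaction so the cursor stays open
begin;
call get_items('items_cur');
fetch all from items_cur;
commit;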
Re: A GROUP BY question
Hi Stan,

On Tuesday, 13 August 2019, 13:10:18 CEST, stan wrote:
> Maybe I have a basic misunderstanding. What I am trying to get is a total
> cost for each project. This would be calculated by multiplying rate and
> hours for each row, on a per project base, and then summing all of the
> products of this multiplication.

First of all, complex views including many joins, GROUP BY, HAVING, etc. usually tend to show bad performance on large datasets if used for anything other than a simple SELECT FROM ... statement. At least that is my personal experience.

> CREATE OR REPLACE view tasks_view as
> select
>    project.proj_no ,
>    employee.first_name ,
>    employee.last_name ,
>    employee.id ,
>    task_instance.hours ,
>    work_type.type,
>    work_type.descrip,
>    rate.rate,
>    employee.hourly_rate ,
>    rate.rate * task_instance.hours as result ,

^ this gives you the product *per line*

>    SUM (rate.rate * task_instance.hours)

^ this gives you the *sum of all products over all grouped lines*

> from
>    task_instance
>    join rate on
>        rate.employee_key = task_instance.employee_key
>        AND
>        rate.work_type_key = task_instance.work_type_key
>    inner join employee on
>        rate.employee_key = employee.employee_key
>    inner join work_type on
>        rate.work_type_key = work_type.work_type_key
>    inner join project on
>        project.project_key = task_instance.project_key
> GROUP BY
>    project.project_key ,
>    employee.first_name ,
>    employee.last_name ,
>    employee.id ,
>    task_instance.hours ,
>    work_type.type,
>    work_type.descrip,
>    rate.rate,
>    employee.hourly_rate

^ You group by the columns you use in the sum, so you will get no sum at all, but the product *per line*, as selected just before the sum.

> ORDER BY
>    project.proj_no
> ;

You will have to find out whether you really need to group over several lines and take a sum over those lines, or whether you need the product (rate.rate * task_instance.hours) per line.

--
Best regards
Jan
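If all you really need is one total per project, a minimal sketch (assuming the tables quoted above) would group by the project columns only and drop the per-line columns from the select list:

create or replace view project_cost_view as
select
    project.proj_no,
    sum(rate.rate * task_instance.hours) as total_cost
from task_instance
    join rate
        on rate.employee_key = task_instance.employee_key
        and rate.work_type_key = task_instance.work_type_key
    join project
        on project.project_key = task_instance.project_key
group by
    project.project_key,
    project.proj_no
order by
    project.proj_no;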
Re: Extend inner join to fetch not yet connected rows also
Hey,

On Sunday, 22 September 2019, 13:21:46 CEST, Arup Rakshit wrote:
> I have craftsmanships table which has (id, name) and users table (id, email,
> ..). When a user has some craftsmanships, they are stored inside the
> contractor_skills(user_id, craftsmanship_id, id) table.
[...]
> But I want to list all craftsmanships and has column should have `t` when
> user_id #8 has it, else `f`. How can I extend this query?

Maybe something like

select
    c.id,
    c.name,
    case when cs.user_id = 8 then true else false end as has
from craftsmanships c
    left join contractor_skills cs
        on cs.craftsmanship_id = c.craftmanship_id;

--
Kind regards
Jan
Re: Extend inner join to fetch not yet connected rows also
Hi Arup,

On Sunday, 22 September 2019, 14:48:20 CEST, Arup Rakshit wrote:
> Hi Jan,
>
> > On 22-Sep-2019, at 5:38 PM, Jan Kohnert wrote:
> > maybe something like
> >
> > select
> >     c.id,
> >     c.name,
> >     case when cs.user_id = 8 then true else false end as has
> > from craftsmanships c
> >     left join contractor_skills cs
> >         on cs.craftsmanship_id = c.craftmanship_id;
>
> But this query fetched duplicate data:

Yeah, that's possible, since I don't exactly know your data model. If only the values above are required, you could simply use DISTINCT:

select distinct
    c.id,
    c.name,
    case when cs.user_id = 8 then true else false end as has
from craftsmanships c
    left join contractor_skills cs
        on cs.craftsmanship_id = c.id
order by c.id;

--
Kind regards
Jan
Re: Extend inner join to fetch not yet connected rows also
Hi Arup,

On Sunday, 22 September 2019, 15:30:38 CEST, Arup Rakshit wrote:
> When someone adds a craftsmanship to their skill set, the contractor_skills
> table holds that relationship. I don't think distinct is the correct tool,
> as it will eliminate the correct data. users and craftsmanships have an m:n
> relationship via the join table contractor_skills.

Depending on the definition of the table "contractor_skills" it can give you an n:m relationship between user_id and craftsmanship_id, that is true.

> SELECT
>     craftsmanships.id,
>     craftsmanships.name,
>     CASE WHEN contractor_skills.user_id IS NULL THEN
>         FALSE
>     ELSE
>         TRUE
>     END AS has
> FROM
>     "craftsmanships"
>     LEFT JOIN "contractor_skills" ON "contractor_skills"."craftsmanship_id" = "craftsmanships"."id"
>     LEFT JOIN "users" ON "users"."id" = "contractor_skills"."user_id"
> WHERE (contractor_skills.user_id = 8
>     OR contractor_skills.user_id IS NULL)
> ORDER BY
>     "craftsmanships"."id";

BUT: you don't use any of users' columns in SELECT, WHERE, or ORDER BY. And since users is in a LEFT JOIN, it is just a table which is neither used nor relevant in that particular statement.

In the end, it depends on how data is structured in your database and what you want to achieve.

--
Kind regards
Jan
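If what you want is exactly one row per craftsmanship with a true/false flag for user 8, an EXISTS subquery would avoid both the duplicates and the unused join. A sketch, assuming contractor_skills(user_id, craftsmanship_id) as described above:

select
    c.id,
    c.name,
    exists (
        select 1
        from contractor_skills cs
        where cs.craftsmanship_id = c.id
          and cs.user_id = 8
    ) as has
from craftsmanships c
order by c.id;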
Re: SQL Query Syntax help
Hi,

On Wednesday, 22 January 2020, 12:00:45 CET, srikkanth wrote:
[snipped HTML]

Something like

select id, 'Col A' as "Col Name", "Col A" as "Col Value" from input
union all
select id, 'Col B' as "Col Name", "Col B" as "Col Value" from input
...
order by 1, 2;

should work.

--
Kind regards
Jan
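Alternatively, assuming PostgreSQL and the same hypothetical input table (with "Col A", "Col B", ... being of a compatible type), a LATERAL VALUES list does the same unpivot in a single scan of the table:

select
    i.id,
    v."Col Name",
    v."Col Value"
from input i
    cross join lateral (
        values ('Col A', i."Col A"),
               ('Col B', i."Col B")
    ) as v("Col Name", "Col Value")
order by 1, 2;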
Re: High Availability PostgresDB on K8s
Hi,

On Friday, 19 August 2022, 17:09:35 CEST, Marco Schmucki wrote:
> I want to run an HA-Postgres-DB on a Kubernetes cluster for testing purposes
> (Proof of Concept).
>
> Do you have some links or documentation?

We're usually using Bitnami's Helm charts [1], [2] to deploy PostgreSQL into our K8S clusters. They also have -ha variants that might fit your needs, though I haven't tested them myself.

Be aware that you'll get a single-master, multiple-replica setup, so you still only have one Postgres instance actually *writing* data. As long as most of the traffic is read traffic, that might work just fine; if you have a lot of write traffic, you will be limited by the master pod, the node it is running on, and of course the underlying PVC.

If you need multi-master, you might have a look at CockroachDB [3], which is mostly Postgres-compatible.

[1] https://bitnami.com/stack/postgresql/helm
[2] https://github.com/bitnami/charts/tree/master/bitnami/postgresql/#installing-the-chart
[3] https://www.cockroachlabs.com/product/kubernetes/

--
Kind regards
Jan
Re: Subquery to select max(date) value
On Tuesday, 12 February 2019, 23:23:49 CET, Rich Shepard wrote:
> The query is to return the latest next_contact date for each person. Using
> the max() aggregate function and modeling the example of lo_temp on page 13
> of the rel. 10 manual I wrote this statement:

You don't even need a subselect to do so:

select
    P.person_id,
    P.lname,
    P.fname,
    P.direct_phone,
    O.org_name,
    max(A.next_contact) as next_contact
from People as P
    join Organizations as O on P.org_id = O.org_id
    join Activities as A on P.person_id = A.person_id
group by
    P.person_id,
    P.lname,
    P.fname,
    P.direct_phone,
    O.org_name;

--
Kind regards
Jan
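Should you ever need the whole latest Activities row per person rather than just the date, PostgreSQL's DISTINCT ON would be an option as well. A sketch against the same tables:

select distinct on (P.person_id)
    P.person_id,
    P.lname,
    P.fname,
    A.next_contact
from People as P
    join Activities as A on P.person_id = A.person_id
order by P.person_id, A.next_contact desc;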
Implementing product-aggregate
Hi,

we need a product aggregate and used to implement it as

exp(sum(ln([COLUMN])))

While using the sum of logarithms works RDBMS-independently, we'd like to switch to a more PostgreSQL-native way of doing this and implement an aggregate to be used instead. Currently the implementation is

create aggregate prod(numeric) (sfunc = numeric_mul, stype = numeric);

This simply calls the implementation function of the * operator for the numeric data type. Since I could not find any documentation of this implementation function, I am wondering whether using a possibly internal function might be a bad idea.

Are there any recommendations on this? Thanks for any input!

--
Kind regards
Jan
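Usage would then simply be (table and column names are just a made-up example):

-- multiply all factors per group using the custom aggregate
select group_id, prod(factor) as total_factor
from measurements
group by group_id;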
Re: Implementing product-aggregate
Hi again,

On Monday, 11 March 2024, 10:16:33 CET, Jan Kohnert wrote:
> create aggregate prod(numeric) (sfunc = numeric_mul, stype = numeric)

My basic idea was creating a function

create function multiply(arg1 numeric, arg2 numeric)
returns numeric
language sql
immutable
returns null on null input
return arg1 * arg2;

and using that function instead of the undocumented numeric_mul as the sfunc in the aggregate definition. Then again, this seems odd, too, since we would only be reimplementing basic stuff that's already there. I'm still undecided...

--
Kind regards
Jan
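For completeness, the aggregate definition would then become (a sketch, using the multiply function above as the state transition function):

create aggregate prod(numeric) (sfunc = multiply, stype = numeric);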
Re: Implementing product-aggregate
Hello,

On Thursday, 14 March 2024, 15:17:58 CET, Tom Lane wrote:
> I wouldn't be concerned about relying on numeric_mul (or any of the
> other functions underlying standard operators). They're undocumented
> only because documenting both the functions and the operators would
> bloat the documentation to little purpose. Using one makes your code
> not so portable to non-Postgres DBMSes, but you already crossed that
> bridge by deciding to use a custom aggregate.

Thank you for clarifying this. We're not too concerned about portability. Let's face the facts: porting a reasonably complex database and the application using it from one DBMS to another will almost certainly introduce an awful lot of portability issues (e.g. the pseudo-types (big)serial, upserts, differences in merge implementations, programmability, and so on). My main concern was that undocumented features sometimes tend to change without notice, since users are not expected to use them.

> A bigger question is whether this implementation actually has the
> properties you want --- notably, maybe you should be using type
> float8 not numeric. Numeric would get pretty slow and be carrying
> an awful lot of decimal places by the end of the query, I fear.

This needs to be checked on our side. I was expecting that using an aggregate this way would be significantly faster than using exp(sum(ln())). Though we're not multiplying too many lines in a statement, if using the aggregate slows down performance, we should probably stick with the old way of doing it.

Best regards!

--
Kind regards
Jan
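Should we end up switching to float8, the aggregate definition would presumably become the analogous one (a sketch; float8mul being the function behind * for double precision):

create aggregate prod(float8) (sfunc = float8mul, stype = float8);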