ERROR: control reached end of function without RETURN, except, it is not a function it is a procedure or a nameless block.
Hi Support,

I get an error in a stored procedure - not a stored function, mind you:

    ERROR: control reached end of function without RETURN

The error occurs when doing an EXIT in a stored procedure. I cannot give all of the stored procedures involved, but I can simulate the problem with a very simple block:

DO
$BODY$
<<simple_block>>
BEGIN
    begin
        exit simple_block;
        -- for demo purposes
        raise notice '%', 'unreachable!';
    end;
    raise notice '%', 'End of block';
end;
$BODY$;

This returns the error on Postgres versions 13, 12 and 9.

Now I thought this was textbook behaviour for exiting a code block. I do not like programming this way, but it is part of an Oracle-to-PG migration project.

What do I miss?

Kind regards,

If you put this code snippet in a file and execute it like:

postgres@herwig-NUC10i7FNH:~$ psql -f do_test.sql
psql:do_test.sql:12: ERROR:  control reached end of function without RETURN
CONTEXT:  PL/pgSQL function inline_code_block

Herwig
Re: ERROR: control reached end of function without RETURN, except, it is not a function it is a procedure or a nameless block.
Hi

On 13. 9. 2021 at 13:23, Herwig Goemans wrote:

> Hi Support,
>
> I get an error in a stored procedure - not a stored function, mind you:
> control reached end of function without RETURN
>
> The error occurs when doing an EXIT in a stored procedure. I cannot
> give all of the stored procedures involved, but I can simulate the
> problem with a very simple block:
>
> DO
> $BODY$
> <<simple_block>>
> BEGIN
>     begin
>         exit simple_block;
>         -- for demo purposes
>         raise notice '%', 'unreachable!';
>     end;
>     raise notice '%', 'End of block';
> end;
> $BODY$;
>
> This returns the error on Postgres versions 13, 12 and 9.
>
> Now I thought this was textbook behaviour for exiting a code block. I
> do not like programming this way, but it is part of an Oracle-to-PG
> migration project.
>
> What do I miss?

This is a bug in Postgres. As a workaround, you can wrap the code in an extra outer BEGIN ... END block.

Regards

Pavel

> Kind regards,
>
> If you put this code snippet in a file and execute it like:
>
> postgres@herwig-NUC10i7FNH:~$ psql -f do_test.sql
> psql:do_test.sql:12: ERROR:  control reached end of function without RETURN
> CONTEXT:  PL/pgSQL function inline_code_block
>
> Herwig
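[A minimal sketch of the workaround Pavel describes, applied to the demo block - my own illustration, not from the thread. With an extra unlabeled outer BEGIN ... END, the EXIT no longer unwinds the outermost block of the body, so the spurious no-RETURN check is never reached:

DO
$BODY$
begin  -- extra outer block: the workaround
    <<simple_block>>
    begin
        begin
            exit simple_block;
            -- for demo purposes
            raise notice '%', 'unreachable!';
        end;
        raise notice '%', 'End of block';  -- still skipped by the EXIT
    end;
end;
$BODY$;

This runs without error, printing nothing, since the EXIT skips both notices by design.]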
Re: ERROR: control reached end of function without RETURN, except, it is not a function it is a procedure or a nameless block.
Pavel Stehule writes:
> On 13. 9. 2021 at 13:23, Herwig Goemans wrote:
>> I get an error in a stored procedure - not a stored function mind you -
>> control reached end of function without RETURN
>> The error occurs when doing an exit in a stored procedure.

> This is a bug in Postgres.

Yeah, agreed. Looks reasonably simple to fix, though.

regards, tom lane
Fastest option to transfer db?
I have a database that I want to transfer from one VM to another. Both VMs are running on the same (ProxMox) host.

select pg_size_pretty(pg_database_size('dbname'));

shows the database size to be 336GB. What is going to be the fastest method to transfer this data?

- The database cluster has other databases on it, so taking the cluster offline and copying files is not an option.
- The specific database I want to transfer is live, receiving data on an ongoing basis. Some downtime is acceptable; days of downtime, not so much.

My first attempt was to run the following command from the destination machine:

pg_dump -C -h source.machine.address dbname | psql -h 127.0.0.1 postgres

This command pegged a single core on both machines and produced network activity of around 30M (according to the ProxMox monitor). By my calculations that implies the transfer would take around 25 hours - not really good enough, and it just feels sluggish, given that even a gig connection should be able to transfer data at near gig speeds. Obviously there is a bottleneck somewhere in the system/command.

For my next attempt, I tried the following command from the SOURCE machine:

pg_dump -Fc -Cc -U israel dbname | pg_restore -U israel -h dest.machine.address -d postgres

This resulted in 100% CPU usage on the source machine by pg_dump, around 50% CPU usage on the source by postmaster, and around 30-50% CPU usage on the destination by postgres (I think the postmaster vs postgres process name difference is due to CentOS vs Ubuntu? Not sure. Probably doesn't matter.) Network throughput dropped to only 14M under this scenario, but of course that is compressed data, so time calculations are hard.

During both attempts I saw no disk activity on the destination host (according to ProxMox monitoring), nor did RAM usage increase, so I'm not sure where the data is actually going, which bothers me.

Is there a better way to do this?
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell: 907-328-9145
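[Not from the thread, but one hedged suggestion for the single-core bottleneck: pg_dump's directory format supports parallel workers, at the cost of staging the dump on disk instead of streaming it. A minimal sketch - the paths and job counts are placeholders:

# on the source: parallel directory-format dump (-j = number of workers)
pg_dump -Fd -j 4 -U israel -f /tmp/dbname.dump dbname

# copy /tmp/dbname.dump to the destination (e.g. rsync), then restore
# there in parallel; --create makes the database before restoring into it
pg_restore -j 4 -U israel --create -d postgres /tmp/dbname.dump

Parallel jobs only help if the database has multiple large tables to split across workers, and this approach needs enough scratch disk for the dump.]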
Re: Query takes around 15 to 20 min over 20Lakh rows
Hi Tom/David,

Could you please help me get started on optimising this query?

Thanks & Regards
Shubham Mittal

On Tue, Sep 7, 2021, 8:57 PM Michael Lewis wrote:

> Have you ever used this site to visualize the explain plan and spot bad
> estimates and slow nodes? https://explain.depesz.com/s/WE1R
>
> This stands out to me:
>
> Subquery Scan on npiactionjoin (cost=10,165,289.40..10,167,192.01
> rows=293 width=16) (actual time=118,413.432..118,806.684 rows=446,782
> loops=1)
>   Filter: (npiactionjoin.rn = 1)
>
> That estimate is pretty far off, and this node and the final node above
> it are the biggest slowdowns. If you filtered down to the record you want
> from task_history BEFORE the join, then maybe you would have quicker
> results. I might try a materialized CTE, or even an analyzed temp table
> if that option is available to you, so the planner makes informed
> decisions.
>
> By the way, the ORDER BY on that row_number suggests you are getting the
> OLDEST activity related to the task, which could maybe be cached rather
> than re-calculated daily as this query runs.
>
> Michael Lewis | Database Engineer
> Entrata
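[A minimal sketch of the kind of rewrite Michael suggests, under a hypothetical schema - only task_history and the rn = 1 filter appear in the thread, so every other name here is made up. DISTINCT ON stands in for the row_number()/rn = 1 pattern:

-- Option 1: materialized CTE (AS MATERIALIZED needs v12+; older
-- versions materialize plain CTEs anyway), computed once before the join
WITH first_action AS MATERIALIZED (
    SELECT DISTINCT ON (task_id)       -- hypothetical key column
           task_id, action, action_time
    FROM task_history
    ORDER BY task_id, action_time      -- oldest activity per task
)
SELECT t.*, fa.action, fa.action_time
FROM tasks t                           -- hypothetical main table
JOIN first_action fa USING (task_id);

-- Option 2: analyzed temp table, so the planner has real statistics
CREATE TEMP TABLE first_action AS
SELECT DISTINCT ON (task_id) task_id, action, action_time
FROM task_history
ORDER BY task_id, action_time;
ANALYZE first_action;]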
Re: Fastest option to transfer db?
What version of Postgres is the source? Can you make use of logical replication?
Re: Fastest option to transfer db?
Source DB is 11.12, destination is 13.4. I'll look into logical replication - it sounds like it could be a good option. Thanks!
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell: 907-328-9145

> On Sep 13, 2021, at 10:10 AM, Michael Lewis wrote:
>
> What version of Postgres is the source? Can you make use of logical
> replication?
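[For reference, the basic shape of a logical replication setup is roughly the following - a minimal sketch, not from the thread, assuming wal_level = logical on the source and that the destination schema already exists (e.g. restored via pg_dump --schema-only); the publication and subscription names are made up:

-- on the source (11.12):
CREATE PUBLICATION db_move FOR ALL TABLES;

-- on the destination (13.4):
CREATE SUBSCRIPTION db_move_sub
    CONNECTION 'host=source.machine.address dbname=dbname user=israel'
    PUBLICATION db_move;
-- the initial table sync starts automatically once the subscription exists]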
Re: Fastest option to transfer db?
Ok, I have logical replication up and running (I guess - it seemed too simple to be working. Shouldn't it be complicated, requiring many steps and configuration changes?), maxing out one CPU core on each machine (more or less), and showing network throughput of around 15M. If du is to be believed, it's transferring data at about 0.8GB/minute, implying something like an 8 hour transfer time.

Of course, since it is replication, it has the benefit that any data that comes in during that 8 hour window should also be replicated, after which the two systems should remain in sync, allowing for zero (or nearly zero) downtime cutover. Which is nice.

Any gotchas I need to be aware of during this initial transfer window, such as WAL files building up on the source machine?
---
Israel Brewster
Software Engineer
Alaska Volcano Observatory
Geophysical Institute - UAF
2156 Koyukuk Drive
Fairbanks AK 99775-7320
Work: 907-474-5172
cell: 907-328-9145

> On Sep 13, 2021, at 10:10 AM, Michael Lewis wrote:
>
> What version of Postgres is the source? Can you make use of logical
> replication?
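[On the WAL question: a subscription holds a replication slot open on the source, so WAL is retained there until the subscriber catches up. A hedged sketch of a query to keep an eye on that during the initial copy - run on the source, works on v10+:

SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
           AS retained_wal
FROM pg_replication_slots;

If retained_wal grows without bound while a slot is inactive, that slot is what is pinning WAL on disk.]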
what is pg_stat_activity.query for idle connections?
11.3 on Linux.

I have a DB with a worrisome number of connections with pg_stat_activity.query = '/* DBD::Pg ping test v3.5.3 */', all of them with state = 'idle'. I have code that uses Perl's ping method to ping the DB, and I suspect this is the source. Is '/* DBD::Pg ping test v3.5.3 */' just the last thing the connection did, and because it's idle now, it's actually doing nothing?
Re: what is pg_stat_activity.query for idle connections ?
David Gauthier writes:
> 11.3 on Linux
> I have a DB with a worrisome number of connections with
> pg_stat_activity.query = '/* DBD::Pg ping test v3.5.3 */', all of them
> with state='idle'. I have code that uses Perl's ping method to ping the
> DB and I suspect this is the source. Is '/* DBD::Pg ping test v3.5.3 */'
> the last thing it did, but because it's idle now, it's actually doing
> nothing?

Yes, exactly.

regards, tom lane
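[A hedged sketch of a query to size up those idle sessions - pg_stat_activity.query shows the most recently executed statement, so grouping on it alongside state reveals where the idle connections come from and how long they have sat:

SELECT query, count(*) AS connections,
       max(now() - state_change) AS longest_idle
FROM pg_stat_activity
WHERE state = 'idle'
GROUP BY query
ORDER BY connections DESC;]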
To JIT (or not!) in Ubuntu packages
We've noticed that the Ubuntu postgresql-12 package has --with-llvm enabled on x86_64, but not on aarch64. Does anybody know if this was intentional, or just an oversight? For what it's worth, it seems the beta postgresql-14 package for Ubuntu still doesn't have --with-llvm. I'm not sure if this explains why our aarch64 DBs are missing a jit_provider GUC variable, but I expect it does explain why pg_jit_available() tells me no.
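[For anyone wanting to check their own build, a minimal sketch using standard server functions and settings (pg_jit_available() exists since v11):

-- does this server binary include JIT support at all?
SELECT pg_jit_available();  -- false if built without --with-llvm

-- is JIT enabled for this session? (defaults to on since v12)
SHOW jit;]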
The tragedy of SQL
A fun philosophical discussion. I am no fan of "worse is better", and particularly its poster child, SQL. The world's economic output would be substantially higher (5%?) if our industry had settled on almost anything other than SQL for relational databases.

So much of the design of *almost everything* in our industry is a reaction to SQL. ORMs fucking *everywhere* so you don't have to use SQL. Bad query and database design. Inefficient system designs that use ORMs rather than relations. NoSQL databases. Countless hours on hours of developer time trying to work out how to write something in SQL that would be trivial in, say, Datalog.

If I had $5 million to invest in a startup, I would hire as many of the core Postgres devs as I could to make a new database with all the sophistication of Postgres, but based on Datalog (or something similar). (Or maybe add Datalog to Postgres.) If that could get traction, it would lead in a decade to a revolution in productivity in our industry.
Re: The tragedy of SQL
They are making a decent decision. SQL is a *fucking terrible* language, which I don't blame them for not wanting to learn.

The whole industry, programming languages, infrastructure, everything would have developed differently if relations were a natural, pleasurable thing to use in any programming language. Like an Array, or a Hash.

On Sep 13, 2021, 22:45 -0700, Hemil Ruparel wrote:

> SQL is not the problem. The problem is the devs. I love SQL. I hate ORMs.
> The problem with databases is people refuse to treat them as the entity
> they are and want to use their beautiful OO system. The problem is
> databases are not OO. We need to recognize that and treat databases as
> databases.
Re: The tragedy of SQL
On 9/13/21 11:51 PM, Guyren Howe wrote:

> They are making a decent decision. SQL is a *fucking terrible* language,
> which I don't blame them for not wanting to learn.
>
> The whole industry, programming languages, infrastructure, everything
> would have developed differently if relations were a natural, pleasurable
> thing to use in any programming language. Like an Array, or a Hash.
>
> On Sep 13, 2021, 22:45 -0700, Hemil Ruparel wrote:
>
>> SQL is not the problem. The problem is the devs. I love SQL. I hate
>> ORMs. The problem with databases is people refuse to treat them as the
>> entity they are and want to use their beautiful OO system. The problem
>> is databases are not OO. We need to recognize that and treat databases
>> as databases.

All languages are fucking terrible. There are thousands of them because some people bump into a feature they don't like and run off and make another fucking terrible language. For the love of God, please don't be one of those people. The rest of us find languages we can abide and do productive things with, using the features we like and avoiding those we don't. I've always felt it was no small miracle the vendors managed to agree on ODBC/JDBC driver specs (even though the SQL language definition is "more like guidelines"). Go scream at the DOM and JavaScript.
Re: The tragedy of SQL
Many languages are awesome. I'm always astonished at what great things people have come up with over the years; it's been a wonderfully fertile field. We would certainly not be better off if we'd just buckled down and used COBOL and FORTRAN... or even relatively good languages like C, APL, and Lisp.

It is certainly possible to change too lightly, for small reasons. That doesn't mean that forever enduring the same problems is a good idea.

On Tue, Sep 14, 2021 at 2:18 AM Rob Sargent wrote:

> On 9/13/21 11:51 PM, Guyren Howe wrote:
>
>> They are making a decent decision. SQL is a *fucking terrible* language,
>> which I don't blame them for not wanting to learn.
>>
>> The whole industry, programming languages, infrastructure, everything
>> would have developed differently if relations were a natural,
>> pleasurable thing to use in any programming language. Like an Array, or
>> a Hash.
>>
>> On Sep 13, 2021, 22:45 -0700, Hemil Ruparel wrote:
>>
>>> SQL is not the problem. The problem is the devs. I love SQL. I hate
>>> ORMs. The problem with databases is people refuse to treat them as the
>>> entity they are and want to use their beautiful OO system. The problem
>>> is databases are not OO. We need to recognize that and treat databases
>>> as databases.
>
> All languages are fucking terrible. There are thousands of them because
> some people bump into a feature they don't like and run off and make
> another fucking terrible language. For the love of God, please don't be
> one of those people. The rest of us find languages we can abide and do
> productive things with, using the features we like and avoiding those we
> don't. I've always felt it was no small miracle the vendors managed to
> agree on ODBC/JDBC driver specs (even though the SQL language definition
> is "more like guidelines"). Go scream at the DOM and JavaScript.

--
Ray Brinzer