[GENERAL] Deleting PostGres ID Under Windows XP?
Somewhere along the line, the install of one of my apps created a user ID named "PostGres", I assume in connection with installing the PostGres database manager. Now I no longer have the option of having the system just boot up into my own ID: I have to select my ID from a login screen that displays my own ID and "PostGres". Googling, I get the impression that PostGres's password is not really intended to be known to the user. I'd like to drive a stake through this thing's heart. I know I can do it via Control Panel | User Accounts | Delete the account. The question: am I going to regret doing this? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Deleting PostGres ID Under Windows XP?
RE/ >> Am I going to regret doing this? > > If you want to keep using Postgres, yes. Search the web. Suspicion... *Confirmed*. Thanks!
[GENERAL] Getting data out of DB with bad WAL files.
Hi, I've got a DB where the WAL files were lost. I know I've lost some data, but is there any way to get what is in the primary DB back out again? PG won't start with bad WAL files, so... :( Thanks for the help. GB -- GB Clark II | Roaming FreeBSD Admin [EMAIL PROTECTED] | General Geek CTHULU for President - Why choose the lesser of two evils? ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://www.postgresql.org/search.mpl
Re: [GENERAL] Questions about tuning on FreeBSD...
On Monday 10 September 2001 18:52, Philip Hallstrom wrote: > Hi all - > I have some questions about tuning the various kernel parameters > on FreeBSD. I've read Bruce's article a couple of times and the various > parts of the documentation, but I still have some questions and was hoping > people here could confirm/deny my assumptions. > > The machine in question has 512 megs of ram and doesn't do anything else > significant other than postgresql. > Depending on the size of your DB, this should do OK... > As I understand there are two major tuneable parameters: > > - shared buffer cache: which I can think of as a sort of RAM-based > disk cache of recently accessed tables (or parts of tables). Ideally this > would be large enough to hold the entire database. The goal is to make > this large enough to hold the most commonly accessed tables. I run with shared buffers = 5120 > - sort memory batch size: this is the amount of memory that *each backend* > uses to do its sorts/merges/joins. If the backend needs more than this > then it writes to temporary files. Again the goal would be to make all > your sorts/merges/joins fit in this size. Sort Mem = 4096 > The overall goal is to give enough memory to postgresql so that it writes > to disk as little as possible while making sure that the OS as a whole > doesn't have to start swapping. So as a starting point could one install > the OS, let it run for a bit, see how much memory it's using, see how much > memory is left and assign most of that to postgresql? > > > Regarding the KERNEL parameters. I can follow the general rules mentioned > in the docs for configuring things no problem. However is there any > danger in doing that? Can they be too big? Or can they only be too big > in conjunction with giving postgresql too much memory? In other words if I > set them to something, but don't run postgresql have I affected how the OS > will run by itself? 
> > Is there a way to determine the maximum number of backends that can be run > given a given amount of RAM? > > Also, if anyone has a FreeBSD box with 512 ram, what are your kernel settings?

# SYSV stuff
options SYSVSHM         # SYSV-style shared memory
options SHMMAXPGS=12288
options SHMMAX="(SHMMAXPGS*PAGE_SIZE+1)"
options SHMSEG=256
options SHMMNI=512
options SHMMIN=1
options SYSVMSG         # SYSV-style message queues
options SYSVSEM         # SYSV-style semaphores
options SEMMNI=256
options SEMMNS=512
options SEMMNU=256
options SEMMAP=256

Note: some of these might be WAY TOO high! I could not find enough docs to tell me, so I just added extra everywhere. Maybe some of the experts can pick this apart... > > Thanks! > > -philip On that note: could some of the PostgreSQL experts take a look at my number of buffers and the kernel config and tell me if I'm running too much of anything? GB
[GENERAL] Re: Auto Increment
I had looked everywhere to find the answer to this question. Now that I look at the FAQ again, I see it in there. I tried searching the FAQ for "auto increment" when it turns out to be "auto-increment" on the FAQs page. Being a MySQL user who was dead-ended and had no choice but to find a new database with transactions or something similar because of MySQL's limitations, I had several questions that seemed difficult to answer during the switchover. 1. Auto increment - now the serial datatype 2. 'SHOW TABLES' in mysql - now '\d' (I got lucky to stumble upon this.) 3. 'DESCRIBE' in mysql - now '\d tablename' (same as above.) As more people realize MySQL's limitations, I think there will be more people making the switchover. As a result, I'm sure many people find themselves asking the same questions I did. I would recommend these 3 things be easy to find in the FAQ for MySQL-familiar users. My 2 cents... On Tue, 26 Dec 2000, Brett W. McCoy wrote: > On Tue, 26 Dec 2000 [EMAIL PROTECTED] wrote: > > > Is there a way set a primary key to Auto Increment like you can > > with MySQL? > > See the documentation on CREATE SEQUENCE, and also on CREATE TABLE and the > SERIAL type. > > [An aside: this is something definitely that qualifies as a frequently > asked question, as this is like the 4th time in a week this question has > been asked. Would posting a FAQ list on a regular basis to the list be > helpful for things like this, like is done on the Perl newsgroups?] > > -- Brett > http://www.chapelperilous.net/~bmccoy/ > --- > Hanson's Treatment of Time: > There are never enough hours in a day, but always too many days > before Saturday. >
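For later readers, the auto-increment mapping the FAQ points at can be sketched like this (table and column names are made up for illustration); SERIAL is shorthand for an integer column with a sequence-backed default:

```sql
-- MySQL AUTO_INCREMENT equivalent in Postgres:
CREATE TABLE items (
    id   SERIAL PRIMARY KEY,   -- implicitly creates sequence items_id_seq
    name text
);

-- which is roughly shorthand for:
--   CREATE SEQUENCE items_id_seq;
--   CREATE TABLE items (
--       id   integer PRIMARY KEY DEFAULT nextval('items_id_seq'),
--       name text
--   );
```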
[GENERAL] [Question] Help me with programming in C/C++
Hi. I have a question regarding programming in C and PostgreSQL. Is there any way that I can save a couple of results to one PGresult * ?? Like this:

BEGIN;
DECLARE portal1 CURSOR FOR select * from item where item = 'aa';
DECLARE portal2 CURSOR FOR select * from item where item = 'bb';
DECLARE portal3 CURSOR FOR select * from item where item = 'cc';
res = PQexec(conn, "FETCH ALL in portal*"); // Like this
END;

portal1 should go first. I really need this function. I'm a C programmer but not an expert. If anyone knows how to do this in C/C++... Please let me know. Thanks in advance.
[GENERAL] [Question] Help me with programming in C/C++
Hi. I have a question regarding programming in C and PostgreSQL. Is there any way that I can save a couple of results to one PGresult * ?? I got replies from some guys... but... OK, this is what I'm going to do. I'm developing a search engine. Let's assume a user enters the keywords "Search Engine". My program will separate each token, and Postgres will search as below. Something like this:

BEGIN;
DECLARE portal1 CURSOR FOR select * from db where description ~* '^search engine$';
DECLARE portal2 CURSOR FOR select * from db where description ~* 'search engine';
DECLARE portal3 CURSOR FOR select * from db where description ~* 'search' and description ~* 'engine';
DECLARE portal4 CURSOR FOR select * from db where description ~* 'search' or description ~* 'engine';
res = PQexec(conn, "FETCH ALL in portal*"); // Like this
END;

portal1 should go first, portal4 should go last. And I don't wanna use any insert/copy command (don't wanna make any new temporary table, because there are lots of queries). I really need this function. I'm a C programmer but not an expert. Any idea? If anyone knows how to do this in C/C++... Please let me know. Thanks in advance.
No Subject
Hi all, How can I pass a length-limited varchar(20) to an SQL function? I have:

create table bb ( c varchar(20), d int4 );
create function ins ( ???varchar???, int4) returns int4 as '
    insert into bb (c, d) values ($1, $2);
    select 1;
' language 'sql';

but if I: select ins('foo', 1); then:

select * from bb;
  c  |   d
-----+--------
 foo | 340322
(1 row)

340322 != 1 ? How can I fix this? Thank you for reading! And many many thanks for answering! Bye kep:)
Re: [GENERAL] users in Postgresql
>> The contents of the pg_shadow table are written through to a file on disk >> called pg_pwd, so all the backends can easily access it. However, this >> write-through is not automatic. The create user and alter user commands >> take care of that, but if you update pg_shadow directly, your changes will >> not be seen by currently active backends. > > Your changes never get to the file, ever, not just current backends. > > CREATE USER sql command updates the file, but an UPDATE on pg_shadow > does not. IMHO, that's a bug: it's not forbidden by rule to update or insert into pg_shadow, but if I do that, I get inconsistent authentication data. Why not revoke INSERT and UPDATE on pg_shadow? Or better: why not use a trigger on pg_shadow to handle pg_pwd correctly? The trigger code is always in the "create/alter user" command handler. The code should be as near as possible to the data! > We use a file because the postmaster does the password authentication, > and we don't have any database connection in the postmaster. pg_shadow is a file too, but not in text format like pg_pwd. Gerald.
RE: [GENERAL] Announce: Postgres Access Control Tool
Hi Mark, > Is there any way I can get this to run under Tcl/Tk on Windows NT? I tried > simply loading ./paco into wish but it failed looking up /usr/ > references. I haven't tested PACO on Windows. But this should work: PACO needs two Tcl/Tk extensions loaded as shared object files: libpgtcl.so - Tcl/Tk Postgres interface libtix.so - widget frameset On Windows these files are LIBPGTCL.DLL and LIBTIX.DLL. The first is part of Postgres and the second can be downloaded from ftp://www.neosoft.com/pub/tcl/sorted/packages-7.6/unknown/tixwin41p6bin.zip You have to edit the PACO source and correct the path. (Search for the "load" command at the top of the file.) Please let me know if everything worked OK. Gerald
[GENERAL] Building PL/Perl procedural language without --with-perl configure flag
Hi, I am trying to build the PL/Perl procedural language for my PostgreSQL server installation, but I want to do it without passing the --with-perl flag to PostgreSQL's configure script. It would also help if someone could explain the meaning of the --with-perl flag in detail, as it might help me determine the actual process of installing PL/Perl. There are two parts to the question: first, how do I get the source of PL/Perl, and second, how do I build it against an already built version of PostgreSQL, i.e. when I already have the (bin, lib, share) directories for PostgreSQL? And what prerequisites are needed to install PL/Perl in the first place? Thanks
[GENERAL] Re: Building PL/Perl procedural language without --with-perl configure flag
The already installed Postgres edition was built using the same installation procedure as mentioned in the docs, but without the --with-perl flag. The reason I ask is that I want to install PL/Perl as a separate extension, as one does with PostGIS, and not along with the Postgres install. Is there a way to solve the problem of building the PL/Perl language by somehow creating a custom Makefile, as we have for contrib extensions or PostGIS etc., and then giving it the path of pg_config, hence leading to an installation?
[GENERAL] Making changes to PostgreSQL's configure logic so as to have contrib modules installed in a specific directory and make them use PGXS?
Hi, I want to configure my PostgreSQL installation in such a manner that the contrib modules of the PostgreSQL distribution are stored in a specific directory and use the PGXS extension-management system supported by Postgres. As they are currently packaged along with Postgres and follow a specific directory structure, they are installed along with Postgres, but I don't want that. I want all those contrib module extensions to use a separate pg_config, and hence want their Makefiles to use PGXS. How do I go about doing that? Thanks
[GENERAL] Testing an extension exhaustively?
Hi, If I have the PostgreSQL server installed on my machine, i.e. I have all the bin, lib and share directories of PostgreSQL, and I have the libs and SQL scripts installed for one of the contrib extensions, let's say "chkpass", how does one go about testing this extension exhaustively on the server? I ask this because I would want to do it manually first and then go about automating the testing of this extension. So rather than just executing CREATE EXTENSION and DROP EXTENSION, I want some solid evidence that the extension is working fine under all circumstances and is not crashing the server at any moment. Looking for some new strategies and ideas to come my way on this. Thanks.
[GENERAL] Using different GCC, CFLAGS, CCFLAGS and CPPFLAGS to compile Postgres and PostGIS?
Hi, I have been wondering about this question for a while with no definite answer to it. Can someone explain the following in detail to clear it up for me: What can go wrong, or is it acceptable, if I build Postgres from source with, let's say, GCC 4.x.y and some specific CFLAGS, CPPFLAGS and CCFLAGS, and then try to install an extension for Postgres such as PostGIS using a different version of GCC, let's say GCC 4.x.z, with varying CFLAGS, CCFLAGS and CPPFLAGS, and then follow the same process for other non-contrib extensions? I want to know the theory behind this; it would also help me greatly to understand the systems aspect of the process. Thanks
[GENERAL] Re: Using different GCC, CFLAGS, CCFLAGS and CPPFLAGS to compile Postgres and PostGIS?
Also, can you explain: if I built Postgres from source on one platform, let's say RHEL 6, and deployed its artifacts (binaries, libs and share) on CentOS, and tried building extensions against Postgres on CentOS, are there any dangers in doing that? On Wed, Feb 1, 2017 at 8:34 PM, postgres user wrote: > Hi, > > I am wondering about this question for a while with no definite answer to > it, can someone explain me in detail to clear me out on the following > question : > > What can go wrong or is it acceptable if I build Postgres from source with > let's say GCC 4.x.y and some specific CFLAGS, CPPFLAGS and CCFLAGS and when > I try to install an extension for Postgres such as PostGIS using a > different version of GCC lets say GCC 4.x.z with varying CFLAGS, CCFLAGS > and CPPFLAGS and if I follow the same process for other non-contrib > extensions? I want to know the theory behind this and it would serve me a > great help to understand the systems aspect of the process as well. > > Thanks >
[GENERAL] Do Postgres exceptions rise up the stack?
A basic question about raising exceptions in Postgres: If Function A calls Function B, and Func B raises an exception, will the exception roll back the transaction in Func A by default? Or do I need to trap and re-raise the exception in Func A? Thanks.
Re: [GENERAL] SQL problem..
Your query won't work because there is no single Price associated with a range of dates. It doesn't make sense. Do you mean to select AVG(Price)? On 6/28/07, Bauhaus <[EMAIL PROTECTED]> wrote: Hello, I'm an Access/SQL novice and I have an SQL problem: I have the following table Price:

FuelID  PriceDate   Price
LPG     1/05/2007   0,2
LPG     13/05/2007  0,21
SPS     2/05/2007   1,1
SPS     15/05/2007  1,08

And I have to make the following query:

FuelID  PriceDate_from  PriceDate_To  Price
LPG     1/05/2007       13/05/2007    0,2
SPS     2/05/2007       15/05/2007    1,1
LPG     13/05/2007                    0,21
SPS     15/05/2007                    1,08

I tried this: SELECT FuelID, min(FuelDate) AS Pricedate_from, max(FuelDate) AS PriceDate_to FROM Price GROUP BY FuelID; Problem is, when I put Price in the select, I get the error 'Price not part of an aggregate function' :s Either way, it doesn't work: I only have one FuelDate_from and one FuelDate_to if I use min & max, while there should be several from...to... dates for a particular fuel. How can I solve this?
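One way to express the wanted from/to ranges (not from the original thread, and assuming a PostgreSQL version with window functions, 8.4 or later; in the poster's Access context the same idea would need a correlated subquery) is LEAD() partitioned by fuel:

```sql
-- Sketch: each row's PriceDate_to is the next PriceDate for the same fuel.
-- Table and column names follow the post; NULL marks the still-open range.
SELECT FuelID,
       PriceDate AS PriceDate_from,
       LEAD(PriceDate) OVER (PARTITION BY FuelID ORDER BY PriceDate)
           AS PriceDate_to,
       Price
FROM Price
ORDER BY PriceDate_from, FuelID;
```

This keeps Price on every row, which is exactly what the GROUP BY version could not do.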
Re: [GENERAL] Do Postgres exceptions rise up the stack?
How about this scenario:

func A()
begin
    x = func B();
    y = func C();
    z = func D();
end

Where func A, B, C, and D all update the db. If an exception is raised in func D(), will all the transactions in the other children be rolled back? Or do I need to add code to enable this? On 6/30/07, Wiebe Cazemier <[EMAIL PROTECTED]> wrote: On Saturday 30 June 2007 23:14, Postgres User wrote: > A basic question about raising exceptions in Postgres: > > If Function A calls Function B > > and Func B raises an exception, will the exception roll back the > transaction in Func A by default? Or do I need to trap and re-raise > the exception in Func A? > > Thanks. Any exception aborts the transaction. That's how exceptions work. If you don't catch them, they bubble all the way to the surface. Otherwise it would be too much like if-statement error checking.
Re: [GENERAL] Do Postgres exceptions rise up the stack?
Where func A, B, C, and D all update the db. If an EXCEPTION is raised in func D(), will all the transactions in the other children be rolled back? Or do I need to add code to enable this? On 6/30/07, Postgres User <[EMAIL PROTECTED]> wrote: How about this scenario:

func A()
begin
    x = func B();
    y = func C();
    z = func D();
end

Where func A, B, C, and D all update the db. If an exception is raised in func D(), will all the transactions in the other children be rolled back? Or do I need to add code to enable this? On 6/30/07, Wiebe Cazemier <[EMAIL PROTECTED]> wrote: > On Saturday 30 June 2007 23:14, Postgres User wrote: > > > A basic question about raising exceptions in Postgres: > > > > If Function A calls Function B > > > > and Func B raises an exception, will the exception roll back the > > transaction in Func A by default? Or do I need to trap and re-raise > > the exception in Func A? > > > > Thanks. > > Any exception aborts the transaction. That's how exceptions work. If you don't > catch them, they bubble all the way to the surface. Otherwise it would be too > much like if-statement error checking.
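A minimal PL/pgSQL sketch of the behavior described in the thread (the function bodies here are made up; only the names follow the posts): an unhandled exception raised in func D aborts the whole statement, so updates made earlier by funcs B and C in the same transaction are rolled back too, unless a caller traps the error.

```sql
CREATE FUNCTION func_d() RETURNS void AS $$
BEGIN
    UPDATE t SET n = n + 1;      -- some work...
    RAISE EXCEPTION 'boom';      -- ...then an error
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION func_a() RETURNS void AS $$
BEGIN
    PERFORM func_b();            -- rolled back when func_d raises
    PERFORM func_c();            -- rolled back too
    PERFORM func_d();            -- raises; nothing above survives
-- To swallow the error instead, add a handler; note that trapping rolls
-- back only the work done inside this block, not the outer transaction:
-- EXCEPTION WHEN others THEN
--     RAISE NOTICE 'caught: %', SQLERRM;
END;
$$ LANGUAGE plpgsql;
```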
[GENERAL] General search problem - challenge
I have a table of around 6,000 places in the world. Every time my server receives a ping, I grab the content of an article from an RSS feed. Then I search the article for the presence of any of the 6000 terms. A typical article is around 1200 words. I don't need to save the article in a table and the search is performed only once, so it's not about FTS. Any thoughts on the best way to execute these searches using a traditional language like C++?
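Not from the thread, but one simple in-memory approach (names and data made up): load the 6,000 terms into a hash set and slide a word n-gram window over the article; with ~1,200 words that is only a few thousand lookups per article. A sketch in Python — the same idea maps directly to a C++ std::unordered_set:

```python
# Sketch: match a fixed term list against an article with set lookups.
# `terms` stands in for the ~6,000 place names; multi-word names are
# handled by checking word n-grams up to `max_words` long.
def find_terms(article, terms, max_words=3):
    words = [w.strip(".,;:!?\"'()").lower() for w in article.split()]
    found = set()
    for n in range(1, max_words + 1):
        for i in range(len(words) - n + 1):
            phrase = " ".join(words[i:i + n])
            if phrase in terms:
                found.add(phrase)
    return found

places = {"paris", "new york", "buenos aires"}
print(find_terms("A storm hit Paris and New York overnight.", places))
```

For much larger term lists or articles, an Aho-Corasick automaton does the whole scan in one pass, but at this scale the set approach is usually fast enough.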
[GENERAL] Fastest way to join materalized view to child tables
Hi, I have a quasi-materialized view that's maintained by INS, UPD, and DEL triggers on several child tables. The tables involved have different structures, but I needed a single view for selecting records based on a few common fields. This approach is much faster than querying the separate tables and trying to correlate and sort the results. materialized view - view_a child tables - table_a, table_b, table_c Here's my question: what's the fastest way to retrieve rows from each of the child tables after I get results from view_a? I don't like using temp tables in Postgres (too much pain in the past), so first selecting into a temp table which could subsequently be joined against the child tables isn't appealing to me. The result set from materialized view_a will never exceed 60 rows, so I'm thinking about this: a) LOOP on a SELECT FROM view_a b) for each record, add the row id to one of 3 comma-delimited strings (one per child table) c) perform a SELECT WHERE IN (delimited_string) on each child table Any comments? I guess I'm mainly concerned about the speed of the FOR SELECT LOOP...
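Not from the thread, but one alternative worth noting (column names here are guesses, not from the post): since view_a returns at most 60 rows, the loop-and-build-a-string step can be skipped entirely by joining each child table against the view, or by using IN with a subquery, and letting the planner do the batching:

```sql
-- Join form, one statement per child table:
SELECT t.*
FROM table_a t
JOIN view_a v ON v.row_id = t.row_id;

-- Or the IN form, closest in spirit to the comma-delimited-string plan:
SELECT *
FROM table_a
WHERE row_id IN (SELECT row_id FROM view_a);
```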
[GENERAL] How to extract a substring using Regex
Hi, I'm new to Regex in Postgres. Can someone give me a quick pointer on how I'd SELECT the substring between '<name>' and '</name>' in a field? Sample field data: address city here <name>Rogers, Jim</name> zip code place and I'd like the SELECT to return only: Rogers, Jim Thanks!
Re: [GENERAL] How to extract a substring using Regex
Yes, I read the manual. I think I had a problem because of the special chars (< / >) that I'm trying to search for... Still looking for the right syntax. On 8/23/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote: > > On Aug 23, 2007, at 19:33 , Postgres User wrote: > > > I'm new to Regex in Postgres. Can someone give me a quick pointer on > > how I'd SELECT the substring between '<name>' and '</name>' in > > a field? > > Check out regexp_replace: > > http://www.postgresql.org/docs/8.2/interactive/functions- > matching.html#FUNCTIONS-POSIX-REGEXP > > One of the forms of substring might work for you, too. > > Michael Glaesemann > grzm seespotcode net > >
Re: [GENERAL] How to extract a substring using Regex
> > Yes, I read the manual. I think I had a problem because of the > > special chars (< / >) that I'm trying to search for... Still looking > > for the right syntax. > > Why don't you show us what you've tried and the errors you're > getting? That way we can help you figure out what you're doing wrong > rather than just give you an answer. > > Michael Glaesemann

SELECT substring(data_field from '<name>(.)</name>')
FROM myTable
[GENERAL] Simple Regex question, hoping for direct answer (no Socratic approach necessary)
I'm new to Regex in Postgres. Can someone give me a quick pointer on how I'd SELECT the substring between '<name>' and '</name>' in a field? Sample field data: address city here <name>Rogers, Jim</name> zip code place and I'd like the SELECT to return only: Rogers, Jim Thanks!
Re: [GENERAL] How to extract a substring using Regex
On 8/23/07, Michael Glaesemann <[EMAIL PROTECTED]> wrote: > > On Aug 23, 2007, at 20:01 , Postgres User wrote: > > >>> Yes, I read the manual. I think I had a problem because of the > >>> special chars (< / >) that I'm trying to search for... Still > >>> looking > >>> for the right syntax. > >> > >> Why don't you show us what you've tried and the errors you're > >> getting? That way we can help you figure out what you're doing wrong > >> rather than just give you an answer. > >> > >> Michael Glaesemann > > > > SELECT substring(data_field from '<name>(.)</name>') > > FROM myTable > > Looks like you might want to brush up on regular expressions in > general. Your expression will match a single character between the > tags. You might want to try something like .+ instead. > > Michael Glaesemann You're right, that was a typo, I didn't copy and paste. I found the problem: I was using 2 forward slashes instead of a backslash + forward slash when pattern matching. The correct regex to extract my substring: substring(data_field from '<name>(.+)<\/name>')
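The same extraction can be checked outside the database with Python's re module (sample data taken from the earlier post, with the tags assumed to be <name>...</name>; note that the backslash before the forward slash in the SQL pattern is not actually required, since / is not special in a regex):

```python
import re

# Sample field data from the post, with the <name> tags intact.
data = "address city here <name>Rogers, Jim</name> zip code place"

# Non-greedy .+? stops at the first closing tag if several appear.
m = re.search(r"<name>(.+?)</name>", data)
print(m.group(1))  # -> Rogers, Jim
```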
[GENERAL] public schema doubt
Hello, I did a migration from 8.2.4 to 8.2.5; I used pg_dumpall to back up the whole db and then restored it into 8.2.5. In my 8.2.4 db, I don't have a public schema (it was dropped when I created the db, so only myschema is there), but when I restore to 8.2.5, I find that it created a public schema for me. Why is it doing this? I thought it should restore back exactly what I backed up from 8.2.4? Should I just leave the public schema there? Will it cause any problem? In the postgres doc http://www.postgresql.org/docs/8.2/interactive/ddl-schemas.html (5.7.2) it says that if I create a table without specifying any schema name, it will be put into the public schema, so I tested it in my case, but it does not do that; the table is created under myschema. Strange. And I checked the search_path; it shows "$user",public . Could someone explain it to me? Thanks Louis
Re: [GENERAL] public schema doubt
Hello Martin, Yes, SHOW search_path shows "$user",public . Oh yes, I get it now: the "$user" will take priority. Another question: this public schema, in the usual practice, do you delete it or just leave it there and create your own schema? Regards Louis - Original Message From: Martin Gainty <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Sent: Friday, November 16, 2007 3:17:40 PM Subject: RE: [GENERAL] public schema doubt Hello Louis SHOW search_path if public is not on search_path or doesn't come first then SET search_path TO public,$user; HTH/ Martin Hello, I did a migration from 8.2.4 to 8.2.5, I used pg_dumpall to backup all the db and then restore it into 8.2.5. In my 8.2.4 db, I don't have public schema (it was dropped when I create the db, so only myschema is there), but when I restore to 8.2.5, I found that it created a public schema for me. So why it doing this for me? I thought it should restore back excatly the same what I backup from 8.2.4? Should I just let the public schema there? Will it cause any problem? In the postgres doc http://www.postgresql.org/docs/8.2/interactive/ddl-schemas.html (5.7.2) it said if I create a table without specify any schema name, it will put it into public schema, so I test it in my case, but it not doing it, the table is create under myschema, strange. And I check the search_path, it show "$user",public . Could someone explain it to me? Thanks Louis
[GENERAL] Postgres file structure doubt
Hi everyone, Got a doubt about my setup; please correct me if I'm wrong. In my postgres setup:

/usr/local/pgsql (where postgres is installed)
/usr/local/pgsql/data (PGDATA)
/database/pg/mydata (tablespace used for all the tables I create)
/database/pg/myindex (tablespace used for all the indexes I create)

1) In this setup, the actual user data are stored in PGDATA, and the table structure & indexes are stored in /database/pg/mydata & /database/pg/myindex. Am I correct? 2) So to back up (not pg_dump), I should make sure it includes these 2 folders, right? /usr/local/pgsql/data /database/pg/ 3) I think my setup is not quite right; I should move PGDATA to /database/pg, right? Regards Louis
Re: [GENERAL] Postgres file structure doubt
Hello Scott, Thanks for clearing up my doubt. Yes, I'm planning to do PITR backup. Another question: from what I understand, when there are data transactions going on, postgres will store them in the log files in /usr/local/pgsql/data/pg_xlog; when will this data finally be saved into /database/pg/mydata? For e.g., if I did a pg_dump backup a week ago, and the system crashed today, but I'm able to get all the files in /usr/local/pgsql/data/pg_xlog, can I still recover my database to the latest state? Regards Louis - Original Message From: Scott Marlowe <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Monday, November 19, 2007 5:38:12 PM Subject: Re: [GENERAL] Postgres file structure doubt On Nov 19, 2007 11:24 AM, <[EMAIL PROTECTED]> wrote: > > Hi everyone, > > Got a doubt in my setup, please correct me if I'm wrong. > > In my postgres setup, > /usr/local/pgsql (where postgres install) > /usr/local/pgsql/data (PGDATA) > /database/pg/mydata (tablespace which use for all the table I create) > /database/pg/myindex (index which use for all the table I create) > > 1) In this setup, the actual user data are store in PGDATA > the table structure & index are store in /database/pg/mydata & > /database/pg/myindex > > Am I correct? The data that defines users, and tables, and other objects are in PGDATA. The data from users are stored in mydata/myindex. Not sure if that matches what you wrote or not... > 2) So to backup (not pg_dump), I should make sure it include these 2 folder > right? > > /usr/local/pgsql/data > /database/pg/ To backup, you should generally use pg_dump. Are you planning on using PITR? Are you planning on shutting down your database when you back it up? If you're not using PITR, you must shut down postgresql to take a file system backup. > 3) I think my setup is not quite right, I should move the PGDATA to > /database/pg right? Sorta a personal choice really.
[GENERAL] Simple math statement - problem
I have a large function that's doing a number of calcs. The final return value is wrong for a simple reason: any division statement where the numerator is less than the denominator returns zero. Each of these statements returns 0, even when properly cast:

select 1/100
select Cast(1 / 100 As decimal)
select Cast(1 / 100 As numeric(6,2))

How can I write statements that return a decimal? The problem doesn't appear to be that Postgres won't return decimal values, as these statements return the correct value:

select .01
select Cast(.01 As decimal)
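The cause (not spelled out in the post): 1/100 is integer division in Postgres, and Cast(1/100 AS decimal) casts the already-truncated result. The fix is to make one operand non-integer before dividing, e.g. SELECT 1::numeric / 100, which yields 0.01. The same trap, sketched in Python with Decimal standing in for numeric:

```python
from decimal import Decimal

# Integer / integer truncates; casting the *result* is too late.
print(1 // 100)                   # -> 0    (like SELECT 1/100)
print(Decimal(1 // 100))          # -> 0    (like Cast(1/100 As decimal))

# Convert an operand before dividing (like SELECT 1::numeric / 100):
print(Decimal(1) / Decimal(100))  # -> 0.01
```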
Re: [GENERAL] Record variable not behaving as expected (bug?)
Sure, I'd be glad to provide any additional code or info that I can. This issue took me quite a while to track down from the 200-line function that was breaking. When I execute the function defined above, here's what I see returned: select divide() ?column? 0 Hopefully someone can test on Windows to validate. On Nov 30, 2007 8:25 AM, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote: > On Fri, Nov 30, 2007 at 08:20:30AM -0800, Postgres User wrote: > > tom- did you test this on Windows? you can ignore the namespace- i'm > > using it consistently but removed from the test code to simplify > > > > this problem occurs repeatedly in my code. my guess is that it's a > > bug in the windows build. > > Seems rather unlikely to affect just windows. Can you post a script > that you can run against a blank database that shows the problem. > complete with output on your machine. > > > Have a nice day, > -- > Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > > Those who make peaceful revolution impossible will make violent revolution > > inevitable. > > -- John F Kennedy > > -BEGIN PGP SIGNATURE- > Version: GnuPG v1.4.1 (GNU/Linux) > > iD8DBQFHUDmUIB7bNG8LQkwRAvNMAKCQl8+bPo3bca/a33T+WVfQ/Ng2yQCdG+H5 > wZyc/alsznWsWck20lheR00= > =RVrJ > -END PGP SIGNATURE- > > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Record variable not behaving as expected (bug?)
tom- did you test this on Windows? you can ignore the namespace- i'm using it consistently but removed from the test code to simplify this problem occurs repeatedly in my code. my guess is that it's a bug in the windows build. On Nov 30, 2007 8:13 AM, Tom Lane <[EMAIL PROTECTED]> wrote: > Martijn van Oosterhout <[EMAIL PROTECTED]> writes: > > On Fri, Nov 30, 2007 at 12:23:31AM -0800, Postgres User wrote: > >> However, I have found that my record variable is not assigned proper > >> field-level datatypes. As a result, I'm unable to write basic math > >> calcs in pg/sql without a lot of typecasting. > > > What version are you running. On my 8.1.9 test system it returns -1.00 > > as expected. > > Works for me too, in all branches back to 8.0. However, I noticed that > the test function references "test.table2" not just "table2", which > makes me wonder if maybe this is picking up some other table2 than the > OP thinks. A test.table2 with integer columns would explain the result. > >regards, tom lane > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
[GENERAL] Record variable not behaving as expected (bug?)
According to the docs, record variables "take on the actual row structure of the row they are assigned during a SELECT or FOR command." However, I have found that my record variable is not assigned proper field-level datatypes. As a result, I'm unable to write basic math calcs in PL/pgSQL without a lot of typecasting. When I execute the function below, a basic math statement fails unless I explicitly typecast the record's field values. This isn't what I expected; PostgreSQL should correctly typecast each field in the record var automatically in the SELECT statement. (Note: did not test with a row variable, and I prefer to use the record datatype) CREATE TABLE table2 ( "s_val" NUMERIC(6,2), "e_val" NUMERIC(6,2) ); CREATE FUNCTION divide () RETURNS numeric AS $body$ declare retval numeric(6,2); rec record; begin SELECT * INTO rec FROM test.table2 LIMIT 0; rec.s_val = 100.0; rec.e_val = 101.0; -- returns correct value w/ casting: --retval = ((rec.s_val::numeric(6,2) - rec.e_val::numeric(6,2)) / rec.s_val::numeric(6,2)) * 100; -- returns incorrect value, as if fields have invalid datatypes: retval = ((rec.s_val - rec.e_val) / rec.s_val) * 100; return retval; end $body$ LANGUAGE 'plpgsql'; ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Simple math statement - problem
The problem turned out to be related to my function.. Given this table: CREATE TABLE "table2" ( "s_val" numeric(6,2), "e_val" numeric(6,2) ) WITH OIDS; The following function body will set retval to NULL: declare retval numeric(6,2); rec record; begin SELECT * INTO rec FROM table2 LIMIT 0; rec.s_val = 100; rec.e_val = 101; retval = (rec.s_val - rec.e_val) / rec.s_val; return retval; end However, if I explicitly typecast, then it returns the proper value: retval = (rec.s_val::numeric(6,2) - rec.e_val::numeric(6,2)) / rec.s_val::numeric(6,2); On Nov 29, 2007 9:47 PM, Gregory Williamson <[EMAIL PROTECTED]> wrote: > > > A quick experiment shows that if either numerator or denominator are > decimal, that is preserved in the end result. Probably true for basic math > operations in general. > > GW > > > > -Original Message- > From: [EMAIL PROTECTED] on behalf of Gregory Williamson > Sent: Thu 11/29/2007 10:37 PM > To: Postgres User; pgsql-general > Subject: Re: [GENERAL] Simple math statement - problem > > The question: > > > > How can I write statements that return a decimal? > > > > > billing=# select 1/100; > ?column? > -- > 0 > (1 row) > > As you said ... > > So make everything decimal: > billing=# select 1.0/100.0; > ?column? > > 0.0100 > > Or: > billing=# select 1::decimal/100::decimal; > ?column? > > 0.0100 > > I think that when you use integers you lose precision right out the gate. > Others can provide better insight I hope ... > > HTH, > > Greg Williamson > Senior DBA > GlobeXplorer LLC, a DigitalGlobe company > > Confidentiality Notice: This e-mail message, including any attachments, is > for the sole use of the intended recipient(s) and may contain confidential > and privileged information and must be protected in accordance with those > provisions. Any unauthorized review, use, disclosure or distribution is > prohibited. If you are not the intended recipient, please contact the sender > by reply e-mail and destroy all copies of the original message. 
> > (My corporate masters made me say this.) > > > > ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Record variable not behaving as expected (bug?)
Sorry, here's the code: CREATE TABLE table2 ( "s_val" NUMERIC(6,2), "e_val" NUMERIC(6,2) ); CREATE FUNCTION divide () RETURNS numeric AS $body$ declare retval numeric(6,2); rec record; begin SELECT * INTO rec FROM test.table2 LIMIT 0; rec.s_val = 100.0; rec.e_val = 101.0; -- returns correct value w/ casting: --retval = ((rec.s_val::numeric(6,2) - rec.e_val::numeric(6,2)) / rec.s_val::numeric(6,2)) * 100; -- returns incorrect value, as if fields have invalid datatypes: retval = ((rec.s_val - rec.e_val) / rec.s_val) * 100; return retval; end $body$ LANGUAGE 'plpgsql'; On Nov 30, 2007 12:31 PM, Postgres User <[EMAIL PROTECTED]> wrote: > Sure, I'd be glad to provide any additional code or info that I can. > This issue took me quite awhile to track down from the 200-line > function that was breaking. > > When I execute the function defined above, here's the output: > > select divide() >?column? > > 0 > > > Seems rather unlikely to affect just windows. Can you post a script > > that you can run against a blank database that shows the problem. > > complete with output on your machine. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Record variable not behaving as expected (bug?)
8.2 / Windows (a development-only pc) On Nov 30, 2007 12:50 AM, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote: > On Fri, Nov 30, 2007 at 12:23:31AM -0800, Postgres User wrote: > > However, I have found that my record variable is not assigned proper > > field-level datatypes. As a result, I'm unable to write basic math > > calcs in pg/sql without a lot of typecasting. > > What version are you running. On my 8.1.9 test system it returns -1.00 > as expected. > > Have a nice day, > -- > Martijn van Oosterhout <[EMAIL PROTECTED]> http://svana.org/kleptog/ > > Those who make peaceful revolution impossible will make violent revolution > > inevitable. > > -- John F Kennedy > > -BEGIN PGP SIGNATURE- > Version: GnuPG v1.4.1 (GNU/Linux) > > iD8DBQFHT87KIB7bNG8LQkwRAuzqAJ9jdaDv/rxz5pG8bdYvO9suxZLGZACeL6BY > ZMvLJ5nKREBIsBrdk4nE748= > =/aEm > -END PGP SIGNATURE- > > ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
[GENERAL] Abusing Postgres in fun ways.
I'm creating a data queue on top of postgres and I'm wondering if I've made an incorrect assumption about isolation or synchronization or some similar issue. Every item in the queue is given a unique ID from a sequence. CREATE TABLE data_queue ( sequence_num BIGINT PRIMARY KEY, sender_key BIGINT NOT NULL, datablob bytea ); I read from the queue by passing in the last _highest_seen_sequence_num to a stored procedure: SELECT * from data_queue WHERE sequence_num > _highest_seen_sequence_num ORDER BY sequence_num ASC Obviously with readers and writers racing I need some sort of synchronization. I've found the advisory locks and those did seem to be my best bet. I used explicit locking for a while but ran into an issue with our daily backups and ACCESS_EXCLUSIVE (which I might be able to revisit) I'm also trying to create a setup where there is basically no blocking, writers can always write, readers are not blocked by writers (though there may be a delay in what is visible to the reader). Before I dump a bunch of SQL on the list, my plan in short is to stage writes to a similar table: stage_data_queue, and then copy them all into a table visible by readers. 1 Writers get a shared advisory lock, get the next sequence_num and Insert one row, then release a shared advisory lock (in one stored procedure) 2 At some point there is a 'tick' and another thread gets the corresponding exclusive advisory lock (letting all in flight writes finish). Then copy all rows into another table visible to the readers, then Truncate the staging table, and release the exclusive lock. (all in one stored procedure) My fear is that there is still a race here because the writer (1) calls unlock at the end of the stored procedure, and thus there is a window before the row is committed, and (2) may end up truncating that data... I think I could fix this by leaving the (1) shared lock locked through the end of the stored procedure, and calling back unlocking it later. 
I might also be able to fix this with Explicit Locks because I assume those will get properly unlocked after the Insert is truly committed. Am I on the wrong track here? -JD
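The shared/exclusive discipline described above can be modeled in-process to check the reasoning. This is only a sketch: Python's threading stands in for pg_advisory_lock_shared / pg_advisory_lock, the names staging, visible, write, and tick are hypothetical, and the fix the post proposes (keep the shared lock held until the insert is actually committed) appears here as holding the lock across the entire write:

```python
import threading

class ShareExclusiveLock:
    """In-process stand-in for pg_advisory_lock_shared / pg_advisory_lock:
    any number of shared holders, OR exactly one exclusive holder."""
    def __init__(self):
        self._cond = threading.Condition()
        self._shared = 0
        self._exclusive = False

    def acquire_shared(self):
        with self._cond:
            while self._exclusive:
                self._cond.wait()
            self._shared += 1

    def release_shared(self):
        with self._cond:
            self._shared -= 1
            self._cond.notify_all()

    def acquire_exclusive(self):
        with self._cond:
            # Waits out all in-flight shared holders -- the 'tick'.
            while self._exclusive or self._shared:
                self._cond.wait()
            self._exclusive = True

    def release_exclusive(self):
        with self._cond:
            self._exclusive = False
            self._cond.notify_all()

lock = ShareExclusiveLock()
staging, visible = [], []    # stage_data_queue / data_queue
next_seq = 0

def write(sender_key, blob):
    """Writer: take the shared lock, assign the next sequence_num, and
    release only after the row is durably in staging (the 'commit')."""
    global next_seq
    lock.acquire_shared()
    try:
        next_seq += 1
        staging.append((next_seq, sender_key, blob))
    finally:
        lock.release_shared()

def tick():
    """Ticker: the exclusive lock lets all in-flight writes finish, then
    moves staged rows to the reader-visible table and truncates staging."""
    lock.acquire_exclusive()
    try:
        visible.extend(staging)
        staging.clear()
    finally:
        lock.release_exclusive()
```

In the real database the race the post worries about lives in the gap between releasing the advisory lock and the transaction commit becoming visible; in this model there is no such gap, which is exactly the property the "hold the lock until commit" fix is trying to restore.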
[GENERAL] Time Series on Postgres (HOWTO?)
I've been googling, but haven't found a good answer to what I should do if I want to store time series in Postgres. My current solution is to store serialized (compressed) blobs of data. (So for example, store 1 day's worth of 1-minute samples (~1440 samples) as one row in a bytea, plus metadata.) It would be nice if I could use 1 sample per column (because updating individual columns/samples is clear to me), but Postgres doesn't compress the row, which is bad because of the high amount of repetitive data: easily 10X bigger. I've been considering a Double[] array, which would get compressed, but before I start down that path (I suppose I need to make some stored procs to update individual samples), has anyone built anything like this? Any open source projects I should look at? Thanks.
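The blob-per-day layout described above can be sketched as follows. This assumes 1440 fixed-width float64 samples per row, packed with struct and compressed with zlib (the helper names pack_day etc. are hypothetical); it also shows why single-sample updates need a stored-proc-style unpack/patch/repack of the whole blob:

```python
import struct, zlib

SAMPLES_PER_DAY = 1440           # one sample per minute

def pack_day(samples):
    """Serialize one day's samples into a compressed blob (the bytea)."""
    assert len(samples) == SAMPLES_PER_DAY
    raw = struct.pack("<%dd" % SAMPLES_PER_DAY, *samples)
    return zlib.compress(raw)

def unpack_day(blob):
    """Inverse of pack_day."""
    raw = zlib.decompress(blob)
    return list(struct.unpack("<%dd" % SAMPLES_PER_DAY, raw))

def set_sample(blob, minute, value):
    """Update a single sample: the whole blob must be decompressed,
    patched, and recompressed -- the trade-off of this layout."""
    samples = unpack_day(blob)
    samples[minute] = value
    return pack_day(samples)
```

Repetitive series compress very well under this scheme (a day of identical samples shrinks far below the raw 1440 × 8 bytes), which is the "easily 10X" effect the post mentions; the cost is that every point update rewrites the row.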
[GENERAL] missing chunk number (Bug 5507)
I found this link: http://postgresql.1045698.n5.nabble.com/BUG-5507-missing-chunk-number-0-for-toast-value-X-in-pg-toast-X-td2126674.html And I am also experiencing the same issue... More anecdotal evidence that this is a bug: we recently (a couple of days ago) completely recreated the DB and started inserting, so it's not data rot. After a couple of days of inserts, we could no longer read due to the TOAST error. We had WAL shipping to a warm standby (we are using 9.0), and it also had the same issues after replaying the WAL.
[GENERAL] Anyone use PG with kvm/virtio? Any gotchas or recommended settings?
Looks like the recommended settings are using the virtio interface, cache=none, and raw partitions (not qcow2). Anyone else run into any problems with kvm or virtio? We currently have a setup using qcow2, virtio, and the default cache settings, and experienced some data corruption (not preceded by crashes or restarts of postgres), and we are wondering if that might be the culprit.
[GENERAL] Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?
I'm noticing some interesting behavior around timestamp and extract epoch, and it appears that I'm getting a timezone applied somewhere. Specifically, If I do: select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME ZONE ); == 1264924800 select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME ZONE ); == 1270105200 Now if I do something similar in Java.. using a GregorianCalendar, with "GMT" TimeZone. I get Hello:2010-01-31 00:00:00.000 (UTC) Hello:126489600 Hello:2010-04-01 00:00:00.000 (UTC) Hello:127008000 Which gives a difference of 8 and 7 hours respectively, so both a timezone and a DST shift are at work here. Is this the expected behavior of extract epoch, is there a way to get it to always be in GMT?
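The 8- and 7-hour gaps are exactly what you would see if the server interprets the naive timestamp in its own time zone before computing the epoch (the numbers match US/Pacific, with DST in effect by April), while the Java code interprets the same wall-clock value as GMT. A sketch with Python's zoneinfo, assuming that server setting, reproduces both sets of numbers:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

naive_jan = datetime(2010, 1, 31)   # '2010-01-31 00:00:00', no zone attached
naive_apr = datetime(2010, 4, 1)

# Epoch when the naive value is read as US/Pacific local time (what a
# server with TimeZone = 'US/Pacific' would effectively do):
pacific = ZoneInfo("America/Los_Angeles")
print(int(naive_jan.replace(tzinfo=pacific).timestamp()))  # 1264924800 (UTC-8)
print(int(naive_apr.replace(tzinfo=pacific).timestamp()))  # 1270105200 (UTC-7, DST)

# Epoch when the same wall-clock value is read as UTC (the Java/GMT view):
print(int(naive_jan.replace(tzinfo=timezone.utc).timestamp()))  # 1264896000
print(int(naive_apr.replace(tzinfo=timezone.utc).timestamp()))  # 1270080000
```

The differences are 28800 s (8 h) in January and 25200 s (7 h) in April: a fixed offset plus a DST shift, matching what the post observed.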
[GENERAL] Re: Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?
Looks like a quick search says I need to specify the timezone... On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres wrote: > > I'm noticing some interesting behavior around timestamp and extract epoch, > and it appears that I'm getting a timezone applied somewhere. > > Specifically, If I do: > select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME > ZONE ); == 1264924800 > select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME > ZONE ); == 1270105200 > > Now if I do something similar in Java.. using a GregorianCalendar, with > "GMT" TimeZone. > I get > Hello:2010-01-31 00:00:00.000 (UTC) > Hello:126489600 > > Hello:2010-04-01 00:00:00.000 (UTC) > Hello:127008000 > > Which gives a difference of 8 and 7 hours respectively, so both a timezone > and a DST shift are at work here. > > Is this the expected behavior of extract epoch, is there a way to get it to > always be in GMT? > > > > >
[GENERAL] Re: Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?
no.. still confused. I assume it's storing everything in UTC.. did I need to specify a timezone when I inserted? On Thu, Mar 17, 2011 at 11:24 AM, bubba postgres wrote: > Looks like a quick search says I need to specify the timezone... > > > On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres > wrote: > >> >> I'm noticing some interesting behavior around timestamp and extract epoch, >> and it appears that I'm getting a timezone applied somewhere. >> >> Specifically, If I do: >> select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME >> ZONE ); == 1264924800 >> select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME >> ZONE ); == 1270105200 >> >> Now if I do something similar in Java.. using a GregorianCalendar, with >> "GMT" TimeZone. >> I get >> Hello:2010-01-31 00:00:00.000 (UTC) >> Hello:126489600 >> >> Hello:2010-04-01 00:00:00.000 (UTC) >> Hello:127008000 >> >> Which gives a difference of 8 and 7 hours respectively, so both a timezone >> and a DST shift are at work here. >> >> Is this the expected behavior of extract epoch, is there a way to get it >> to always be in GMT? >> >> >> >> >> >
[GENERAL] Re: Is TimeZone applied with TIMESTAMP WITHOUT TIME ZONE and Extract( EPOCH ...)?
ok got it. select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME ZONE at time zone 'utc' ); On Thu, Mar 17, 2011 at 11:32 AM, bubba postgres wrote: > no.. still confused. > I assume it's storing everythign in UTC.. did I need to specify a timezone > when I inserted? > > > > On Thu, Mar 17, 2011 at 11:24 AM, bubba postgres > wrote: > >> Looks like a quick search says I need to specify the timezone... >> >> >> On Thu, Mar 17, 2011 at 11:22 AM, bubba postgres < >> bubba.postg...@gmail.com> wrote: >> >>> >>> I'm noticing some interesting behavior around timestamp and extract >>> epoch, and it appears that I'm getting a timezone applied somewhere. >>> >>> Specifically, If I do: >>> select EXTRACT( EPOCH FROM '2010-01-31 00:00:00'::TIMESTAMP WITHOUT TIME >>> ZONE ); == 1264924800 >>> select EXTRACT( EPOCH FROM '2010-04-01 00:00:00'::TIMESTAMP WITHOUT TIME >>> ZONE ); == 1270105200 >>> >>> Now if I do something similar in Java.. using a GregorianCalendar, with >>> "GMT" TimeZone. >>> I get >>> Hello:2010-01-31 00:00:00.000 (UTC) >>> Hello:126489600 >>> >>> Hello:2010-04-01 00:00:00.000 (UTC) >>> Hello:127008000 >>> >>> Which gives a difference of 8 and 7 hours respectively, so both a >>> timezone and a DST shift are at work here. >>> >>> Is this the expected behavior of extract epoch, is there a way to get it >>> to always be in GMT? >>> >>> >>> >>> >>> >> >
[GENERAL] why does extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) == 16?
Is this the correct behavior? It seems like if I specify the utc offset it should be 0, not 16.. It seems to be the opposite behavior from extract epoch. select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as defhour, extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'PST' ) as psthour, extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) as utchour, extract ( epoch FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) as utcepoch; 0,0,16,1262304000 @Test public void testFoo() { TimeZone tz = TimeZone.getTimeZone("GMT"); GregorianCalendar cal = new GregorianCalendar(tz); cal.set(2010,0,1,0,0,0); cal.set(GregorianCalendar.MILLISECOND, 0 ); System.out.println("" + cal.getTimeInMillis() ); System.out.println("" + String.format( "%1$tY-%1$tm-%1$td %1$tH:%1$tM:%1$tS.%1$tL", cal ) ); System.out.println("" + cal.get(GregorianCalendar.HOUR_OF_DAY ) ); } In Java: 126230400 2010-01-01 00:00:00.000 (UTC) 0
[GENERAL] Re: why does extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) == 16?
I found a work around... Not sure why this is the behavior select extract ( HOUR FROM (TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) at time zone 'utc' ) gives what I expect would be the correct answer BUT.. select extract ( EPOCH FROM (TIMESTAMP '2010-01-01 00:00:00' at time zone 'utc' ) at time zone 'utc' ) does not... Can anyone explain this? On Thu, Mar 17, 2011 at 5:05 PM, bubba postgres wrote: > Is this the correct behavior? It seems like if I specify the utc offset it > should be 0, not 16.. It seems to be the opposite behavior from extract > epoch. > > select extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' ) as defhour, > extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone 'PST' ) as > psthour, extract ( HOUR FROM TIMESTAMP '2010-01-01 00:00:00' at time zone > 'utc' ) as utchour, extract ( epoch FROM TIMESTAMP '2010-01-01 00:00:00' at > time zone 'utc' ) as utcepoch; > > 0,0,16,1262304000 > > > > > @Test > public void testFoo() { > TimeZone tz = TimeZone.getTimeZone("GMT"); > GregorianCalendar cal = new GregorianCalendar(tz); > cal.set(2010,0,1,0,0,0); > cal.set(GregorianCalendar.MILLISECOND, 0 ); > System.out.println("" + cal.getTimeInMillis() ); > System.out.println("" + String.format( "%1$tY-%1$tm-%1$td > %1$tH:%1$tM:%1$tS.%1$tL", cal ) ); > System.out.println("" + cal.get(GregorianCalendar.HOUR_OF_DAY ) ); > } > > In Java: > 126230400 > 2010-01-01 00:00:00.000 (UTC) > 0 >
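One consistent way to read the behavior in this thread: applying AT TIME ZONE 'utc' to a plain timestamp declares the naive value to be UTC, producing a timestamp with time zone, which extract(HOUR ...) then sees rendered in the session time zone (apparently PST here, hence 16); applying AT TIME ZONE 'utc' a second time converts that value back to a naive UTC timestamp, hence the workaround. A sketch of that reading, with the session zone as an assumption:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

session_tz = ZoneInfo("America/Los_Angeles")  # assumed session TimeZone

naive = datetime(2010, 1, 1)                  # TIMESTAMP '2010-01-01 00:00:00'

# TIMESTAMP AT TIME ZONE 'utc': declare the naive value to be UTC...
as_utc = naive.replace(tzinfo=timezone.utc)
# ...and the session then views the result in local time before
# extract(HOUR) is applied:
local_view = as_utc.astimezone(session_tz)
print(local_view.hour)                        # 16 -- the surprising result

# A second AT TIME ZONE 'utc' converts back to a naive UTC value:
round_trip = as_utc.astimezone(timezone.utc).replace(tzinfo=None)
print(round_trip.hour)                        # 0
```

Midnight UTC on 2010-01-01 is 16:00 on 2009-12-31 in Pacific time, which is where the 16 comes from; the double application simply undoes the session-zone rendering.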
[GENERAL] JDBC Binary transfer of Arrays
Hello! In my current application I am sending a lot of data to/from the DB with JDBC, and specifically arrays of Double. (or even Double[][]). Alas, my current implementation converts everything into a string representation, which blows my memory out of the water and drops my transaction rate way way down while I spend most of my time working with a StringBuffer. I note that there is this web page: http://wiki.postgresql.org/wiki/JDBC-BinaryTransfer Which makes me think I might be saved. Has anyone used Binary Transfer for Double[] (or other) array data? Any pointers to some implementation would be great... Right now I have my own implementation that extends java.sql.Array, and contains the string and a typeName and num. Regards, -JD
Re: [GENERAL] JDBC Binary transfer of Arrays
After trying out the JDBC4 driver in DBCP, I see that Connection.createArray(...) still just creates a big string under the covers. Is that the expected behavior? Am I doing it wrong? On Thu, Mar 31, 2011 at 8:04 PM, bubba postgres wrote: > Hello! > In my current application I am sending a lot of data to/from the DB with > JDBC, and specifically arrays of Double. (or even Double[][]). > Alas, my current implementation I converts everything into a string > representation, which blows my memory out of the water and drops my > transaction rate way way down while I spend most of my time working with a > StringBuffer. > > I note that there is this web page: > http://wiki.postgresql.org/wiki/JDBC-BinaryTransfer > > Which makes me think I might be saved. Has anyone used Binary Transfer for > Double[] (or other) array data? > Any pointers to some implementation would be great... > Right now I have my own implementation that extends java.sql.Array, and > contains the string and a typeName and num. > > Regards, > -JD >
[GENERAL] DBlink, postgres to DB2
Hi, I am trying to connect to DB2 from Postgres using dblink. Is there any configuration required on the DB2 or Postgres server? If there is any command string, please provide it. Thanks Manmohan. K
Re: [GENERAL] DBlink, postgres to DB2
Thanks a lot Remi, Merlin and Pavel...I will give it a try. On Wed, Oct 22, 2014 at 6:36 AM, Merlin Moncure wrote: > On Wed, Oct 22, 2014 at 5:56 AM, Pavel Stehule > wrote: > > Hi > > > > try to use a Perl implementation https://github.com/davidfetter/DBI-Link > > > > There is a DBD driver for DB2 > http://search.cpan.org/dist/DBD-DB2/DB2.pod > > > > Or you can use a mentioned fdw wrapper - there is ODBC wrapper > > https://wiki.postgresql.org/wiki/Foreign_data_wrappers#odbc_fdw and you > can > > access DB2 via generic ODBC DB2 driver > > jdbc-fdw should work too. > > https://github.com/atris/JDBC_FDW > > merlin >
Re: [GENERAL] DBlink, postgres to DB2
Hi, While I try to install psqlODBC, I get this error. I have SUSE Linux Enterprise Server 11 and am trying to find a solution, but still have no idea. ./configure checking for a BSD-compatible install... /usr/bin/install -c checking whether build environment is sane... yes checking for a thread-safe mkdir -p... /bin/mkdir -p checking for gawk... gawk checking whether make sets $(MAKE)... yes checking whether to enable maintainer-specific portions of Makefiles... no checking for gcc... gcc checking whether the C compiler works... yes checking for C compiler default output file name... a.out checking for suffix of executables... checking whether we are cross compiling... no checking for suffix of object files... o checking whether we are using the GNU C compiler... yes checking whether gcc accepts -g... yes checking for gcc option to accept ISO C89... none needed checking for style of include used by make... GNU checking dependency style of gcc... gcc3 checking -Wall is a valid compile option... yes checking for odbc_config... no configure: error: odbc_config not found (required for unixODBC build) Regards Manmohan. K On Thu, Oct 23, 2014 at 2:41 PM, Миша Тюрин wrote: > > http://multicorn.org/ > another cool stuff to generalize interconnections > > > Wed, 22 Oct 2014 22:34:06 -0700 от Postgres India : > > Thanks a lot Remi, Merlin and Pavel...I will give it a try. 
> > On Wed, Oct 22, 2014 at 6:36 AM, Merlin Moncure <[EMAIL PROTECTED]> wrote: > > On Wed, Oct 22, 2014 at 5:56 AM, Pavel Stehule <[EMAIL PROTECTED]> > wrote: > > Hi > > > > try to use a Perl implementation https://github.com/davidfetter/DBI-Link > > > > There is a DBD driver for DB2 > http://search.cpan.org/dist/DBD-DB2/DB2.pod > > > > Or you can use a mentioned fdw wrapper - there is ODBC wrapper > > https://wiki.postgresql.org/wiki/Foreign_data_wrappers#odbc_fdw and you > can > > access DB2 via generic ODBC DB2 driver > > jdbc-fdw should work too. > > https://github.com/atris/JDBC_FDW > > merlin
[GENERAL] Active/Active clustering in postgres
Hi All, I am looking for PostgreSQL active/active clustering, and whether PostgreSQL supports any form of shared-storage clustering. Are there any methods or tools for implementing active/active clustering on Postgres supported by the community, or any third-party tools? Regards Manmohan
[GENERAL] Regex query not using index
I'm running a simple query on 8.2. With this syntax, Explain indicates that the index is scanned: select * from eod where name = 'AA' However, when I change the query to use simple regex: select * from eod where name ~ 'AA' now Explain indicates a seq scan: Index Scan using equity_eod_symbol_idx on equity_eod (cost=0.00..8.27 rows=1 width=149) Index Cond: ((symbol)::text = 'AA'::text) Is there any way to 'encourage' Postgres to hit the index when using regex? Do I need to create a functional index or something? Without the index in play, I really can't use regex on any of my larger tables. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Regex query not using index
Thanks, my dumb mistake. I need to perform the equivalent of a WHERE clause OR expression using regex to match exact strings. _ this example hits the index: select * from eod where name ~ '^BA$' but when I try to add another possible value to the regex, it does a row scan: select * from eod where name ~ '^BA$|^AA$' both of these statements return the right results, but the 2nd ignores the index even though both values are left-anchored. any workaround- this behavior doesn't seem to make sense On Feb 19, 2008 8:45 PM, Erik Jones <[EMAIL PROTECTED]> wrote: > > On Feb 19, 2008, at 9:32 PM, Postgres User wrote: > > > I'm running a simple query on 8.2. With this syntax, Explain indicate > > that the index is scanned: > > select * from eod where name = 'AA' > > > > However, when I change the query to use simple regex: > > select * from eod where name ~ 'AA' > > > > now Explain indicates a seq scan: > > Index Scan using equity_eod_symbol_idx on equity_eod (cost=0.00..8.27 > > rows=1 width=149) > > Index Cond: ((symbol)::text = 'AA'::text) > > > > Is there any way to 'encourage' Postgres to hit the index when using > > regex? Do I need to create a functional index or something? > > Without the index in play, I really can't use regex on any of my > > larger tables. > > You need it to be anchored: > > select * from eod where name ~ '^AA'; > > If you're looking to be able to use indexes for searches within a > string then, for 8.2, you'll need to check out tsearch2. > > Erik Jones > > DBA | Emma(R) > [EMAIL PROTECTED] > 800.595.4401 or 615.292.5888 > 615.292.0777 (fax) > > Emma helps organizations everywhere communicate & market in style. > Visit us online at http://www.myemma.com > > > > ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Regex query not using index
I'm trying to allow the client to pass a varchar param into my function, and want to avoid any parsing of the parameter inside the function, or code to build a SQL string. if the function can use this code, it will be compiled and optimized (unlike a dynamic SQL string) select * from mytable where fielda ~ p_param (where p_param is the input parameter) On Feb 19, 2008 9:34 PM, Chris <[EMAIL PROTECTED]> wrote: > Postgres User wrote: > > Yes that works, but the whole point of the exercise is replace many OR > > statements with 1 regex expression. So it's not what I'm looking for. > > Why do you want it done this way? > > You can build an array of strings to check and use an in clause. > > Using php : > > $checks = array('AA', 'BA'); > > $query = "select * from table where name in ('" . implode("','", > $checks) . "')"; > > and it should use an index (up to a point anyway). > > -- > > Postgresql & php tutorials > http://www.designmagick.com/ > ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [GENERAL] Regex query not using index
Yes that works, but the whole point of the exercise is replace many OR statements with 1 regex expression. So it's not what I'm looking for. On Feb 19, 2008 9:16 PM, Chris <[EMAIL PROTECTED]> wrote: > Postgres User wrote: > > Thanks, my dumb mistake. > > I need to perform the equivalent of a WHERE clause OR expression using > > regex to match exact strings. > > > > _ > > > > this example hits the index: > > select * from eod where name ~ '^BA$' > > > > but when I try to add another possible value to the regex, it does a row > > scan: > > select * from eod where name ~ ^BA$|^AA$' > > > > both of these statements return the right results, but the 2nd ignores > > the index even though both values are left-anchored. > > > > any workaround- this behavior doesn't seem to make sense > > try changing it to > > select * from eod where (name ~ '^BA$' or name ~ '^AA$') > > though in this example they should both be name = 'XX' rather than regex'es. > > -- > Postgresql & php tutorials > http://www.designmagick.com/ > ---(end of broadcast)--- TIP 6: explain analyze is your friend
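The IN-clause approach quoted above is usually built with bound parameters rather than string concatenation, which sidesteps the quoting problems of the PHP sketch. A sketch of the query-building side (the placeholder style is psycopg2's %s as an assumption, and build_name_filter is a hypothetical helper):

```python
def build_name_filter(names):
    """Build a parameterized IN-style query for an exact-match list.

    Passing the values as bound parameters, instead of concatenating
    quoted literals into the SQL, avoids injection/quoting bugs and
    should still let the planner use the index on name.
    """
    if not names:
        # No values: match nothing, rather than emitting a
        # 'name = name' style predicate that forces a row scan.
        return "select * from eod where false", []
    placeholders = ", ".join(["%s"] * len(names))
    return "select * from eod where name in (%s)" % placeholders, list(names)

query, params = build_name_filter(["AA", "BA"])
print(query)    # select * from eod where name in (%s, %s)
print(params)   # ['AA', 'BA']
```

The query and params pair would then go to the driver's execute call; with OR/IN conditions on plain equality, the planner can expand to multiple index scans in a way it cannot for a multi-branch regex.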
Re: [GENERAL] Regex query not using index
By the way, your example works fine unless it's a null value or empty string. Unfortunately, Postgres isn't smart enough to know that when p_param below is null, the WHERE condition can be ignored: select * from table where name in (Coalesce(p_param, name)) which is the same as: select * from table where name in (name) Postgres does a row scan on the above SQL. Too slow. On Feb 19, 2008 9:34 PM, Chris <[EMAIL PROTECTED]> wrote: > Postgres User wrote: > > Yes that works, but the whole point of the exercise is replace many OR > > statements with 1 regex expression. So it's not what I'm looking for. > > Why do you want it done this way? > > You can build an array of strings to check and use an in clause. > > Using php : > > $checks = array('AA', 'BA'); > > $query = "select * from table where name in ('" . implode("','", > $checks) . "')"; > > and it should use an index (up to a point anyway). > > -- > > Postgresql & php tutorials > http://www.designmagick.com/
Re: [GENERAL] Regex query not using index
doh! tom, let me know if you decide to hack out a fix for this one of these nights ;) thanks for your help. On Feb 19, 2008 9:45 PM, Tom Lane <[EMAIL PROTECTED]> wrote: > "Postgres User" <[EMAIL PROTECTED]> writes: > > Yes that works, but the whole point of the exercise is replace many OR > > statements with 1 regex expression. So it's not what I'm looking for. > > Unfortunately, Postgres is not as intelligent as you are. There is > no mechanism to rewrite a multi-branch regex condition into multiple > indexscans. I recommend going back to the OR's. > > regards, tom lane > ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [GENERAL] Regex query not using index
A final question: why does this syntax do a seq scan + filter: select * from tablea where fielda = fielda -or- select * from tablea where fielda in (fielda) while this syntax results in no filter, seq scan only: select * from tablea where 1 = 1 It seems that both where clauses should be ignored by the optimizer- or am I missing something? On Feb 19, 2008 9:45 PM, Tom Lane <[EMAIL PROTECTED]> wrote: > "Postgres User" <[EMAIL PROTECTED]> writes: > > Yes that works, but the whole point of the exercise is replace many OR > > statements with 1 regex expression. So it's not what I'm looking for. > > Unfortunately, Postgres is not as intelligent as you are. There is > no mechanism to rewrite a multi-branch regex condition into multiple > indexscans. I recommend going back to the OR's. > > regards, tom lane
Re: [GENERAL] Regex query not using index
Tino, My users are developers and the goal was to accept a simple comma-delimited list of string values as a function's input parameter. The function would then parse this input param into a valid regex expression. I was trying to write a function that lets me avoid using Execute and instead write in-line SQL with all the benefits of pre-compilation and optimization. Regex offers such a technique- IF it could understand regex that represents a set of logical ORs and do an index scan (my rule is to avoid seq-scans). An example of regex that allows you to use in-line SQL with a condition equivalent to many OR conditions when using basic comparison operators: select * from table1 where name ~ '^Smith$|^Jones$' And this works very well- except for the seq scan instead of an index scan. On Feb 20, 2008 2:31 AM, Tino Wildenhain <[EMAIL PROTECTED]> wrote: > Postgres User wrote: > > im trying to allow the client to pass a varchar param into my > > function, and want to avoid any parsing of the parameter inside the > > function, or code to build a sql string. > > > > if the function can use this code, it will be compiled and optimized > > (unlike a dynamic sql stirng) > > > > select * from mytable where fielda ~ p_param > > No, you should never let users specify raw regex. at best they can > hog down your server. Regex is a state engine and you can create > endless loops. > > Maybe we can see the overall picture of your query? > > Regards > Tino
Re: [GENERAL] Regex query not using index
Tom, I was looking for another approach but didn't come across that array syntax in my searches (perhaps because it's newer). Thanks for a solution. Now to end my fixation, one last item. What about the case of a null or empty param value- is there a way to assign a condition value that Postgres will ignore when processing the query? This syntax results in a seq scan: WHERE fielda = Coalesce(param, fielda) because it applies only to non-nulls. Is there another way to write this- perhaps using your array syntax on an empty array? Basically I'd like PG to ignore the condition just as it ignores WHERE 1 = 1 On Wed, Feb 20, 2008 at 8:31 AM, Tom Lane <[EMAIL PROTECTED]> wrote: > "Postgres User" <[EMAIL PROTECTED]> writes: > > > My users are developers and the goal was to accept a simple > > comma-delimited list of string values as a function's input parameter. > > The function would then parse this input param into a valid regex > > expression. > > Why are you fixated on this being a regex? If you aren't actually > trying to expose regex capabilities to the users, you'll just be having > to suppress a bunch of strange behaviors for special characters. > > ISTM that the best solution is to use an array-of-text parameter, > along the lines of > > where name = any (array['Smith', 'Jones', ...]) > > For what you're doing, you'd not actually want the array[] syntax, > it would look more like > > where name = any ('{Smith,Jones}'::text[]) > > This should optimize into an indexscan in 8.2 or later. > > regards, tom lane
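For archive readers, the suggestions in this thread can be combined into one sketch. Assuming the function parameter p_param holds a comma-delimited list (the table and column names below are the ones used in the thread), string_to_array plus = ANY keeps the query index-friendly on 8.2+, and an explicit IS NULL test covers the empty-parameter case that Coalesce cannot optimize:

```sql
-- Sketch only; 'eod', 'name', and 'p_param' are the examples from this thread.
-- On 8.2+, "name = ANY(array)" can be turned into an index scan.
SELECT *
FROM   eod
WHERE  p_param IS NULL                              -- no filter requested
   OR  name = ANY (string_to_array(p_param, ','));  -- e.g. 'Smith,Jones'
```

One caveat: with the OR present in a single cached PL/pgSQL plan, the planner may still choose a seq scan; branching with IF p_param IS NULL into two separate statements is the reliable way to get the index scan in the filtered case.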
[GENERAL] Trigger transactions
if a function includes this SQL: Update Table1 Set field_1 = 'ab'; Insert Into Table2(field_2) VALUES('cd'); and I create an update trigger on Table1: Create Trigger Table1_Update AFTER Update On Table1 FOR EACH ROW: Select * From Table2 will the Select statement in the trigger see the row that I inserted in the main function? if not, is there another way to write these statements so that it does? thanks
Re: [GENERAL] Trigger transactions
Question answered- needed to move the Insert statement before the Update in the main function. On Fri, Mar 21, 2008 at 6:42 PM, Postgres User <[EMAIL PROTECTED]> wrote: > if a function includes this SQL: > > Update Table1 Set field_1 = 'ab'; > Insert Into Table2(field_2) VALUES('cd'); > > > and I create an update trigger on Table1: > > Create Trigger Table1_Update AFTER Update > On Table1 FOR EACH ROW: > > Select * From Table2 > > > will the Select statement in the trigger see the row that I inserted > in the main function? > if not, is there another way to write these statements so that it does? > > thanks >
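For readers following along: the trigger DDL sketched in the question is not valid as written, because a PostgreSQL trigger always calls a trigger function rather than inlining SQL. A sketch of the working shape (table and function names are the hypothetical ones from the question):

```sql
-- A trigger function sees all changes made by commands that ran *before*
-- the triggering statement -- which is why moving the Insert ahead of the
-- Update answered the original question.
CREATE OR REPLACE FUNCTION table1_update_trg() RETURNS trigger AS $$
BEGIN
    -- Check Table2 for the row inserted earlier in the same transaction.
    PERFORM 1 FROM table2 WHERE field_2 = 'cd';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_update
AFTER UPDATE ON table1
FOR EACH ROW EXECUTE PROCEDURE table1_update_trg();
```

EXECUTE PROCEDURE is the syntax of the 8.x versions discussed in this archive.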
[GENERAL] timestamp problem
Hello, I have 2 databases, one for production use and another for developer use. Both are the same. In these databases, I have a procedure which has a TIMESTAMP datatype as an IN parameter. My problem is: when my Java application uses the developer database and calls this procedure, it works fine, but when I switch to the production database, it shows an error that this procedure does not exist- it is looking for a procedure with TIMESTAMPTZ. Why? Is there somewhere in Postgres I can set it to look for TIMESTAMP rather than TIMESTAMPTZ? Regards Louis
[GENERAL] Duplicated primary key id happen
Hello, I have a problem restoring a database, because there is a problem in my production database which is not supposed to happen, but it did- I'm not sure whether it is a Postgres bug or not. I have a table called CS_SR_MTHLY_RTN which has MTHLY_RTN_ID (INTEGER) as a primary key, so the MTHLY_RTN_ID value should be unique, but somehow there are 2 records whose id is 15 in this table. The first record looks fine to me, because the data in the other columns look OK. But the second one is definitely wrong, I'm sure. There are values in columns sr_batch_id and er_batch_id, and the values are very, very big (808464726 & 842018867), and the rest of the column values are null. How can this happen? Currently using Postgres 8.2.5 Regards Louis
[GENERAL] Scripting function definitions as SQL?
Has anyone written a function that scripts out all the functions in a database as full SQL statements (CREATE FUNCTION)? I found that the SQL below will return all the fields needed to build such a statement, but it would take some work to combine the field values correctly to get the right format. So does anyone know if the code has already been written by someone else? SELECT p.proname AS name, p.oid, p.proargtypes AS args, ds.description , p.prorettype AS rettype, p.proretset, p.probin, p.proisstrict AS strict, p.prosrc AS body, l.lanname AS lang, u.usename, p.prosecdef, p.provolatile, p.proisagg, n.nspname, proargnames, p.proargmodes, p.proallargtypes FROM pg_proc p LEFT OUTER JOIN pg_description ds ON ds.objoid = p.oid INNER JOIN pg_namespace n ON p.pronamespace = n.oid INNER JOIN pg_language l ON l.oid = p.prolang LEFT OUTER JOIN pg_user u ON u.usesysid = p.proowner WHERE n.nspname = 'main' ORDER BY p.proname, n.nspname
Re: [GENERAL] Scripting function definitions as SQL?
Unfortunately I didn't see a way to tell pg_dump to dump only objects of a specific type, like functions or sequences. It requires additional coding to parse the output and that's less than ideal... > Does pg_dump not do what you want? On Sun, May 11, 2008 at 6:49 AM, Martijn van Oosterhout <[EMAIL PROTECTED]> wrote: > On Sun, May 11, 2008 at 06:12:40AM -0700, Postgres User wrote: > > Has anyone written a function that scripts out all the functions in a > > database as full SQL statements (Create Function.) > > > > I found the below SQL will return all the fields needed to build a SQL > > statement, but it would take some work to combine the field values > > correctly to get the right format. So does anyone know if the code > > has already been written by someone else? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Scripting function definitions as SQL?
Yes, but I'm doing this from a Delphi program in Windows and that's why I'm looking for a solution that's SQL-based. It would be nice if one of the system catalog views handled it. > hmmm .. "additional coding" seems a bit too much for a simple thing like > this: > pg_dump -s | perl -ne 'print if /^CREATE FUNCTION test_it/../^\s+LANGUAGE/' > > of course it would be cool to have switch to do it, but hey - it hardly > even qualifies as one-liner. it's more "an expression" than code. On Sun, May 11, 2008 at 11:43 AM, hubert depesz lubaczewski <[EMAIL PROTECTED]> wrote: > On Sun, May 11, 2008 at 11:28:37AM -0700, Postgres User wrote: > > Unfortunately I didn't see a way to tell pg_dump to dump only objects > > of a specific type, like functions or sequences. It requires > > additional coding to parse the output and that's less than ideal... > > hmmm .. "additional coding" seems a bit too much for a simple thing like > this: > pg_dump -s | perl -ne 'print if /^CREATE FUNCTION test_it/../^\s+LANGUAGE/' > > of course it would be cool to have switch to do it, but hey - it hardly > even qualifies as one-liner. it's more "an expression" than code. > > > -- > quicksil1er: "postgres is excellent, but like any DB it requires a > highly paid DBA. here's my CV!" :) > http://www.depesz.com/ - blog dla ciebie (i moje CV) > -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
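A note for archive readers wanting the SQL-based solution asked for here: from PostgreSQL 8.4 onward, the server can reconstruct the full CREATE FUNCTION statement itself via pg_get_functiondef(), which avoids both pg_dump parsing and hand-assembling the pieces from pg_proc:

```sql
-- Requires PostgreSQL 8.4+; pg_get_functiondef() returns the complete
-- CREATE OR REPLACE FUNCTION statement for each function OID.
SELECT pg_get_functiondef(p.oid)
FROM   pg_proc p
JOIN   pg_namespace n ON n.oid = p.pronamespace
WHERE  n.nspname = 'main'        -- schema name from the original query
  AND  NOT p.proisagg;           -- pg_get_functiondef() rejects aggregates
```

This returns one text value per function, ready to execute from any client, including a Delphi program on Windows.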
[GENERAL] postgres crash when select a record
Hello, I wish someone could help me with this. I have a table which has 100500 records; when I try to query this particular record select * from cs_sr_mthly_rtn where mthly_rtn_id = 61609; Postgres crashes and shows this error: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. And in the serverlog, it shows: LOG: server process (PID 395) was terminated by signal 11 LOG: terminating any other active server processes FATAL: the database system is in recovery mode LOG: all server processes terminated; reinitializing LOG: database system was interrupted at 2008-05-14 12:52:15 BST LOG: checkpoint record is at 0/48DEC034 LOG: redo record is at 0/48DEC034; undo record is at 0/0; shutdown TRUE LOG: next transaction ID: 0/705241; next OID: 49152 LOG: next MultiXactId: 1; next MultiXactOffset: 0 LOG: database system was not properly shut down; automatic recovery in progress LOG: redo starts at 0/48DEC07C LOG: unexpected pageaddr 0/433AC000 in log file 0, segment 74, offset 3850240 LOG: redo done at 0/4A3ABFC0 LOG: database system is ready Because of this, I'm not able to do a pg_dump backup anymore. Can anyone help me solve this problem? Thanks louis
Re: [GENERAL] postgres crash when select a record
No, I didn't run the query while the DB was recovering. I did that after database system is ready - Original Message From: Glyn Astill <[EMAIL PROTECTED]> To: [EMAIL PROTECTED]; Postgres Sent: Wednesday, May 14, 2008 6:07:23 PM Subject: Re: [GENERAL] postgres crash when select a record Looks like you tried to run your query whilst postgres was recovering from an improper shutdown, try once again when the last thing in the log is LOG: database system is ready - Original Message From: "[EMAIL PROTECTED]" <[EMAIL PROTECTED]> To: Postgres Sent: Wednesday, 14 May, 2008 5:07:03 PM Subject: [GENERAL] postgres crash when select a record Hello, I wish some could help me on this. I got a table which has 100500 records, when I try to query this particular record select * from cs_sr_mthly_rtn where mthly_rtn_id = 61609; Postgres crash, and show this errors: server closed the connection unexpectedly This probably means the server terminated abnormally before or while processing the request. And in the serverlog, it shows: LOG: server process (PID 395) was terminated by signal 11 LOG: terminating any other active server processes FATAL: the database system is in recovery mode LOG: all server processes terminated; reinitializing LOG: database system was interrupted at 2008-05-14 12:52:15 BST LOG: checkpoint record is at 0/48DEC034 LOG: redo record is at 0/48DEC034; undo record is at 0/0; shutdown TRUE LOG: next transaction ID: 0/705241; next OID: 49152 LOG: next MultiXactId: 1; next MultiXactOffset: 0 LOG: database system was not properly shut down; automatic recovery in progress LOG: redo starts at 0/48DEC07C LOG: unexpected pageaddr 0/433AC000 in log file 0, segment 74, offset 3850240 LOG: redo done at 0/4A3ABFC0 LOG: database system is ready Because of this, I'm not able to do pg_dump backup anymore. Anyone can help me to solve this problem? Thanks louis Sent from Yahoo! Mail. A Smarter Email.
Re: [GENERAL] postgres crash when select a record
Hello, Yes, I deleted that record, and now my backup is working fine! Thanks But why this problem came out? Is there anyway to trace it down and how it happen? Or is it a bug of postgres? Regards Louis - Original Message From: Pavel Stehule <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Cc: Postgres Sent: Wednesday, May 14, 2008 6:13:05 PM Subject: Re: [GENERAL] postgres crash when select a record Hello Delete this broken row Regards Pavel Stehule 2008/5/14 <[EMAIL PROTECTED]>: > Hello, > > I wish some could help me on this. > > I got a table which has 100500 records, when I try to query this particular > record > > select * from cs_sr_mthly_rtn where mthly_rtn_id = 61609; > Postgres crash, and show this errors: > > server closed the connection unexpectedly > This probably means the server terminated abnormally > before or while processing the request. > > And in the serverlog, it shows: > > LOG: server process (PID 395) was terminated by signal 11 > LOG: terminating any other active server processes > FATAL: the database system is in recovery mode > LOG: all server processes terminated; reinitializing > LOG: database system was interrupted at 2008-05-14 12:52:15 BST > LOG: checkpoint record is at 0/48DEC034 > LOG: redo record is at 0/48DEC034; undo record is at 0/0; shutdown TRUE > LOG: next transaction ID: 0/705241; next OID: 49152 > LOG: next MultiXactId: 1; next MultiXactOffset: 0 > LOG: database system was not properly shut down; automatic recovery in > progress > LOG: redo starts at 0/48DEC07C > LOG: unexpected pageaddr 0/433AC000 in log file 0, segment 74, offset > 3850240 > LOG: redo done at 0/4A3ABFC0 > LOG: database system is ready > > Because of this, I'm not able to do pg_dump backup anymore. > > Anyone can help me to solve this problem? > > > Thanks > louis
Re: [GENERAL] postgres crash when select a record
Hello Pavel, Thanks for the info, this is very great help. Regard Louis - Original Message From: Pavel Stehule <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Cc: Postgres Sent: Thursday, May 15, 2008 4:43:36 AM Subject: Re: [GENERAL] postgres crash when select a record Hello 2008/5/14 <[EMAIL PROTECTED]>: > Hello, > > Yes, I deleted that record, and now my backup is working fine! Thanks > > But why this problem came out? Is there anyway to trace it down and how it > happen? Or is it a bug of postgres? > I can't to eliminate PostgreSQL bug, but sometimes this problem signalize hw problems. You can search in archive similar cases. Postgres crashes, because stored row are in broken format. There are some projects that would solve it better - http://svana.org/kleptog/pgsql/pgfsck.html Regards Pavel Stehule > Regards > Louis > > - Original Message > From: Pavel Stehule <[EMAIL PROTECTED]> > To: [EMAIL PROTECTED] > Cc: Postgres > Sent: Wednesday, May 14, 2008 6:13:05 PM > Subject: Re: [GENERAL] postgres crash when select a record > > Hello > > Delete this broken row > > Regards > Pavel Stehule > > 2008/5/14 <[EMAIL PROTECTED]>: >> Hello, >> >> I wish some could help me on this. >> >> I got a table which has 100500 records, when I try to query this >> particular >> record >> >> select * from cs_sr_mthly_rtn where mthly_rtn_id = 61609; >> Postgres crash, and show this errors: >> >> server closed the connection unexpectedly >> This probably means the server terminated abnormally >> before or while processing the request. 
>> >> And in the serverlog, it shows: >> >> LOG: server process (PID 395) was terminated by signal 11 >> LOG: terminating any other active server processes >> FATAL: the database system is in recovery mode >> LOG: all server processes terminated; reinitializing >> LOG: database system was interrupted at 2008-05-14 12:52:15 BST >> LOG: checkpoint record is at 0/48DEC034 >> LOG: redo record is at 0/48DEC034; undo record is at 0/0; shutdown TRUE >> LOG: next transaction ID: 0/705241; next OID: 49152 >> LOG: next MultiXactId: 1; next MultiXactOffset: 0 >> LOG: database system was not properly shut down; automatic recovery in >> progress >> LOG: redo starts at 0/48DEC07C >> LOG: unexpected pageaddr 0/433AC000 in log file 0, segment 74, offset >> 3850240 >> LOG: redo done at 0/4A3ABFC0 >> LOG: database system is ready >> >> Because of this, I'm not able to do pg_dump backup anymore. >> >> Anyone can help me to solve this problem? >> >> >> Thanks >> louis >
[GENERAL] Converting each item in array to a query result row
Hi, I'm writing a query against a system catalog (pg_proc) that contains 2 fields of an array type. Ideally, I'd like to select 1 row from the table, but return a query row for each item in the array. For example, if one row contains the array {"a", "b", "c"} I'd like the query to return 3 rows, one for each of these elements. Any idea if this is possible? Thanks.
Re: [GENERAL] Converting each item in array to a query result row
Thanks for all the replies. I'm going to post the results of using the recommended approach in another thread. On Fri, May 29, 2009 at 1:18 PM, Adam Ruth wrote: > Good point, I should have specified 8.3.7. > > Just one more reason to anxiously anticipate upgrading to 8.4. > > > > On 30/05/2009, at 2:56 AM, Tom Lane wrote: > >> Adam Ruth writes: >>> >>> Always test your performance assumptions. The plpgsql function is >>> faster than the sql function, a lot faster on smaller arrays. >> >> And, of course, it also pays to be precise about what you're testing >> and on what. Set-returning SQL functions got a lot faster in 8.4. >> Using CVS HEAD on a not-very-fast machine, I get these timings for >> the attached script (1 loop iterations in all cases) >> >> 10 elements 100 elements 1000 elements >> >> built-in unnest 2.44 6.52 47.96 >> SQL function 2.52 6.50 46.71 >> plpgsql function 3.63 12.47 101.68 >> >> So at least in this specific test condition, there's not much >> perceptible difference between the SQL function and the builtin, >> while plpgsql lags behind. >> >> regards, tom lane >> >> >> create or replace function testit(n int, l int) returns float8 as $$ >> declare arr int[]; >> st timestamptz; >> et timestamptz; >> begin >> arr := '{}'; >> for i in 1 .. n loop >> arr[i] = i; >> end loop; >> st := clock_timestamp(); >> for i in 1 .. l loop >> perform count(*) from unnest(arr); -- or unnest_sql or unnest_plpgsql >> end loop; >> et := clock_timestamp(); >> return extract(epoch from et - st); >> end $$ language plpgsql; >> >> CREATE or replace FUNCTION unnest_sql(anyarray) RETURNS SETOF anyelement >> AS >> $_$ >> SELECT ($1)[i] FROM generate_series(array_lower($1,1),array_upper($1,1)) >> i; >> $_$ >> LANGUAGE sql IMMUTABLE; >> >> create or replace function unnest_plpgsql(_a anyarray) returns setof >> anyelement as $$ >> begin >> for i in array_lower(_a,1) .. 
array_upper(_a,1) loop >> return next _a[i]; >> end loop; >> return; >> end; >> $$ language plpgsql strict immutable; > > -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
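To summarize the thread for archive readers: from 8.4 onward the built-in unnest() answers the original question directly, and on 8.3 the generate_series-based SQL function quoted above is the usual substitute:

```sql
-- 8.4+: one result row per array element
SELECT unnest(ARRAY['a', 'b', 'c']) AS element;
--  element
-- ---------
--  a
--  b
--  c
-- (3 rows)
```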
[GENERAL] SQL to return all function parameters- its working
Based on replies to another post (recommending use of 'generate_series'), I was able to write the following query that returns all parameters of a given function. Only one interesting thing to note- in order to return the proper argument type, I had to use proargtypes[i - 1] when I expected this to work: proargtypes[i] Any feedback would be appreciated... SELECT n.nspname AS name_space, p.proname AS function_name, p.oid AS function_oid, t.typname AS rettype, p.prosrc AS body, argument, argument_type FROM pg_proc p INNER JOIN (SELECT oid, proargnames[i] AS argument, proargtypes[i-1] AS argument_type FROM (SELECT oid, proargnames, proargtypes, generate_series(1, array_upper(proargnames,1)) AS i FROM pg_proc) s ) arg ON p.oid = arg.oid INNER JOIN pg_namespace n ON p.pronamespace = n.oid LEFT OUTER JOIN pg_type t ON t.oid = p.prorettype -- WHERE p.proname = 'func_name'
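A likely explanation for the off-by-one the poster noticed: proargnames is an ordinary text[] with lower bound 1, while proargtypes is an oidvector, whose subscripts start at 0 - hence proargnames[i] pairs with proargtypes[i-1]. A trimmed sketch of the same idea that also resolves the type OID to a readable name ('func_name' remains a hypothetical function name):

```sql
-- proargtypes is a 0-based oidvector, proargnames a 1-based text[];
-- format_type() renders the argument's type OID as a readable name.
SELECT proname,
       proargnames[i]                      AS argument,
       format_type(proargtypes[i-1], NULL) AS argument_type
FROM  (SELECT proname, proargnames, proargtypes,
              generate_series(1, array_upper(proargnames, 1)) AS i
       FROM   pg_proc) s
WHERE  proname = 'func_name';   -- hypothetical
```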
[GENERAL] How to get the size of non fixed-length field from system catalog ?
Hi, I'm writing a small ORM tool and have written a number of queries to retrieve table metadata. One piece of data that I'm having trouble hunting down is the size of a CHAR field. For example, one table has a 'user_id' column of type CHAR(36). But when I look at the pg_attribute and pg_type tables, I can't seem to find this size value of 36. Can anyone share the SQL that returns the size of a CHAR? It is NOT the 'typlen' column. The answer may be the 'typelem' column, but I can't find details on how to decode it. Thanks. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
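For archive readers: the declared length of a char/varchar column is not in pg_type at all; it is stored per column in pg_attribute.atttypmod (as the length plus a 4-byte header in these versions). The portable route is information_schema. A sketch, with the table name 'users' assumed for the user_id example in the question:

```sql
-- Portable: information_schema exposes the declared length directly.
SELECT column_name, character_maximum_length
FROM   information_schema.columns
WHERE  table_name = 'users' AND column_name = 'user_id';

-- Catalog-level equivalent: atttypmod = declared length + 4 for char/varchar,
-- so a CHAR(36) column has atttypmod = 40.
SELECT a.attname, a.atttypmod - 4 AS declared_length
FROM   pg_attribute a
JOIN   pg_class c ON c.oid = a.attrelid
WHERE  c.relname = 'users' AND a.attname = 'user_id';
```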
[GENERAL] aliases for sequences and other DB objects?
Hi! Is it possible in Postgres to specify aliases for various DB objects, such as tables, views, sequences, etc.? (For now I would like to create them for sequences.) What I mean is something like in Oracle, such as create alias my_sequence_alias for my_sequence; select nextval('my_sequence_alias'); Thanks! Agoston
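For readers of the archive: PostgreSQL has no CREATE ALIAS statement. For tables the usual substitute is a view; for a sequence, a thin wrapper function gives callers a stable name. A sketch, reusing the hypothetical names from the question:

```sql
-- Callers use the wrapper's name; only this one function needs changing
-- if the underlying sequence is ever renamed.
CREATE FUNCTION my_sequence_alias_nextval() RETURNS bigint AS $$
    SELECT nextval('my_sequence');
$$ LANGUAGE sql;

SELECT my_sequence_alias_nextval();
```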
[GENERAL] Maintaining user roles and permissions in Postgres - general question
Does anyone have a recommendation for maintaining user permissions on a changing database? The lack of an option to grant specific rights to all objects of a given type within a Postgres db obviously places the burden on the administrator to keep roles updated as objects are added and dropped from a given database. Unfortunately for us, we don't have a dedicated db admin, so this task falls into the hands of developers who are probably less adept at this kind of task ;) Is there a utility or set of scripts out there that helps a db owner with permissions admin? Thanks
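One common stopgap in the versions discussed here (before GRANT ... ON ALL TABLES IN SCHEMA arrived in 9.0) is to generate the GRANT statements from the catalogs and re-run the script whenever objects change. A sketch, with the role name hypothetical:

```sql
-- Emits one GRANT per table in the public schema; feed the output back
-- into psql to apply it. quote_ident() protects mixed-case names.
SELECT 'GRANT SELECT ON '
       || quote_ident(schemaname) || '.' || quote_ident(tablename)
       || ' TO readonly_role;'
FROM   pg_tables
WHERE  schemaname = 'public';
```

The same pattern works against pg_proc for functions and pg_class (relkind = 'S') for sequences.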
[GENERAL] Schema search_path and $user
In the docs, I see repeated references to $user in the postgresql.conf schema search_path setting such as: search_path = '"$user",public' But I don't see any info on the meaning of '$user' here. Is $user some kind of variable within postgresql.conf that refers to the current user? Can it be replaced with a real group name to control schema search_paths for specific groups/users? Thanks. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Schema search_path and $user
Thanks for the link, I wasn't reading the right page(s) in the documentation. On Mon, Jul 6, 2009 at 12:19 PM, Tom Lane wrote: > Postgres User writes: >> In the docs, I see repeated references to $user in the postgresql.conf >> schema search_path setting such as: > >> search_path = '"$user",public' > >> But I don't see any info on the meaning of '$user' here. > > I guess you didn't read the actual documentation of search_path: > http://www.postgresql.org/docs/8.3/static/runtime-config-client.html#GUC-SEARCH-PATH > It says > > The value for search_path has to be a comma-separated list of schema > names. If one of the list items is the special value $user, then the > schema having the name returned by SESSION_USER is substituted, if there > is such a schema. (If not, $user is ignored.) > > regards, tom lane > -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
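To the second half of the original question - controlling the path for specific users or groups - the path can indeed be pinned per role rather than relying on $user:

```sql
-- Overrides postgresql.conf's search_path for this role only; takes
-- effect on the role's next session. Role and schema names hypothetical.
ALTER ROLE webuser SET search_path = app_schema, public;
```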
[GENERAL] Alternative to temp tables?
Hi, I have a simple function that returns a set of rows: CREATE OR REPLACE FUNCTION foo() RETURNS SETOF record AS $$ BEGIN RETURN QUERY SELECT * FROM people WHERE last_name = 'jones'; END $$ LANGUAGE 'plpgsql' In a separate function, I call the function and store the results in a temp table using this syntax: INSERT INTO tmp_tbl SELECT * FROM foo() This works, but I'd like to know if there's another way to hold the results. Can I get the results from foo() and store those in a local var such as recs record[] OR recs people[] Or are temp tables the only way to hold table-based results? -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
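Depending on version, there is an in-memory alternative to the temp table: 8.3+ allows arrays of a table's row type, and on 8.4+ array_agg() fills one directly. A sketch, reusing the 'people' table from the question (the wrapper function name is hypothetical):

```sql
CREATE OR REPLACE FUNCTION count_jones() RETURNS integer AS $$
DECLARE
    recs people[];                 -- array of the table's row type (8.3+)
BEGIN
    SELECT array_agg(p) INTO recs  -- array_agg() is built in from 8.4
    FROM   people p
    WHERE  last_name = 'jones';
    -- individual rows are then reachable as recs[1], recs[2], ...
    RETURN coalesce(array_upper(recs, 1), 0);
END;
$$ LANGUAGE plpgsql;
```

This suits modest result sets held briefly; a temp table remains the better fit for large results or anything other backends must see.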
[GENERAL] Using Insert - Default in a condition expression ??
Hi, I'm trying to write an INSERT INTO statement that will use a DEFAULT value when an input parameter is null. Here's the function that fails to compile. I tried replacing Coalesce with a Case statement but that fails as well. Note that if you replace the condition with a simple 'Default' it compiles successfully. Any ideas? CREATE OR REPLACE FUNCTION "name_add" ( p_name varchar, p_created_date date ) RETURNS integer AS $body$ DECLARE BEGIN INSERT INTO names ( name, created_date ) VALUES ( p_name, Coalesce(p_created_date, DEFAULT) ); Return 1; END ; $body$ LANGUAGE 'plpgsql' VOLATILE CALLED ON NULL INPUT SECURITY INVOKER; -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Using Insert - Default in a condition expression ??
>> I'm trying to write an INSERT INTO statement that will use a DEFAULT >> value when an input parameter is null. >> > Neither of my 2 methods are pretty. > 1) Use a trigger. > 2) Grab and cast the default value from the information_schema.columns > view and plug it in. > > Another option is to build your insert sql as a string and then execute it. > > Sim > Thanks... those approaches came to mind. But I need something more flexible as the Insert functions are generated by an ORM layer. It has to be simpler. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Using Insert - Default in a condition expression ??
>> > >> > I'm trying to write an INSERT INTO statement that will use a DEFAULT >> > value when an input parameter is null. >> > >> Neither of my 2 methods are pretty. >> 1) Use a trigger. >> 2) Grab and cast the default value from the information_schema.columns >> view and plug it in. >> >> Another option is to build your insert sql as a string and then execute it. > > Another option: > > rewrite your function: > > CREATE OR REPLACE FUNCTION "name_add" (p_name varchar, p_created_date > date) RETURNS integer AS $$ BEGIN case when $2 is null then INSERT INTO > names(name, created_date) values (p_name, default); else insert into > names values ($1, $2); end case; return 1; end; $$ language plpgsql; > > Andreas Kretschmer Again, this approach works for a simple example. But for a larger function with any number of input params and multiple columns with default values, it's not practical for an ORM code generator. -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
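For a code generator, the remaining practical route is dynamic SQL: splice the literal keyword DEFAULT into the statement text wherever the parameter is NULL. A sketch of a generated body (quote_nullable() requires 8.4+; on earlier versions coalesce(quote_literal(...), 'NULL') does the same job):

```sql
CREATE OR REPLACE FUNCTION name_add(p_name varchar, p_created_date date)
RETURNS integer AS $body$
BEGIN
    -- A generator can emit this pattern for any number of defaultable
    -- columns: each value is either a quoted literal or the word DEFAULT.
    EXECUTE 'INSERT INTO names (name, created_date) VALUES ('
            || quote_nullable(p_name) || ', '
            || coalesce(quote_literal(p_created_date::text), 'DEFAULT')
            || ')';
    RETURN 1;
END;
$body$ LANGUAGE plpgsql;
```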
[GENERAL] Composite types and tables - when to use?
Browsing the docs last night, I realized that I've never taken advantage of Postgres' powerful composite types. But a question came to mind- in what scenarios should you use a composite type in a table structure? That is, I can see the benefits of a composite type insofar as it essentially lets you add virtual tuples to a table without having to alter the table structure to add new fields. Instead you can simply extend the composite type. But why take this approach? http://www.postgresql.org/docs/8.4/static/rowtypes.html
Re: [GENERAL] interface for "non-SQL people"
EMS SQL Manager has a visual query builder, but it's a commercial product, i.e. it ain't free. On Tue, Oct 6, 2009 at 12:47 PM, pere roca wrote: > > hi, > is there some nice tool out there to let people who don't know SQL construct their queries? I'm using pgAdmin III, but I know some SQL. > Is there no option other than building an HTML page with forms, drop-down menus...? > > thanks, > pERE
[GENERAL] UDP Overflow / UDP Drops on Standby Postgres Service
Postgres General List, I am stumped trying to prevent an overflowing UDP buffer on a standby Postgres service. Any help would be most appreciated. Essentially, a UDP buffer associated with the pg_standby process on my localhost interface gradually fills up once I start Postgres until it hits its maximum capacity, and then proceeds to steadily drop packets. A restart of Postgres (of course) clears the buffer, but then it begins filling up again. As far as I can tell, this is not actually causing any problems. (It is only happening to the standby service, and failover data recovery shows nothing missing.) Nevertheless, I don't want any buffers to overflow. (I have also posted this question to ServerFault (http://serverfault.com/questions/564905/udp-overflow-udp-drops-on-standby-postgres-service). That posting has even more detail than I have provided below, such as how I identified pg_standby by querying the /proc files.)

==Salient points==:
a) by querying /proc information for UDP I can see non-empty buffers, and identify the pg_standby process as the culprit
b) the overflow occurs even when my firewalls on both servers (iptables) are shut down
c) my UDP buffers at 16MB+ seem more than big enough; I could make them larger, but that would only mask the problem
d) online discussions of similar problems seem to finger either older versions of Postgres or the statistics collector; to rule this out I have tried to turn off all statistics collection (track_activities / track_counts = off), but the problem continues
e) a verbose wire sniff of the UDP packets shows nothing useful
f) there is not a great deal of database activity (e.g. roughly one 16MB WAL file is replicated from the primary to the secondary service every 45 minutes)
g) I formerly ran Postgres 8.3.5, with an otherwise identical setup; this problem only began when I upgraded to 9.1.9

==Background on my setup==:
-- two CentOS 6.4 x86_64 systems (VMs), each running Postgres 9.1.9, each in a geographically separated (<50 miles) datacenter
-- Postgres is active on my primary server and running in standby mode on my backup; the backup Postgres service receives its data two ways:
-- as a warm standby processing WAL files via log shipping
-- on failover, the current WAL file on the primary (not yet shipped) is recovered from a DRBD partition synced from the primary box
-- nothing else (of consequence) runs on these boxes except Postgres

Thanks, Daniel
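The /proc query mentioned in point (a) can be sketched as a small parser over /proc/net/udp, whose columns follow the standard Linux layout (hex-encoded addresses and queue sizes, with the drops counter as the last field). The sample line below is fabricated for illustration, showing a socket on 127.0.0.1 with a 16 MB receive backlog:

```python
def parse_udp_line(line):
    """Decode one data line of /proc/net/udp: return the local address,
    rx_queue bytes (unread data sitting in the kernel receive buffer),
    and the drops counter (last column). All numeric fields in this file
    are hexadecimal except drops, which is decimal."""
    fields = line.split()
    ip_hex, port_hex = fields[1].split(":")
    # local_address is little-endian hex; decode to a dotted quad
    ip = ".".join(str(int(ip_hex[i:i + 2], 16)) for i in (6, 4, 2, 0))
    port = int(port_hex, 16)
    rx_queue = int(fields[4].split(":")[1], 16)  # tx_queue:rx_queue pair
    drops = int(fields[-1])
    return ip, port, rx_queue, drops

# Fabricated sample in the /proc/net/udp layout: a socket bound to
# 127.0.0.1:51342 with a 16 MB rx backlog and 1234 dropped datagrams.
sample = ("  6: 0100007F:C88E 00000000:0000 07 00000000:01000000 "
          "00:00000000 00000000    26        0 12345 2 ffff880037841740 1234")
ip, port, rx_queue, drops = parse_udp_line(sample)
# -> ("127.0.0.1", 51342, 16777216, 1234)
```

A steadily growing rx_queue with a nonzero drops column is exactly the symptom described above: the owning process is simply never reading the socket.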
[GENERAL] Transport Compression (whatever became of that discussion?)
In the discussion linked below there was a lot of talk of transport compression in Postgres (I'm also specifically wondering about JDBC); did anything ever come of it? http://postgresql.1045698.n5.nabble.com/Compression-on-SSL-links-td2261205.html
[GENERAL] pg_dump on Hot standby : clarification on how to
I would just like to get some clarification from the list on how to do a pg_dump on the slave in the face of "canceling statement due to conflict with recovery". The link below seems to indicate that if I start an idle transaction on the master I should be able to do the pg_dump, but I tried this in psql on the master ("start transaction") and was still unable to do a pg_dump on the slave at the same time. Is there something special about using dblink that would make this all work? > http://postgresql.1045698.n5.nabble.com/Hot-Standby-ERROR-canceling-statement-due-to-conflict-with-recovery-td3402417.html One solution is to begin idle transactions on the master by using e.g. dblink from the *standby* to the master before you start *pg_dump* on the *standby* and end them after *pg_dump* (or whatever) is finished.
Re: [GENERAL] pg_dump on Hot standby : clarification on how to
What I mean is: if I do pg_dump on the slave I get "ERROR: canceling statement due to conflict with recovery". So I googled and tried the solution listed in the linked thread. I did a "start transaction" via psql on the master, but I continued to get the error. Wondered if there was more to it than that. On Thu, May 12, 2011 at 5:08 PM, Andrew Sullivan wrote: > On Thu, May 12, 2011 at 11:26:38AM -0700, bubba postgres wrote: > > I would just like to get some clarification from the list on how to do a > > pg_dump on the slave in the face of "canceling statement due to conflict > > with recovery". > > The following links seem to indicate that If I start an idle transaction > on > > the master I should be able to do the pg_dump, but I tried this in psql > on > > the master "start transaction", and was still unable to do a pg_dump on > the > > slave at the same time. > > Is there something special about using dblink that would make this all > work? > > Could you define what you mean by "unable to do pg_dump on the slave"? > > I don't see why dblink would be the special thing. I think what you > want is to hold a transaction open on the master so that the WAL can't > get recycled. At least, that's what I understood from the post. I > haven't actually tried it yet, but to me it sounded like it ought to > work. > > A > > -- > Andrew Sullivan > a...@crankycanuck.ca
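As an aside not raised in the thread: recovery-conflict cancellations are also governed by standby-side settings in postgresql.conf. A sketch of the relevant entries (parameter names are real; hot_standby_feedback requires 9.1 or later, and the values here are purely illustrative):

```
# postgresql.conf on the standby
max_standby_streaming_delay = 600s   # let queries delay WAL apply (streaming) up to 10 min
max_standby_archive_delay   = 600s   # same, for WAL restored from the archive
hot_standby_feedback = on            # 9.1+: tell the master which rows standby queries still need
```

Raising the delay settings gives a long-running pg_dump room to finish at the cost of letting the standby fall behind; hot_standby_feedback instead prevents the master from vacuuming away rows the standby is still reading.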
[GENERAL] General Postgres performance tips when using ARRAY
So, what are the gotchas around manipulating arrays in stored procs? It seems reasonable that an array_cat etc. would cause the creation of a new array, but does mutating an existing array also create a copy?
[GENERAL] Why are IDLE connections using cpu according to TOP.
I have an overloaded DB and I see several IDLE connections that are using significant CPU (not "idle in transaction"). Why would an idle process be eating so much CPU? Or is it not actually idle? Here is an example from pg_top:

last pid: 11821; load avg: 6.11, 6.32, 7.64; up 1+21:05:31
50 processes: 3 running, 42 sleeping, 5 uninterruptable
CPU states: 21.7% user, 0.0% nice, 7.8% system, 46.9% idle, 23.6% iowait
Memory: 29G used, 149M free, 13M buffers, 27G cached
Swap:

  PID USERNAME PRI NICE  SIZE   RES STATE  TIME  WCPU    CPU     COMMAND
 4779 postgres  20    0 4383M  573M disk   3:16  4.79%  39.42%  postgres: gpup gpup 10.202.99.5(46391) UPDATE
11591 postgres  20    0 4383M  108M sleep  0:12  2.08%  19.61%  postgres: gpup gpup 10.202.99.6(52459) idle
 4191 postgres  20    0 4384M  709M sleep  4:33  2.50%  19.41%  postgres: gpup gpup 10.202.99.6(42288) idle
10942 postgres  20    0 4383M  242M sleep  0:42  5.08%  16.86%  postgres: gpup gpup 10.202.99.5(58373) idle
10930 postgres  20    0 4390M  281M sleep  0:43  1.62%  15.30%  postgres: gpup gpup 10.202.99.6(52273) idle
11571 postgres  20    0 4390M  210M run    0:25  4.32%  14.51%  postgres: gpup gpup 10.202.99.6(52455) SELECT
11533 postgres  20    0 4383M  109M run    0:14  2.31%  12.75%  postgres: gpup gpup 10.202.99.6(52453) SELECT
 7494 postgres  20    0 4384M 1611M disk   2:31  2.44%  12.35%  postgres: gpup gpup 10.202.99.6(53620) SELECT
[GENERAL] Are check constraints always evaluated on UPDATE?
Are there any optimizations around check constraints such that they will not be evaluated if constituent columns are not updated? Regards, -JD
[GENERAL] Odd performance difference in check constraint : SQL(slow) vs plpgsql(fast)
This is the reverse of what I thought I would find. In short, my check constraint extracts the epoch from a start timestamp and an end timestamp to get the difference in seconds. It then uses this number to check the array_upper() of an array to make sure it's the proper size. The SQL version uses a CASE statement, and the plpgsql version uses an IF/ELSE. In a particular insert test, the plpgsql version adds 1 second over the no-constraints case; the SQL version adds 10 seconds over the no-constraints case. Why would this be?

--->

CREATE OR REPLACE FUNCTION check_end_time_foo(
    _start_time TIMESTAMP,
    _end_time TIMESTAMP,
    _granularity SMALLINT,
    _values DOUBLE PRECISION[]
) RETURNS boolean AS $$
BEGIN
  IF ( _granularity = 5 ) THEN
    RETURN( EXTRACT( EPOCH FROM _end_time at time zone 'utc' )::BIGINT
          - EXTRACT( EPOCH FROM _start_time at time zone 'utc' )::BIGINT
          = 60 * array_upper( _values, 1 ) );
  ELSEIF ( _granularity = 7 ) THEN
    RETURN( EXTRACT( EPOCH FROM _end_time at time zone 'utc' )::BIGINT
          - EXTRACT( EPOCH FROM _start_time at time zone 'utc' )::BIGINT
          = 900 * array_upper( _values, 1 ) );
  ELSEIF ( _granularity = 9 ) THEN
    RETURN( EXTRACT( EPOCH FROM _end_time at time zone 'utc' )::BIGINT
          - EXTRACT( EPOCH FROM _start_time at time zone 'utc' )::BIGINT
          = 3600 * array_upper( _values, 1 ) );
  ELSEIF ( _granularity = 12 ) THEN
    RETURN( ( (EXTRACT( YEAR FROM (_end_time at time zone 'utc') at time zone 'utc' )::INT * 12)
            + EXTRACT( MONTH FROM (_end_time at time zone 'utc') at time zone 'utc' )::INT )
          - ( (EXTRACT( YEAR FROM (_start_time at time zone 'utc') at time zone 'utc' )::INT * 12)
            + EXTRACT( MONTH FROM (_start_time at time zone 'utc') at time zone 'utc' )::INT )
          = array_upper( _values, 1 ) );
  END IF;
END;
$$ LANGUAGE plpgsql IMMUTABLE;

ALTER TABLE timeseries ADD CONSTRAINT timeseries_valid_end_time
  CHECK( check_end_time_foo( series_start_time, series_end_time, granularity, data_value ) );

-vs-

ALTER TABLE timeseries ADD CONSTRAINT timeseries_valid_end_time
  CHECK( CASE
    WHEN granularity = 5 THEN
      EXTRACT( EPOCH FROM series_end_time at time zone 'utc' )::BIGINT
      - EXTRACT( EPOCH FROM series_start_time at time zone 'utc' )::BIGINT
      = 60 * array_upper( data_value, 1 )
    WHEN granularity = 7 THEN
      EXTRACT( EPOCH FROM series_end_time at time zone 'utc' )::BIGINT
      - EXTRACT( EPOCH FROM series_start_time at time zone 'utc' )::BIGINT
      = 900 * array_upper( data_value, 1 )
    WHEN granularity = 9 THEN
      EXTRACT( EPOCH FROM series_end_time at time zone 'utc' )::BIGINT
      - EXTRACT( EPOCH FROM series_start_time at time zone 'utc' )::BIGINT
      = 3600 * array_upper( data_value, 1 )
    WHEN granularity = 12 THEN
      ( (EXTRACT( YEAR FROM (series_end_time at time zone 'utc') at time zone 'utc' )::INT * 12)
      + EXTRACT( MONTH FROM (series_end_time at time zone 'utc') at time zone 'utc' )::INT )
      - ( (EXTRACT( YEAR FROM (series_start_time at time zone 'utc') at time zone 'utc' )::INT * 12)
      + EXTRACT( MONTH FROM (series_start_time at time zone 'utc') at time zone 'utc' )::INT )
      = array_upper( data_value, 1 )
    ELSE false
  END );