Re: [GENERAL] recovery from out of disk space
2010/10/20 Alexander B :
> I use PostGre 8.3 on Ubuntu.
>
> I realized last Friday that my /opt hard disk was full. The disk was full
> about 2-4 hours of moderate activity. The disk ran out of space for reasons
> outside of postgres
> I'm really at a loss of what to do. Does anyone have suggestions for what
> my next step should be. T
>
> More details:
>
> When I run postgress...Postgres seems to come up...However, it is completely
> unresponsive to my application. Everything just hangs.
>
> postgres 7364 0.0 1.5 1089056 27164 ? S 09:35 0:00
> /opt/PostgreSQL/8.3/bin/postgres -D /opt/PostgreSQL/8.3/data
> postgres 7365 0.0 0.0 10700 984 ? Ss 09:35 0:00 postgres:
> logger process
> postgres 7367 0.0 0.4 1089572 7244 ? Ss 09:35 0:00 postgres:
> writer process
> postgres 7368 0.0 0.0 1089572 1056 ? Ss 09:35 0:00 postgres:
> wal writer process
> postgres 7369 0.0 0.0 1089712 1348 ? Ss 09:35 0:00 postgres:
> autovacuum launcher process
> postgres 7370 0.0 0.0 10828 1236 ? Ss 09:35 0:00 postgres:
> stats collector process
>
> I also tried to run pg_dump to make a backup and that failed with the
> following message:
>
> (reverse-i-search)`./.': cd ../../
> r...@domu-12-31-39-0f-44-82:/opt/PostgreSQL/8.3/bin# p -v -c -f
> /opt/backup/bb.txt -U nuxeo nuxeodb
> pg_dump: reading schemas
> pg_dump: reading user-defined functions
> pg_dump: reading user-defined types
> pg_dump: reading procedural languages
> pg_dump: reading user-defined aggregate functions
> pg_dump: reading user-defined operators
> pg_dump: reading user-defined operator classes
> pg_dump: reading user-defined text search parsers
> pg_dump: reading user-defined text search templates
> pg_dump: reading user-defined text search dictionaries
> pg_dump: reading user-defined text search configurations
> pg_dump: reading user-defined operator families
> pg_dump: reading user-defined conversions
> pg_dump: reading user-defined tables
> pg_dump: SQL command failed
> pg_dump: Error message from server: ERROR: permission denied for relation
> totalcount
> pg_dump: The command was: LOCK TABLE public.totalcount IN ACCESS SHARE MODE
> pg_dump: *** aborted because of error
>

Maybe you are not a superuser and do not have access to that table? Try to dump the DB as the 'postgres' user instead of the nuxeo one.

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
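A minimal sketch of the two workarounds implied above, once some disk space has been freed (paths, database and role names are taken from the original report; the GRANT assumes you can connect as a superuser or the table's owner):

  # retry the dump as the postgres superuser instead of the nuxeo role
  ./pg_dump -v -c -f /opt/backup/bb.txt -U postgres nuxeodb

  # or keep dumping as nuxeo, after granting it read access to the table it could not lock
  psql -U postgres -d nuxeodb -c "GRANT SELECT ON public.totalcount TO nuxeo;"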
Re: [GENERAL] PHP PDO->bindValue() vs row execute speed difference
2010/11/1 Georgi Ivanov :
> Hi,
> I have following situation:
> $q = 'select * from tbl1 where id = :id';
> $stmt = $dbh->prepare($q);
> $stmt->bindValue(':id', $id , PDO::PARAM_INT);
> $stmt->execute();
> //1000 ms
> and
>
> $q1 = ' select * from tbl1 where id = 100 ';
> $stmt = $dbh->prepare($q);
> //NO binding here !
> $stmt->execute();
> //2 ms
>
> The queries are a bit more complex, but this is enough to get the idea.
>
> So the first query runs for about 1000 ms
>
> The second query( w/o binding) runs for about 2 ms.
> If I'm correct, the first query is interpreted as : select * from tbl1 where
> id = (INT )
> and I don't get good execution plan.
>
> The second one is fast, because the DB see the literal 100 as value for ID
> and makes a better execution plan.
> Am I correct in my thoughts ?

Yes. But usually there is no trouble with a plain PK lookup and the planner should use the index. You can give it a try in psql: 'PREPARE foo ... ; EXPLAIN EXECUTE foo(100);' vs 'EXPLAIN SELECT ... WHERE id = 100'.

> Is there anything I can do in tuning the DB or just to skip/rewrite PHP PDO
> ?

Once you have the explain output for the named prepared statement, you'll know.

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
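A sketch of that comparison in psql, assuming a table tbl1 with an integer column id (names follow the example above):

  -- plan built for a generic parameter, as with a server-side prepared statement
  PREPARE foo (int) AS SELECT * FROM tbl1 WHERE id = $1;
  EXPLAIN EXECUTE foo(100);

  -- plan built for the literal value
  EXPLAIN SELECT * FROM tbl1 WHERE id = 100;

  DEALLOCATE foo;

If the two plans differ (for example a sequential scan versus an index scan), the generic plan is the problem; if they are identical, the slowdown comes from somewhere else.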
Re: [GENERAL] Why so many xlogs?
see graph which plots number of wal segments in the last week > http://depesz.com/various/bad-wal.jpg > > it virtually never goes below 215, and it spikes to 270-300. > > In here: http://www.depesz.com/various/bad-wal.log.gz is log from my test > script since 20th of october. > > Any ideas why number of segments is higher than expected? > > Just so that I am clear: I do not want to lower it by changing > checkpoint_segments. I'm looking for information/enlightenment about why > it works the way it works, and what could be possibly wrong. > > Best regards, > > depesz > > -- > Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/ > jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007 > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Why so many xlogs?
2010/11/1 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 07:22:50PM +0100, Cédric Villemain wrote:
>> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
>> > checkpoint_segments ).
>> (2 + checkpoint_completion_target) * checkpoint_segments + 1
>> => 291
>
> this is formula gave to me by rhodiumtoad on irc, but we tested with lower
> checkpoint_completion_target and it didn't change *anything*.

You'll have between 211 and 291 files for a checkpoint_completion_target between 0.1 and 0.9. Did you see more files than the number given by that formula during your tests?

>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Why so many xlogs?
2010/11/1 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 07:18:22PM +0100, Filip Rembiałkowski wrote:
>> 2010/11/1 hubert depesz lubaczewski :
>>
>> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
>> > checkpoint_segments ).
>>
>> why?
>>
>> for a server overloaded with R/W transactions, it's possible to go beyond
>> this.
>> checkpoints just do not keep up.
>> right now I have an 8.3 with checkpoint_segments=3, constantly running
>> pgbench and I see 8 WAL segments.
>
> you will notice in the logs that the system doesn't look like very
> loaded.
> i mean - there is fair amount of work, but nothing even resembling
> "overloaded".

Some checkpoints occur more frequently than perhaps expected (less than 15 minutes apart)... The checkpoint log lines might be useful here. Still, I wonder what your question is, exactly?

>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Why so many xlogs?
2010/11/1 hubert depesz lubaczewski :
> On Mon, Nov 01, 2010 at 08:13:49PM +0100, Cédric Villemain wrote:
>> 2010/11/1 hubert depesz lubaczewski :
>> > On Mon, Nov 01, 2010 at 07:22:50PM +0100, Cédric Villemain wrote:
>> >> > as I understand, max number of xlog files in pg_xlog should be ( 1 + 2 *
>> >> > checkpoint_segments ).
>> >> (2 + checkpoint_completion_target) * checkpoint_segments + 1
>> >> => 291
>> >
>> > this is formula gave to me by rhodiumtoad on irc, but we tested with lower
>> > checkpoint_completion_target and it didn't change *anything*.
>>
>> You'll have between 211 and 291 files for a checkpoint_completion_target
>> between 0.1 and 0.9. Did you see more files than the number given by
>> that formula during your tests?
>
> yes. we decreased checkpoint_completion_target to 0.5, and the numbers
> of xlog segments *did not change*.

It should stick at a maximum of 3 * checkpoint_segments + 1; if it exceeds that, the extra files are removed afterwards.

> also - can you explain why "fraction of total time" (time!) would
> directly relate to number of xlog files existing in pg_xlog? I mean -
> you're not the first person to suggest it, but I don't see any way that
> these two could be related.

My guess is that while your checkpoint is made longer by this factor (X%), the number of WAL files needed may be multiplied by the same ratio (1+X%), to cover the extra files created while the checkpoint is still running.

>
> Best regards,
>
> depesz
>
> --
> Linkedin: http://www.linkedin.com/in/depesz / blog: http://www.depesz.com/
> jid/gtalk: dep...@depesz.com / aim:depeszhdl / skype:depesz_hdl / gg:6749007
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
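A worked example of the bounds discussed in this thread, assuming the checkpoint_segments = 100 implied by the numbers above:

  -- soft limit from the formula:
  --   (2 + checkpoint_completion_target) * checkpoint_segments + 1
  --   (2 + 0.1) * 100 + 1 = 211
  --   (2 + 0.9) * 100 + 1 = 291
  -- hard cap before PostgreSQL starts removing the extra segments:
  --   3 * checkpoint_segments + 1 = 301
  SHOW checkpoint_segments;
  SHOW checkpoint_completion_target;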
Re: [GENERAL] migrate from 8.1 to 9.0
2010/11/8 AI Rumman :
> I am going to migrate my produciton DB from postgresql 8.1 to 9.0.1.
> Anyone please tell me what the important things I have to look for this
> migration.
> Thanking you all.

You MUST read the Release Notes for every major version in between, to see what changed and what may impact your application.

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] migrate from 8.1 to 9.0
2010/11/8 Vick Khera : > On Mon, Nov 8, 2010 at 5:23 AM, Thom Brown wrote: >> Implicit casting might bite you since that was removed in 8.3. >> > > Also if you use bytea fields to store binary data, the encoding format > on return of the data is different. Make sure your client library > handles that for you (or explicitly code for it). > > These are the two major issues we had to address between our > 8.1->8.3->9.0 update over the last few years. It is also possible to hit issues with plpgsql code quality. > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
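A hedged illustration of the two gotchas quoted above (the substr call is the classic casualty of the 8.3 implicit-cast removal; bytea_output appeared in 9.0):

  -- worked on 8.1/8.2 via an implicit date -> text cast; fails on 8.3 and later
  SELECT substr(current_date, 1, 4);
  -- portable rewrite with an explicit cast
  SELECT substr(current_date::text, 1, 4);

  -- 9.0 returns bytea as hex by default; if a client library still expects
  -- the old escape format, this can be switched per session or in postgresql.conf
  SET bytea_output = 'escape';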
Re: [GENERAL] Why facebook used mysql ?
2010/11/9 Tom Lane :
> Vick Khera writes:
>> On Tue, Nov 9, 2010 at 10:26 AM, Scott Ribe
>> wrote:
>>> Also, my understanding is that if you go way back on the PostgreSQL
>>> timeline to versions 6 and earliest 7.x, it was a little shaky. (I started
>>> with 7.3 or 7.4, and it has been rock solid.)
>
>> In those same times, mysql was also, um, other than rock solid.
>
> I don't have enough operational experience with mysql to speak to how
> reliable it was back in the day. What it *did* have over postgres back
> then was speed. It was a whole lot faster, particularly on the sort of
> single-stream-of-simple-queries cases that people who don't know
> databases are likely to set up as benchmarks. (mysql still beats us on
> cases like that, though not by as much.) I think that drove quite a
> few early adoption decisions, and now folks are locked in; the cost of
> conversion outweighs the (perceived) benefits.

Facebook has written "Flashcache [is] built primarily as a block cache for InnoDB but is general purpose and can be used by other applications as well."
https://github.com/facebook/flashcache/

A good tool, by the way. It is the only place where I like to see SSD disks (not at Facebook, but for 'volatile' data).

>
> regards, tom lane
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Looking for auto starting procedures
2010/12/2 :
> Hello !
>
> And thanks to all, for answers.
>
> Naturally, cron does not operate on DB tables and if I add/remove/change
> a line in a control table dynamically, cron would not notice that ;-)
> So I had to write a daemon, which acts on that table. This might happen
> about ~5 - 20 seconds and cron is operates on minutes only.

Perhaps you are trying to reproduce a SQL Server behavior here, and you may find more interesting ways to do it "à la PostgreSQL". I wonder if you are not really looking for something like PgQ [1] or LISTEN/NOTIFY [2].

>
> An auto running stored procedure would solve the problem, if therewould be
> a way to run this procedure on server startup automatically
> [as it looks, write another daemon for this].

On server start?! No daemon is necessary.

>
> The usual notification from postgreSQL does not allow to write
> an [own, better to evaluate] identifier in the syslog. This is
> not a matter of the syslog daemon, its on the program which logs.
> May be, I have just not found this.

Maybe you need to adjust your 'roles' [3] so that log lines can be attributed to your role. You will probably get better answers if you define your objectives.

[1] http://wiki.postgresql.org/wiki/PGQ_Tutorial
[2] http://www.postgresql.org/docs/current/interactive/sql-notify.html
[3] http://www.postgresql.org/docs/current/interactive/sql-createrole.html

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
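A minimal LISTEN/NOTIFY sketch of the pattern suggested above (the channel name and the trigger wiring are illustrative, not taken from the original thread):

  -- the daemon's session subscribes once after connecting
  LISTEN control_table_changed;

  -- any session (or a trigger on the control table) signals a change;
  -- the daemon wakes up almost immediately instead of polling
  NOTIFY control_table_changed;
  -- on 9.0+ a payload can be attached: NOTIFY control_table_changed, 'row 42';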
Re: [GENERAL] Postgresql as a dictionary coder backend?
2011/1/23 Attila Nagy :
> Hello,
>
> I'm looking for a database backend for a dictionary coder project. It would
> have three major tasks:
> - take a text corpus, get their words and substitute each word by a 64 bit
> integer (the word:integer is always constant) and store the result
> (encoding)

OK. PostgreSQL allows you to do that easily.

> - take the previous result and substitute the integers with words (decoding)

Idem.

> - the words should be reference counted, so if a word can be no longer found
> in any of the encoded messages, delete it (and optionally free it's integer
> ID, but 64 bit is believed to be enough for a long time, although having
> smaller IDs result smaller encoded files). This could be achieved by
> informing the database of the words of a deleted message, so it could
> decrement those refcounts and delete the records if needed.

Yes, like what depesz does here:
http://www.depesz.com/index.php/2009/07/10/getting-list-of-unique-elements/

>
> I can easily do this with any RDBMS, with a table of three columns: auto
> incremented ID, word and refcount, with a unique index on word.
> The challenge could be:
> - that it should scale to several TBs of size and several (hundred) billion
> of records. One scenario would be to store about 40 TBs of words and the
> average word length would be about 50-60 bytes (that's about 800*10^9
> records). It should work well both for inserting and searching (encoding and
> decoding) words.

I strongly suggest you have a look at the intarray contrib module (it is provided with PostgreSQL).

> - I need atomicity and durability, but having these on a word (record) level
> takes too much IOPS and have no use, so it would be good to have an
> interface for inserting about 1000-50 words in one call, assign a unique
> ID to each unique words and store them (if the word has had already an ID,
> increment its refcount) and give back the IDs for each words. This
> transaction could be committed as one, so the transactions could be big,
> sparing IOPS.

Arrays allow very good compression of the data per row. (It is not really the RDBMS way to use arrays for that, but it is good for performance.)

> - I need concurrency, so when the above happens from two sources at the same
> time, the same word in the two transactions must get the same ID

One transaction will finish before the other to allow that (but they can start at the same time).

>
> Is postgresql a good choice for doing this and if yes, what would be the
> optimal (for both time and space efficiency at encoding and decoding) use
> case?

PostgreSQL should work for that, yes. You'll have to compensate for the size with good hardware and good SQL (and probably some optimizations like using arrays).

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
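A minimal schema sketch for the word table described above (names are illustrative; bigserial provides the 64-bit integer ID; the ON CONFLICT upsert needs 9.5+, older releases need a small plpgsql upsert function instead):

  CREATE TABLE words (
      id       bigserial PRIMARY KEY,  -- the 64-bit integer substituted for the word
      word     text NOT NULL UNIQUE,   -- unique index used by encoding lookups
      refcount bigint NOT NULL DEFAULT 1
  );

  -- encoding: insert an unseen word, or bump the refcount of a known one,
  -- and return its id; batch many of these per transaction to spare IOPS
  INSERT INTO words (word) VALUES ('example')
  ON CONFLICT (word) DO UPDATE SET refcount = words.refcount + 1
  RETURNING id;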
Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.
> Yes, am aware PLJava is a 3rd party lib, just surprised the same party
> hasn't built them given they seem to be built all the way to 9.1.
>
> My question was primarily about obtaining pgsx.mk file which is a part of
> the PostgreSQL project.

With Linux you do something like this for PL/Java:

$ make PG_CONFIG=/usr/pgsql-9.2/bin/pg_config \
       JAVA_HOME=/usr/java/default

pg_config is used to find pgxs.mk (the real command is «pg_config --pgxs»).

>
> Paul
>
>
>
> From: Andrew Dunstan
> To: Paul Hammond
> Cc: "pgsql-hack...@postgresql.org" ;
> "pgsql-general@postgresql.org" Sent:
> Friday, 17 May 2013, 0:03
> Subject: Re: [GENERAL] [HACKERS] PLJava for Postgres 9.2.
>
> On 05/16/2013 05:59 PM, Paul Hammond wrote:
> > Hi all,
> >
> > I've downloaded PLJava, the latest version, which doesn't seem to have
> > a binary distribution at all for 9.2, so I'm trying to build it from
> > the source for Postgres 9.2. I have the DB itself installed on Windows
> > 7 64 bit as a binary install. I've had to do a fair bit of hacking
> > with the makefiles on cygwin to get PLJava to build, but I have
> > succeeded in compiling the Java and JNI code, the pljava_all and
> > deploy_all targets effectively.
>
> Cygwin is not a recommended build platform for native Windows builds.
> See the docs for the recommended ways to build Postgres.
>
> > But I'm coming unstuck at the next target where it's doing the target
> > c_all. It's trying to find the following makefile in the Postgres dist:
> >
> > /lib/pgxs/src/makefiles/pgxs.mk: No such
> > file or directory
> >
> > What do I need to do to obtain the required files, and does anybody
> > know why, given Postgres 9.2 is out some time, and 9.3 is in beta, why
> > no prebuild binary PLJavas exist for 9.2?
>
> Because nobody has built them?
>
>
> FYI, PL/Java is not maintained by the PostgreSQL project.
>
>
> cheers
>
> andrew

--
Cédric Villemain +33 (0)6 20 30 22 52
http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation
Re: [GENERAL] pasting into psql garbles text
Le samedi 27 juin 2009, Merlin Moncure a écrit :
> I've noticed over a wide variety of operating systems that when you
> paste from an application into psql through a terminal (currently
> using the default gnome terminal in ubuntu), large pastes tend to get
> garbled with some of the input getting truncated. While working on
> functions, this is annoying in the extreme.

I had noticed the same problem with Konsole:
http://bugs.kde.org/show_bug.cgi?id=150957

>
> Interestingly, the one platform that tends not to suffer from this is
> windows so I'm guessing this is a readline problem. Has anybody else
> noticed this? Is there a workaround?
>
> merlin

--
Cédric Villemain
Administrateur de Base de Données
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org
Re: [GENERAL] Installing postgresql on Debian Lenny-->my /etc/apt/sources.list.
Le vendredi 11 septembre 2009, Ricky Tompu Breaky a écrit :
> Dear Peter Eisentraut.
>
> My aptitude still can not find it.

You didn't search with aptitude, you tried to install. My guess is that it is already installed. Try 'dpkg -l postgresql-8.3'; it will output something like:

[...]
ii postgresql-8.3 8.3.8-1 object-relational SQL database, version 8.3 server

if it is installed. You probably have a misconfigured server which doesn't start, *or* (more probably) the server is listening on a port other than 5432.

>
> Please keep telling me.
>
> Thank you very much in advance.
>
> Here is the output of my postgres installation attempt with aptitude and
> my current '/etc/apt/sources.list':
> "
> lenny:/etc/apt# dpkg-reconfigure gforge-db-postgresql
> lenny:/etc/apt# aptitude install postgresql-8.3 postgresql-client-8.3
> postgresql-client Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Reading extended state information
> Initializing package states... Done
> Reading task descriptions... Done
> The following partially installed packages will be configured:
> gforge-web-apache2
> 0 packages upgraded, 0 newly installed, 0 to remove and 199 not
> upgraded. Need to get 0B of archives. After unpacking 0B will be used.
> Writing extended state information... Done
> Setting up gforge-web-apache2 (4.7~rc2-7lenny1) ...
> Calculating defaults
> Reading defaults from /etc/gforge/gforge.conf
> Creating /etc/gforge/gforge.conf
> SSL Enabled
> Creating /etc/gforge/httpd.conf
> Creating /etc/gforge/httpd.secrets
> Creating /etc/gforge/local.inc
> Creating other includes
> DBI connect('dbname=gforge','gforge',...) failed: could not connect to
> server: No such file or directory Is the server running locally and
> accepting connections on Unix domain socket
> "/var/run/postgresql/.s.PGSQL.5432"? at /usr/lib/gforge/lib/include.pl
> line 37 Error while connecting to database:
> at /usr/lib/gforge/lib/include.pl line 39. dpkg: error processing
> gforge-web-apache2 (--configure): subprocess post-installation script
> returned error exit status 255 Errors were encountered while
> processing: gforge-web-apache2 E: Sub-process /usr/bin/dpkg returned an
> error code (1) A package failed to install. Trying to recover:
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> Reading extended state information
> Initializing package states... Done
> Reading task descriptions...
Done > > lenny:/etc/apt# > " > > my '/etc/apt/sources.list': > # > # deb cdrom:[Debian GNU/Linux 5.0.2 _Lenny_ - Official i386 NETINST > Binary-1 20090629-11:06]/ lenny main > > #deb cdrom:[Debian GNU/Linux 5.0.2 _Lenny_ - Official i386 NETINST > Binary-1 20090629-11:06]/ lenny main > > deb http://ftp.de.debian.org/debian/ lenny main > deb-src http://ftp.de.debian.org/debian/ lenny main > > deb http://security.debian.org/ lenny/updates main > deb-src http://security.debian.org/ lenny/updates main > > deb http://volatile.debian.org/debian-volatile lenny/volatile main > deb-src http://volatile.debian.org/debian-volatile lenny/volatile main > > deb http://ftp2.de.debian.org/debian/ lenny main non-free > deb-src http://ftp2.de.debian.org/debian/ lenny main non-free > > deb http://security.debian.org/ lenny/updates main non-free > deb-src http://security.debian.org/ lenny/updates main non-free > > deb http://www.lamaresh.net/apt lenny main > > deb http://dl.google.com/linux/deb/ stable non-free > > deb http://download.skype.com/linux/repos/debian/ stable non-free > > deb ftp://ftp.it.debian.org/debian/ lenny main contrib non-free > deb-src ftp://ftp.it.debian.org/debian/ lenny main contrib non-free > deb ftp://ftp.it.debian.org/debian/ lenny-proposed-updates main contrib > non-free deb-src ftp://ftp.it.debian.org/debian/ lenny-proposed-updates > main contrib non-free > > deb http://deb.opera.com/opera/ stable non-free > > deb http://pkg-voip.buildserver.net/debian etch main > > deb http://mirror.noreply.org/pub/tor lenny main > deb-src http://mirror.noreply.org/pub/tor lenny main > > deb http://javadesktop.org/lg3d/debian stable contrib > deb http://ftp.bononia.it/debian lenny main contrib non-free > deb http://ftp.bononia.it/debian lenny-proposed-updates main contrib > non-free > > deb http://ftp.debian.org/debian/ lenny main contrib non-free > > === > On Fri, 11 Sep 2009 09:56:33 +0300 > > Peter Eisentraut wrote: > > On Fri, 2009-09-11 at 13:17 +0700, Ricky Tompu Breaky wrote: > > > It seems that the aptitude can not find the postgresql. > > > > Please post the exact output from the screen. > -- Cédric Villemain Administrateur de Base de Données Cel: +33 (0)6 74 15 56 53 http://dalibo.com - http://dalibo.org signature.asc Description: This is a digitally signed message part.
Re: [GENERAL] Looking for way to replicate master db to multiple mobile databases
Le jeudi 24 septembre 2009, Selena Deckelmann a écrit : > Hi! > > On Wed, Sep 23, 2009 at 10:11 AM, Bryan Montgomery wrote: > > Hi, > > I'm looking for a way to replicate am master database to multiple (100+) > > databases that are taken in to the field. Currently for each laptop we > > dump and load the tables. However,there is only a small percentage of > > data that changes on a frequent basis. > > > > I've been looking around and come across pyerplica, londiste and bucardo > > - the documentation on most of these is fairly sparse. It seems that > > Bucardo may be the best bet - at least initially. > > Bucardo is a good choice for this usage model because it was > originally designed to work over a lossy network connections. yes, but isn't bucardo designed to 2 nodes only ? > > You could issue 'kicks' for each laptop sync when you know for sure > that a laptop has got an active network connection to your master. > It's also pretty efficient with updates, only copying the current row > (that's changed) a single time, rather than multiple times if there > have been multiple changes to that row since the last time a sync > occurred. > > -selena > -- Cédric Villemain Administrateur de Base de Données Cel: +33 (0)6 74 15 56 53 http://dalibo.com - http://dalibo.org signature.asc Description: This is a digitally signed message part.
Re: [GENERAL] Looking for way to replicate master db to multiple mobile databases
Le vendredi 25 septembre 2009, Selena Deckelmann a écrit : > On Thu, Sep 24, 2009 at 8:09 AM, Cédric Villemain > > wrote: > > Le jeudi 24 septembre 2009, Selena Deckelmann a écrit : > >> Hi! > >> > >> On Wed, Sep 23, 2009 at 10:11 AM, Bryan Montgomery > > > > wrote: > >> > Hi, > >> > I'm looking for a way to replicate am master database to multiple > >> > (100+) databases that are taken in to the field. Currently for each > >> > laptop we dump and load the tables. However,there is only a small > >> > percentage of data that changes on a frequent basis. > >> > > >> > I've been looking around and come across pyerplica, londiste and > >> > bucardo - the documentation on most of these is fairly sparse. It > >> > seems that Bucardo may be the best bet - at least initially. > >> > >> Bucardo is a good choice for this usage model because it was > >> originally designed to work over a lossy network connections. > > > > yes, but isn't bucardo designed to 2 nodes only ? > > No, definitely not! You can replicate to any number of systems. And > you can group them in whatever groups you'd like. Multi-master (as > Joshua said) only works between two nodes, but master->slave can be > from a master, to any number of slaves. > Ah! thank you for clarifying that. Cédric Villemain Administrateur de Base de Données Cel: +33 (0)6 74 15 56 53 http://dalibo.com - http://dalibo.org signature.asc Description: This is a digitally signed message part.
Re: [GENERAL] full text + snowball + ispell?
Le vendredi 09 octobre 2009 19:31:56, Dominic Bevacqua a écrit :
> Hi
>
> This is probably a naive question, but what would I gain from using
> snowball and ispell dictionaries in tsearch2, as described here:
>
> http://www.sai.msu.su/~megera/postgres/gist/tsearch/V2/docs/tsearch-V2-intr
> o.html
>
> ?

This description is only relevant for PostgreSQL versions where tsearch is not in core; the core already has the snowball stemmer. You can still rebuild the snowball dictionary with some custom rules if needed, or... well, many things are possible, but PostgreSQL 8.3 and 8.4 come with a fully functional text search by default.

>
> Presumably there is a performance hit associated with doing this (more
> from ispell I guess), but what would the benefits be?
>
> Hope someone can enlighten me!
>
> Thanks,
>
> Dominic.
>
> Dominic Bevacqua
> Director
> BPM Logic Ltd.
> http://www.bpmlogic.com
>

--
Cédric Villemain
Administrateur de Base de Données
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org
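If custom dictionaries are still wanted on 8.3/8.4, a sketch of putting an ispell dictionary in front of the built-in snowball stemmer (the DictFile/AffFile names are assumptions; the matching .dict and .affix files must already be installed in the tsearch_data directory):

  CREATE TEXT SEARCH DICTIONARY english_ispell (
      TEMPLATE  = ispell,
      DictFile  = english,
      AffFile   = english,
      StopWords = english
  );

  CREATE TEXT SEARCH CONFIGURATION public.english_custom (COPY = pg_catalog.english);

  ALTER TEXT SEARCH CONFIGURATION english_custom
      ALTER MAPPING FOR asciiword, word
      WITH english_ispell, english_stem;  -- fall back to snowball when ispell has no entry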
Re: [GENERAL] [ANNOUNCE] [pgeu-general] PGDay.EU 2009 - approaching fast!
Le lundi 26 octobre 2009 15:12:10, Raymond O'Donnell a écrit :
> On 20/10/2009 12:52, Thom Brown wrote:
> > And there's special hotel rates at the Kyriad Hotel, in Porte d'Ivry
> > for those attending the conference. See http://2009.pgday.eu/hotels
> > for details.
>
> I gather from another Irish attendee that this one is booked out now.

Are you sure? You *have to* ask for the ParisTech rooms (there were still 10 rooms available on Friday, reserved for ParisTech only). Otherwise, yes, the hotel is full.

>
> Ray.
>

--
Cédric Villemain
Administrateur de Base de Données
Cel: +33 (0)6 74 15 56 53
http://dalibo.com - http://dalibo.org
Re: [GENERAL] Using column aliasses in the same query
2011/4/17 Robert J.C. Ivens : > > On 17 apr 2011, at 13:21, Leif Biberg Kristensen wrote: > >> On Sunday 17 April 2011 13:01:45 Robert J.C. Ivens wrote: >>> Hi, >>> >>> I am not sure if there ever was a feature request for using defined column >>> aliases in the rest of a query. This would make queries with a lot of >>> logic in those aliased columns a lot smaller and this easier to >>> write/debug. >>> >>> I already know you can use the following syntax: >>> >>> SELECT col1, col2, col3, (col2-col3) as col4 FROM (SELECT col1, (long and >>> lots of logic here) as col2, col3 FROM table) s WHERE col2 < aValue >>> >>> But when you need to use (calculated) values from the actual record and or >>> have sub-selects in your main select that also need to use these values >>> things get really hairy. I don't know if the SQL specification allows it >>> but I know that RDBMS's like Sybase already support this. >>> >>> Any thoughts? >> >> It's easy to define a view or an SQL function and stash the hairy logic >> there. >> >> regards, Leif > > True, but that is essentially the same thing as the example query I gave. > There are plenty of cases where this approach is not workable. select bar.*, b-c from (select i,i,i from foo ) as bar(a,b,c) where c!=1; you can also look at: http://www.postgresql.org/docs/9.0/static/queries-with.html -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
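A sketch of the same idea with a common table expression (8.4+, per the queries-with link above); it assumes a table tbl1 with columns col1 and col3, and uses a simple expression to stand in for the long, hairy logic:

  WITH s AS (
      SELECT col1,
             (col1 * 10 + 3) AS col2,   -- stands in for the complex expression
             col3
      FROM   tbl1
  )
  SELECT col1, col2, col3, col2 - col3 AS col4
  FROM   s
  WHERE  col2 < 100;   -- the alias is usable here, unlike in the defining SELECT list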
Re: [GENERAL] Extract (Recover) data from a cluster built on a different architecture (ARM).
2011/4/24 Phil Couling :
> Hi
>
> I'm looking for a way to extract the data from a PostgreSQL 8.3.14
> database (cluster) that was built using an an ARM/Linux server. The
> problem is that the hardware itself is a brick and the replacement
> hardware will be X86/AMD64.
>
> Sadly my backups are all copies of the DB files and don't include a
> recent text based dump (fail!). All attempts so far to start up a
> server using the original files have failed with a couple of different
> errors (Failed to parse...). I'm rapidly coming to the conclusion
> that this is to do with endianness. X86 is little endian whereas ARM
> is primarily big endian.

I would test with qemu to get a virtualized ARM machine, install PostgreSQL there, and dump the data.

>
> Are there any tools for recovering data from a database built with a
> different architecture or is my data toast unless I can lay my hands
> on an ARM box?

I don't think such a tool exists.

>
> Thanks so much for your time.
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
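Very roughly, the qemu route looks like this (everything here is a placeholder sketch: the hard part, getting an ARM guest with the old box's byte order and an 8.3.x PostgreSQL running under qemu, is not shown):

  # on the emulated ARM guest, start 8.3.x against a copy of the old data directory
  pg_ctl -D /path/to/copied/data start

  # then, from the new x86 box, take a logical dump over the network and restore it
  pg_dump -h <arm-guest-ip> -U postgres olddb > olddb.sql
  psql -h localhost -U postgres -d newdb -f olddb.sql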
Re: [GENERAL] Partitioning an existing table
2011/4/25 Phoenix Kiula :
> Hi.
>
> The partitioning documentation in PG is very clear on how to partition
> a new table. Create child tables, and have triggers that manage
> INSERT, UPDATE and DELETE commands.
>
> How about doing this with existing massive tables? (Over 120 million rows)
>
> I could create a new parent table with child tables, and then INSERT
> all these millions of rows to put them into the right partition. But
> is that recommended?

It will be faster to INSERT directly into the correct child tables, and you can run multiple INSERTs in parallel (depending on your I/O system and the number of tables).

>
> Thanks
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
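A sketch of loading children directly (table, column and date ranges are illustrative); each INSERT can run in its own session to parallelise the load:

  -- children carry the partition constraint so constraint_exclusion can prune them
  CREATE TABLE big_2010 (CHECK (created >= '2010-01-01' AND created < '2011-01-01'))
      INHERITS (big_parent);
  CREATE TABLE big_2011 (CHECK (created >= '2011-01-01' AND created < '2012-01-01'))
      INHERITS (big_parent);

  -- bypass the parent's INSERT trigger and fill each child directly
  INSERT INTO big_2010
      SELECT * FROM big_old WHERE created >= '2010-01-01' AND created < '2011-01-01';
  INSERT INTO big_2011
      SELECT * FROM big_old WHERE created >= '2011-01-01' AND created < '2012-01-01';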
Re: [GENERAL] [HACKERS] PostgreSQL Core Team
2011/4/27 Dave Page : > I'm pleased to announce that effective immediately, Magnus Hagander > will be joining the PostgreSQL Core Team. > > Magnus has been a contributor to PostgreSQL for over 12 years, and > played a major part in the development and ongoing maintenance of the > native Windows port, quickly becoming a committer to help with his > efforts. He's one of the project's webmasters and sysadmins and also > contributes to related projects such as pgAdmin. In his spare time, he > serves as President of the Board of PostgreSQL Europe. Congratulations Magnus ! > > Regards, Dave. > > -- > Dave Page > PostgreSQL Core Team > http://www.postgresql.org/ > > -- > Sent via pgsql-hackers mailing list (pgsql-hack...@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-hackers > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] [ADMIN] Can we Flush the Postgres Shared Memory ?
2011/5/3 Merlin Moncure : > On Tue, May 3, 2011 at 8:30 AM, raghu ram wrote: >> On Tue, May 3, 2011 at 6:01 PM, Raghavendra >> wrote: >>> >>> On Tue, May 3, 2011 at 5:37 PM, Simon Riggs wrote: >>>> >>>> On Tue, May 3, 2011 at 11:54 AM, raghu ram >>>> wrote: >>>> >>>> > It may be a silly question, still out of curiosity I want to know, is >>>> > there >>>> > any possible way to flush the Postgres Shared Memory without restarting >>>> > the >>>> > cluster. >>>> > In Oracle, we can flush the SGA, can we get the same feature here.. >>>> > Thanks in Advance. >>>> >>>> >>>> The CHECKPOINT command will do this for you. >>> >>> >> >> According to PostgreSQL documentation, whenever you execute "CHECKPOINT" in >> the database,it will flush the modified data files presented in the Shared >> Buffers retuned to the Disk. >> http://www.postgresql.org/docs/9.0/static/sql-checkpoint.html >> Is this clears the entire shared memory cache and same time,if i execute >> fresh SQL statement, Data will be retuned from disk?? > > no it will not, or at least there is no guarantee it will be. the > only way to reset the buffers in that sense is to restart the database > (and even then they might not be read from disk, because they could > sit in the o/s cache). to force a read from the drive you'd have to > reboot the server, or at least shut it down and use a lot of memory > for some other purpose. with linux, you can : "echo 3 > /proc/sys/vm/drop_caches" for the OS cache > > merlin > > -- > Sent via pgsql-admin mailing list (pgsql-ad...@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-admin > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] [ADMIN] Can we Flush the Postgres Shared Memory ?
2011/5/3 Merlin Moncure :
> On Tue, May 3, 2011 at 8:55 AM, Cédric Villemain
> wrote:
>> 2011/5/3 Merlin Moncure :
>>>
>>> no it will not, or at least there is no guarantee it will be. the
>>> only way to reset the buffers in that sense is to restart the database
>>> (and even then they might not be read from disk, because they could
>>> sit in the o/s cache). to force a read from the drive you'd have to
>>> reboot the server, or at least shut it down and use a lot of memory
>>> for some other purpose.
>>
>> with linux, you can : "echo 3 > /proc/sys/vm/drop_caches" for the OS cache
>>
>
> yeah -- good point. aside: does that also drop cache on the drive/raid card?

No -- good point too! (Damn, what will SAN users do... maybe EMC or others are good enough to provide some control panel for that?) And as I read on the link provided by Tomas, it is better to issue a 'sync' before trying to drop the cache. (I do that sometimes, but PostgreSQL flushes its writes before shutdown, so I expected the dirty pages in the OS cache not to belong to PostgreSQL files.)

>
> merlin
>
> --
> Sent via pgsql-admin mailing list (pgsql-ad...@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-admin
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
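Putting the two remarks together, a Linux-only sketch for getting genuinely cold caches between benchmark runs (run as root; this drops the whole OS page cache, not only PostgreSQL pages, and does nothing for a RAID/SAN controller cache):

  pg_ctl -D "$PGDATA" stop              # empty shared_buffers as well
  sync                                  # flush dirty pages first, as noted above
  echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
  pg_ctl -D "$PGDATA" start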
Re: [GENERAL] undead index
2011/5/4 Jens Wilke :
>
> This index was deleted several weeks ago.
> [...]
>
> after pg_dumpall|psql from 8.4 to 9.0 the undead index revived on the target
> DB:

I understood that you dropped an index, and when you dump/restore you get that index back again. Did I miss something?

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] undead index
2011/5/6 Jens Wilke :
> On Thursday 05 May 2011 16:46:05 Cédric Villemain wrote:
>
>> I understood that you dropped an index, and when you dump/restore you
>> get that index back again.
>
> Yes, that's it, after the pg_upgrade error, i removed the target data
> directory, and initialzed a new target DB.
> After pg_dumpall|pg_dump i got an index that is not visible in the source DB.
> It had either been deleted weeks ago or disappeared from the system catalog
> for any other reason.
> I can't find this index in a new pg_dumpall output from the source DB.

Okay, I hadn't understood correctly. Please check the question from Tom.

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] track functions call
2011/5/10 Mark : > Is there in PostgreSQL posibility to track which function in C file is called > by postgres functions. For example I would like to see which functions are > calling in ts_rank. select proname,prosrc from pg_proc where proname = 'ts_rank'; proname |prosrc -+-- ts_rank | ts_rank_wttf ts_rank | ts_rank_wtt ts_rank | ts_rank_ttf ts_rank | ts_rank_tt The same table contain list of arg type and return type, etc... > Thanks for reply > Mark > > -- > View this message in context: > http://postgresql.1045698.n5.nabble.com/track-functions-call-tp4384220p4384220.html > Sent from the PostgreSQL - general mailing list archive at Nabble.com. > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] track functions call
2011/5/10 Mark : > Thanks for quick reply, > but I want to know, which of these method is called in concrete situation. I > suppose, that ts_rank call only one of these functions(ts_rank_wttf , > ts_rank_wtt , ts_rank_ttf ,ts_rank_tt ). Is it possible? Yes, same table: select proname,prosrc,prorettype,proargtypes from pg_proc where proname = 'ts_rank'; proname |prosrc| prorettype |proargtypes -+--++--- ts_rank | ts_rank_wttf |700 | 1021 3614 3615 23 ts_rank | ts_rank_wtt |700 | 1021 3614 3615 ts_rank | ts_rank_ttf |700 | 3614 3615 23 ts_rank | ts_rank_tt |700 | 3614 3615 select oid,typname from pg_type where oid in ('1021','3614','3615','23','700'); oid | typname --+-- 23 | int4 700 | float4 1021 | _float4 3614 | tsvector 3615 | tsquery so you can find what go with what for the ts_rank function : ts_rank([ weights float4[], ] vector tsvector, query tsquery [, normalization integer ]) returns float4 > Thanks for reply > Mark > > -- > View this message in context: > http://postgresql.1045698.n5.nabble.com/track-functions-call-tp4384220p4385392.html > Sent from the PostgreSQL - general mailing list archive at Nabble.com. > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] invalid byte sequence for encoding "UTF8": 0xf1612220
2011/5/12 Craig Ringer : > On 05/11/2011 03:16 PM, AI Rumman wrote: >> >> I am trying to migrate a database from Postgresql 8.2 to Postgresql 8.3 >> and getting the following error: >> >> pg_restore: [archiver (db)] Error from TOC entry 2764; 0 29708702 TABLE >> DATA originaldata postgres >> pg_restore: [archiver (db)] COPY failed: ERROR: invalid byte sequence >> for encoding "UTF8": 0xf1612220 >> HINT: This error can also happen if the byte sequence does not match >> the encoding expected by the server, which is controlled by >> "client_encoding". >> CONTEXT: COPY wi_originaldata, line 3592 >> >> I took a dump from 8.2 server and then tried to restore at 8.3. >> >> Both the client_encoding and server_encoding are UTF8 at both the servers. > > Newer versions of Pg got better at caching bad unicode. While this helps > prevent bad data getting into the database, it's a right pain if you're > moving data over from an older version with less strict checks. > > I don't know of any way to relax the checks for the purpose of importing > dumps. You'll need to fix your dump files before loading them (by finding > the faulty text and fixing it) or fix it in the origin database before > migrating the data. Neither approach is nice or easy, but nobody has yet > stepped up to write a unicode verifier tool that checks old databases' text > fields against stricter rules... The 2 following articles have SQL functions and documentation you may find useful: http://tapoueh.org/articles/blog/_Getting_out_of_SQL_ASCII,_part_1.html http://tapoueh.org/articles/blog/_Getting_out_of_SQL_ASCII,_part_2.html > > -- > Craig Ringer > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
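One hedged way to locate (or strip) the offending bytes in a plain-text dump before restoring it; iconv -c silently drops invalid sequences, so diff the result to review what would be lost (file names are placeholders):

  # check whether the dump is valid UTF-8 at all
  iconv -f UTF-8 -t UTF-8 dump.sql > /dev/null

  # produce a cleaned copy and inspect the differences before loading it
  iconv -f UTF-8 -t UTF-8 -c dump.sql > dump.clean.sql
  diff dump.sql dump.clean.sql | head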
Re: [GENERAL] Role for CSV import
2011/5/15 Rob Sargent : > > > Tarlika Elisabeth Schmitz wrote: >> >> I have a database that will be populated solely by CSV import. >> There are several CSV file formats, all denormalized. >> >> I have created interim tables which match the CSV file formats. An >> insert trigger distributes the data to their appropriate destination >> tables. The destination tables themselves have insert/update triggers >> for automated data clean-up. Any unresolvable inconsistencies are >> reported in a log table. >> >> I don't want the triggers to fire for every insert/update. There might >> be situations where I have to perform some data clean-up manually. >> >> So, my idea is to create a role for import, query current_user in the >> trigger, perform the trigger actions for importuser and just return the >> row unadulterated for adminuser. >> >> I would give privileges to the importuser for the tables being >> explicitly and implicitly populated. >> >> Is that the best way to organize this? >> >> >> = >> setup: PostgreSQL 8.4 >> dbname = schema name = admin name >> >> >> > > You seem to be writing denormalized import records for the sole purpose of > writing other normalized records. Have you you looked into writing a > programme in a relatively high-level jdbc-friendly language which reads the > csv file, normalizes the data (the code already in your triggers) and > flushes on every say 1000 independent records? The "clean-up" and logging > might also be done by the import app (all depending on what's being cleaned > up and logged :) ) pgloader may be useful: http://pgloader.projects.postgresql.org > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Memcached for Database server
2011/5/17 Craig Ringer : > On 05/17/2011 01:38 PM, Satoshi Nagayasu wrote: >> >> Hi, >> >> 2011/05/17 14:31, Adarsh Sharma wrote: >>> >>> Rick Genter wrote: >>>> >>>> On May 16, 2011, at 10:09 PM, Adarsh Sharma wrote: >>>> >>>> >>>>> Dear all, >>>>> >>>>> I need to research on Memcache in the next few days. >>>>> >>>>> What I want to know is it worth to have memcahed enable in our >>>>> Mysql/ Postgres Production Servers. >>>>> We have databases from 20 to 230 GB and it's not the OLTP just a >>>>> simple OLAP where data is fetched and stored in some meaningful format. >>>>> >>>>> >>>>> What are benefits& why we used memcahed? >>>>> >>>>> What are the bottlenecks to meet? >>>>> >>>> >>>> You need to read about memcached. Memcached is not something you >>>> "enable". You have to program to it. >>>> >>> >>> Thanks Rick, just one question.. >>> >>> At what stage we need memcached & what is the purpose of using it. >>> >>> I just want to know whether it is worth to use memcahced or not as per >>> our requirements. >> >> I just built a software to enable query caching for PostgreSQL >> with using memcached, which adds a proxy layer. >> >> http://pgsnaga.blogspot.com/2011/03/postgresql-query-cache-pqc.html > > Much like with memcached, it looks like you still have to handle your own > cache invalidation with your cache daemon, and it can return outdated or > inconsistent results. Your examples clearly show that. It'd be nice if the > google code front page clearly pointed out that it's not a fully transparent > cache in that it can return stale or inconsistent data and the app has to be > aware of that. > > How do you handle statements that rely on current_timestamp, random(), etc? > What about if their reliance is via a function? Is that just an understood > limitation of the cache, that it'll cache even queries that don't really > make sense to cache? there is also pgmemcache http://pgfoundry.org/projects/pgmemcache/ It is not a proxy but an extension to access memcache from within postgresql. You can use it to build your own querycache. > > -- > Craig Ringer > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] max_connections proposal
2011/5/27 Tom Lane : > Craig Ringer writes: >> On 05/26/2011 09:48 PM, Tom Lane wrote: >>> Craig Ringer writes: >>>> max_connections = 100 # (change requires restart) >>>> # WARNING: If you're about to increase max_connections above 100, you >>>> # should probably be using a connection pool instead. See: >>>> # http://wiki.postgresql.org/max_connections > >>> This gives the impression that performance is great at 100 and falls off >>> a cliff at 101, which is both incorrect and likely to lower peoples' >>> opinion of the software. > >> Fair call; the use of a specific value is misleading. > >>> I'd suggest wording more like "if you're >>> considering raising max_connections into the thousands, you should >>> probably use a connection pool instead". > >> Best performance is often obtained with the number of _active_ >> connections in the 10s to 30s on commonplace hardware. I'd want to use >> "hundreds" - because mailing list posts etc suggest that people start >> running into problems under load at the 400-500 mark, and more >> importantly because it's well worth moving to pooling _way_ before that >> point. > > OK, maybe word it as "If you're considering raising max_connections much > above 100, ..." ? "Be aware that a too large value can be counter-productive and a connection pooler can be more appropriate." No scale... I am really happy to face more and more servers where 'top' truncate the list of processors... We will have to scale and not make that limitation a feature, imho. -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Shared Buffer Size
2011/5/30 Toby Corkindale :
> On 28/05/11 18:42, Carl von Clausewitz wrote:
>>
>> a few months ago, when I installed my first PostgreSQL, I have had the
>> same problem. I've try to get any information about optimal memory
>> config, and working, but there wasn't any "optimal memory setting
>> calculator" on the internet, just some guide in the posgre documentation
>>
>> (http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC).
>> I got FreeBSD 8.2 AMD64, with 8 GB of memory (this server is just for
>> PostgreSQL and a little PHP app with 2 user), and I have theese setting
>> in postgresql.conf (which are not the default):
>>
> [snip]
>>
>> work_mem = 64MB# min 64kB
>> maintenance_work_mem = 1024MB# min 1MB
>> max_stack_depth = 64MB# min 100kB
>
> Just a warning - but be careful about setting work_mem to high values.
> The actual memory used by a query can be many times the value, depending on
> the complexity of your query.
>
> In a particular query I saw last week, we were regularly exceeding the
> available memory on a server, because the query was requiring 80 times the
> value of work_mem, and work_mem had been set to a high value.
>
> Reducing work_mem back to just 4MB reduced memory usage by a couple of
> gigabytes, and had almost no effect on the execution time. (Actually, it was
> marginally faster - probably because more memory was left for the operating
> system's cache)

Maybe you're also aware that Linux may decide to swap in order to protect its buffer cache (depending on the strategy set in its configuration), and also that you may be limited by committable memory: on a default install where swap is NOT at least twice the RAM size, you're not able to commit all the RAM you have, but in exchange the buffer cache is protected up to the amount of non-allocatable memory. So maybe you hit a point where your work_mem was swapped out... Anyway, it is interesting to have a query where a large work_mem is not better. Would it be hard to isolate the case and make it public? In the long term it might be a good test to add to a performance farm, provided it is not caused by a non-optimal Linux configuration (I mean, provided the issue really *needs* work_mem to be reduced to be fixed).

>
> Toby
>
> --
> Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-general
>

--
Cédric Villemain 2ndQuadrant
http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
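Since work_mem can be changed per session or per transaction, one way to cap the damage of such a query without touching the global setting (values are illustrative):

  BEGIN;
  SET LOCAL work_mem = '4MB';   -- applies only until COMMIT/ROLLBACK
  -- run the pivot/join-heavy query here
  COMMIT;

  -- or for the rest of the session:
  SET work_mem = '4MB';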
Re: [GENERAL] Shared Buffer Size
2011/5/31 Toby Corkindale : > On 30/05/11 20:41, Cédric Villemain wrote: >> >> 2011/5/30 Toby Corkindale: >>> >>> On 28/05/11 18:42, Carl von Clausewitz wrote: >>>> >>>> a few months ago, when I installed my first PostgreSQL, I have had the >>>> same problem. I've try to get any information about optimal memory >>>> config, and working, but there wasn't any "optimal memory setting >>>> calculator" on the internet, just some guide in the posgre documentation >>>> >>>> >>>> (http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC). >>>> I got FreeBSD 8.2 AMD64, with 8 GB of memory (this server is just for >>>> PostgreSQL and a little PHP app with 2 user), and I have theese setting >>>> in postgresql.conf (which are not the default): >>>> >>> [snip] >>>> >>>> work_mem = 64MB# min 64kB >>>> maintenance_work_mem = 1024MB# min 1MB >>>> max_stack_depth = 64MB# min 100kB >>> >>> Just a warning - but be careful about setting work_mem to high values. >>> The actual memory used by a query can be many times the value, depending >>> on >>> the complexity of your query. >>> >>> In a particular query I saw last week, we were regularly exceeding the >>> available memory on a server, because the query was requiring 80 times >>> the >>> value of work_mem, and work_mem had been set to a high value. >>> >>> Reducing work_mem back to just 4MB reduced memory usage by a couple of >>> gigabytes, and had almost no effect on the execution time. (Actually, it >>> was >>> marginally faster - probably because more memory was left for the >>> operating >>> system's cache) >> >> Maybe, you're also aware that linux may decide to swap to protect its >> buffer cache (depend of the strategy it got in its configuration) and >> also that you may be limited by commitable memory. On a default >> install where the swap is NOT at least twice the RAM size, you're not >> able to commit all RAM you have. But, it protects the buffer cache for >> the not allocatable memory. >> >> So maybe you've hitten a step where you did swap your work_mem... >> anyway interesting to have a query where a large work_mem is not >> better... Will it be hard to isolate the case and make it public ? In >> the long term it might be a good test to add to a performance farm if >> it is not based on a non-optimum linux configuration (I mean if the >> issue *need* the work_mem to be reduced to be fixed). > > > In this case, it was not just slowing down due to the amount of work_mem > allocated -- it was exceeding several gigabytes of memory usage and crashing > out. Lower values of work_mem allowed the query to succeed, but it used > almost 3G.. Even lower values of work_mem could do the query in only a few > hundred MB - and was faster. > > I note that if you exceed work_mem in a query,then I guess the temp files > created are cached by the VM cache, so it's not like the performance hit > will be *too* bad? correct. > > > I agree that the slowness of the 3GB version could be due to swapping or > something like that.. or just due to the VM cache being eliminated as I > suggested. > > Either way - the problem was that this (machine-generated) query was > pivoting and joining many views-of-views. It's a pretty nasty query. > > The key fact is that postgres (8.3) seems to allocate the full work_mem > amount every time it needs *some* work_mem - even if it could have happily > got by on just a few MB. So if your query allocates work_mem a hundred > times, it'll consume $work_mem * 100 -- or die trying. 
work_mem is not allocated fully from the beginning. It is allocated initially at a small size (a few KB or MB), then doubled each time we get near the end of the allocated memory. (I am unsure of details like when exactly we allocate more memory, but the logic is this one.) > > I'm curious to know if Postgres 9.0 has improved this -- I'm going to try > re-running this query on it once I get a chance, but due to contractual > agreements this isn't quite as simple to test as you might think. > (And running the test over a much smaller example data set might not trigger > the same query plan) > I'll get there eventually though :) > 9.0 may help you with generated queries, so it is worth testing, I think. -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
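To check whether a given query actually spills to disk at the current work_mem, EXPLAIN ANALYZE shows the sort or hash method used; a minimal sketch (big_table and some_column are made-up names):
SET work_mem = '4MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
-- look for "Sort Method: external merge  Disk: ..." (spilled) vs "Sort Method: quicksort  Memory: ..."
SET work_mem = '64MB';
EXPLAIN ANALYZE SELECT * FROM big_table ORDER BY some_column;
-- comparing the two plans shows whether the extra memory is actually used, before raising work_mem globally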
Re: [GENERAL] Shared Buffer Size
2011/5/28 Carl von Clausewitz : > Hi Preetika, > a few months ago, when I installed my first PostgreSQL, I have had the same > problem. I've try to get any information about optimal memory config, and > working, but there wasn't any "optimal memory setting calculator" on the > internet, just some guide in the posgre documentation > (http://www.postgresql.org/docs/9.0/interactive/kernel-resources.html#SYSVIPC). > I got FreeBSD 8.2 AMD64, with 8 GB of memory (this server is just for > PostgreSQL and a little PHP app with 2 user), and I have theese setting in > postgresql.conf (which are not the default): > listen_addresses = '192.168.1.1' # what IP address(es) to listen on; > port = 5432 # (change requires restart) > max_connections = 200 # (There are 20 user, with Microsoft Access client and > ODBC connections... (min 6 connection / user)) > shared_buffers = 1900MB # min 128kB > temp_buffers = 64MB # min 800kB > work_mem = 64MB # min 64kB > maintenance_work_mem = 1024MB # min 1MB > max_stack_depth = 64MB # min 100kB this max_stack_depth is unsane it should be the result of (ulimit -u) - 1MB maximum. Except if you have a kernel build with a stack_depth of 64MB, which would surprised me. (common kernel have 8MB or 16MB of stack_depth) > shared_preload_libraries = '$libdir/plpgsql.so' # (change requires restart) > checkpoint_segments = 32 # in logfile segments, min 1, 16MB each > checkpoint_timeout = 15min # range 30s-1h > checkpoint_completion_target = 0.5 # checkpoint target duration, 0.0 - 1.0 > effective_cache_size = 4096MB > track_activities = on > track_counts = on > #track_functions = none # none, pl, all > #track_activity_query_size = 1024 # (change requires restart) > update_process_title = off > deadlock_timeout = 1s > max_locks_per_transaction = 256 # min 10 > And the sysctl.conf from BSD, which are relevant for theese postgre > settings: > kern.ipc.shmall=524288 > kern.ipc.shmmax=2147483648 > kern.ipc.semmap=512 > kern.ipc.shm_use_phys=1 > And the last one is the loader.conf from BSD, which are relevant for theese > postgre settings: > kern.ipc.semmni=512 > kern.ipc.semmns=1024 > kern.ipc.semmnu=512 > Theese settings based on my experience, with lot of reboot and restart and > reload config - I hope this can help you, and I accept any comment, if I > need to set everything else :-) > Thanks, > Carl > 2011/5/27 preetika tyagi >> >> Hi Derrick, >> Thank you for your response. >> I saw this document and trying to understand "Interaction with the >> Operating System Cache" which is mentioned in this document. >> I have the following question- >> Hows does the shared buffer in Postgres rely on the Operating System >> cache? >> Suppose my RAM is 8 GB and shared_buffer is 24 MB in postgres. And there >> are some dirty pages in shared_buffer and I need to write a dirty page back >> to the disk to bring in a new page. What happens in this case? The dirty >> page will be written to the disk considering the shared_buffer size as 24 >> MB? or it will not be written and will stay in RAM which is 8 GB? >> Thanks, >> Preetika >> >> On Fri, May 27, 2011 at 2:11 PM, Derrick Rice >> wrote: >>> >>> Check out the "Inside the PostgreSQL Buffer Cache" link here: >>> >>> http://projects.2ndquadrant.com/talks >>> >>> Thanks to Greg Smith (active here). >>> >>> Derrick >>> >>> On Fri, May 27, 2011 at 3:36 PM, preetika tyagi >>> wrote: >>>> >>>> Hi All, >>>> >>>> I am little confused about the internal working of PostgreSQL. 
There is >>>> a parameter shared_buffer in postgres.conf and I am assuming that it is >>>> used >>>> for buffer management in PostgreSQL. If there is a need to bring in a new >>>> page in the buffer and size exceeds the shared_buffer limit, a victim dirty >>>> page will be written back to the disk. >>>> >>>> However, I have read on many links that PostgreSQL depends on the OS for >>>> caching. (http://www.varlena.com/GeneralBits/Tidbits/perf.html#shbuf) >>>> >>>> So my question is, the actual limit of the shared buffer will be defined >>>> by OS or the shared_buffer parameter in the postgres.conf to figure whether >>>> a victim dirty page needs to be selected for disk write or not? >>>> >>>> Thanks! >>> >> > > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
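As a concrete illustration of the max_stack_depth point above: the safe ceiling is the kernel stack size limit (shown by ulimit -s) minus a safety margin of a MB or so. A sketch, assuming the common 8MB limit (adjust to what ulimit -s actually reports):
SHOW max_stack_depth;          -- what the server currently uses
SET max_stack_depth = '7MB';   -- superuser only; or set it in postgresql.conf and reload
-- values above the kernel limit risk crashing backends on deeply recursive queries or functions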
Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening
2011/6/16 Alexander Farber : > Hello, > > I'm still suffering with my Drupal 7.2 site and > PostgreSQL 8.4.8 every evening, for example > right now. I have tried different combinations > for /etc/pgbouncer.ini - for example now I have: > > [databases] > pref = host=/tmp user=pref password=XXX dbname=pref > > [pgbouncer] > logfile = /var/log/pgbouncer.log > pidfile = /var/run/pgbouncer/pgbouncer.pid > ;listen_addr = 127.0.0.1 > listen_port = 6432 > unix_socket_dir = /tmp > > auth_type = md5 > auth_file = /var/lib/pgsql/data/global/pg_auth > > pool_mode = session > you probably want to use transaction mode here, instead of session. -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening
2011/6/19 Alexander Farber : > Hello everyone, > > after the suggestion from this mailing list, > I have installed pgbouncer at my > CentOS 5.6 / 64 bit server and > activated its transaction mode: > > [databases] > pref = host=/tmp user=pref password=XXX dbname=pref > > [pgbouncer] > logfile = /var/log/pgbouncer.log > pidfile = /var/run/pgbouncer/pgbouncer.pid > ;listen_addr = 127.0.0.1 > listen_port = 6432 > unix_socket_dir = /tmp > > auth_type = md5 > auth_file = /var/lib/pgsql/data/global/pg_auth > > pool_mode = transaction > > server_check_delay = 10 > > max_client_conn = 200 > default_pool_size = 20 > > log_connections = 0 > log_disconnections = 0 > log_pooler_errors = 1 > > Now the server stopped crashing even > at peak times and "pg_top -I" only shows > few simultaneous commands active: > > last pid: 13476; load avg: 4.03, 4.02, 4.29; up 2+22:57:32 > 19:37:05 > 16 processes: 3 running, 13 sleeping > CPU states: 67.8% user, 0.0% nice, 0.7% system, 27.0% idle, 4.5% iowait > Memory: 3363M used, 561M free, 374M buffers, 2377M cached > Swap: 7812M free > > PID USERNAME PRI NICE SIZE RES STATE TIME WCPU CPU COMMAND > 13018 postgres 20 0 1173M 179M run 4:08 11.39% 99.99% > postgres: pref pref [local] SELECT > 13144 postgres 18 0 1172M 179M run 3:38 6.11% 84.15% > postgres: pref pref [local] SELECT > 1636 postgres 16 0 1170M 152M run 186:34 4.67% 20.79% > postgres: pref pref [local] SELECT > 12761 postgres 16 0 1173M 180M sleep 3:16 20.22% 5.94% > postgres: pref pref [local] idle > > And in the /var/log/pgbouncer.log shows: > > 2011-06-19 19:28:05.772 3031 LOG Stats: 29 req/s, in 4087 b/s, out > 116615 b/s,query 106024 us > 2011-06-19 19:29:05.772 3031 LOG Stats: 29 req/s, in 3665 b/s, out > 39510 b/s,query 71303 us > 2011-06-19 19:30:05.772 3031 LOG Stats: 32 req/s, in 4318 b/s, out > 90909 b/s,query 115946 us > 2011-06-19 19:31:05.773 3031 LOG Stats: 33 req/s, in 4686 b/s, out > 79397 b/s,query 84436 us > 2011-06-19 19:32:05.774 3031 LOG Stats: 31 req/s, in 4478 b/s, out > 108103 b/s,query 104060 us > > But I have a new (not so bad problem) - > > My php script displaying player stats: > http://preferans.de/user.php?id=OK493430777441 > will sometimes exit with the PDO error: > > SQLSTATE[26000]: Invalid sql statement name: > 7 ERROR: prepared statement > "pdo_stmt_0016" does not exist > > When I reload it, it works ok. > > The SQL statements called by the script are: > > try { > # enable persistent connections and throw exception on any errors > $options = array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, > PDO::ATTR_PERSISTENT => true); > $db = new PDO('pgsql:host=' . DBHOST . ' port=6432; dbname=' . > DBNAME, DBUSER, DBPASS, $options); > > $sth = $db->prepare(' > select first_name, city, avatar, login > logout as online > from pref_users where id=? > '); > $sth->execute(array($id)); > > and so on - a dozen SELECT statements. > > I wonder, if this "prepared statement not found" > problem is caused by transaction mode of pgbouncer > and if there is a way to workaround that? Ooops, I would have been more explicit here! You need to add begin/commit to build a transaction. From PHP::PDO doc: http://www.php.net/manual/en/pdo.begintransaction.php beginTransaction(); /* Change the database schema */ $sth = $dbh->exec("DROP TABLE fruit"); /* Commit the changes */ $dbh->commit(); /* Database connection is now back in autocommit mode */ ?> An alternative can be to use pre_prepare: https://github.com/dimitri/preprepare Please read the README carefully for this one if you intend to use it. 
> > And I can't switch to pgbouncer session mode, > because it will hang at peak time - tried that already. > > Thank you > Alex > > P.S. Here again my specs: > > pgbouncer-1.3.4-1.rhel5 > postgresql-libs-8.4.8-1PGDG.rhel5 > compat-postgresql-libs-4-1PGDG.rhel5 > postgresql-8.4.8-1PGDG.rhel5 > postgresql-server-8.4.8-1PGDG.rhel5 > postgresql-devel-8.4.8-1PGDG.rhel5 > php53-pgsql-5.3.3-1.el5_6.1 > php53-pdo-5.3.3-1.el5_6.1 > php53-5.3.3-1.el5_6.1 > > I have 4GB RAM and postgresql.conf contains: > max_connections = 50 > shared_buffers = 1024MB > #listen_addresses = 'localhost' (i.e. unix socket only) > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening
2011/6/19 Alexander Farber : > Hello Cedric and others, > > On Sun, Jun 19, 2011 at 9:56 PM, Cédric Villemain > wrote: >> 2011/6/19 Alexander Farber : >>> [pgbouncer] >>> logfile = /var/log/pgbouncer.log >>> pidfile = /var/run/pgbouncer/pgbouncer.pid >>> listen_port = 6432 >>> unix_socket_dir = /tmp >>> auth_type = md5 >>> auth_file = /var/lib/pgsql/data/global/pg_auth >>> pool_mode = transaction >>> server_check_delay = 10 >>> max_client_conn = 200 >>> default_pool_size = 20 > >>> My php script displaying player stats: >>> http://preferans.de/user.php?id=OK493430777441 >>> will sometimes exit with the PDO error: >>> >>> SQLSTATE[26000]: Invalid sql statement name: >>> 7 ERROR: prepared statement >>> "pdo_stmt_0016" does not exist >>> >>> try { >>> # enable persistent connections and throw exception on any errors >>> $options = array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION, >>> PDO::ATTR_PERSISTENT => true); >>> $db = new PDO('pgsql:host=' . DBHOST . ' port=6432; dbname=' . >>> DBNAME, DBUSER, DBPASS, $options); >>> You have your first request here : >>> $sth = $db->prepare(' >>> select first_name, city, avatar, login > logout as online >>> from pref_users where id=? >>> '); then the second: >>> $sth->execute(array($id)); in auto-commit, each request/transaction will probably be affected to a different connection with pgbouncer in mode transaction. So you need to have a begin/commit before/after them to be sure everything is executed together. >>> >>> and so on - a dozen SELECT statements. >>> >> Ooops, I would have been more explicit here! >> You need to add begin/commit to build a transaction. >> From PHP::PDO doc: >> http://www.php.net/manual/en/pdo.begintransaction.php >> > /* Begin a transaction, turning off autocommit */ >> $dbh->beginTransaction(); >> >> /* Change the database schema */ >> $sth = $dbh->exec("DROP TABLE fruit"); >> >> /* Commit the changes */ >> $dbh->commit(); >> >> /* Database connection is now back in autocommit mode */ >> ?> >> >> An alternative can be to use pre_prepare: >> https://github.com/dimitri/preprepare >> >> Please read the README carefully for this one if you intend to use it. > > why add a begin/commit if I only > have SELECT statements > there (in the default mode) and > the data isn't critical to me > (just some player statistics and > notes by other players - i.e. > a statistic or note is ok to be lost > occasionally)? > > Also I've changed my PHP-script > to non-persistent connections: > > $options = array(PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION); > $db = new PDO(sprintf('pgsql:host=%s port=%u; dbname=%s', > DBHOST, DBPORT, DBNAME), DBUSER, DBPASS, $options); > > and restarted Apache 2.2.3, > but that error is still there: > > SQLSTATE[26000]: Invalid sql > statement name: 7 ERROR: prepared > statement "pdo_stmt_000a" does not exist > > Regards > Alex > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL 8.4.8 bringing my website down every evening
2011/6/21 Lincoln Yeoh : > At 04:13 AM 6/20/2011, Alexander Farber wrote: >> >> why add a begin/commit if I only >> have SELECT statements >> there (in the default mode) and >> the data isn't critical to me >> (just some player statistics and >> notes by other players - i.e. >> a statistic or note is ok to be lost >> occasionally)? > > If you're not intending to write anything to the database you could try do a > rollback instead of commit. > > I'm not familiar with your system, but if you are getting > >> SQLSTATE[25P02]: In failed sql transaction: >> 7 ERROR: current transaction is aborted, >> commands ignored until end of transaction block I understood that the OP put a begin/commit around *each* request (so one around prepare, and one around exec) Php documentation contains useful information with examples. Alexander, I suggest you to just add a begin before the 'prepare' and a commit after the 'exec', not between each. > > Maybe you can do a rollback, begin, followed by your sql statements then > rollback again. > > You would get warnings for a rollback followed by rollback, but they might > not be fatal (not sure about your config). > > There might be a problem with one of the SQL queries and that's why it > cannot be committed. > > Alternatively the program logic might be bypassing a COMMIT. A BEGIN > followed by a BEGIN would cause an "already in transaction" error which > would cause the "transaction is aborted" problem. this is not correct cedric=# begin ; BEGIN cedric=# begin ; ATTENTION: une transaction est déjà en cours BEGIN cedric=# select 1; ?column? -- 1 (1 ligne) cedric=# commit; COMMIT -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pgsql error
2011/7/25 Mcleod, John : > Hello all, > > I'm new to pgsql and I'm taking over for a project manager that left the > company. > > I'm receiving the following error… > > > > CONTEXT: writing block 614 of relation 394198/412175 > > WARNING: could not write block 614 of 394198/412175 > > DETAIL: Multiple failures --- write error may be permanent. Maybe your disk is full? > > ERROR: xlog flush request 0/34D53680 is not satisfied --- flushed only to > 0/34CD1EB0 > > > > Is there anyone who has seen this and can help me. > > > > Thank you. > > > > John W. McLeod > > > > -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Problem with planner
2011/8/8 hubert depesz lubaczewski : > Hi, > we have 8.3.11 installation on client site, with table, which looks like > this: > > $ \d objects > Table "public.objects" > Column | Type | > Modifiers > -+--+--- > ... > state | text | > ... > ending_tsz | timestamp with time zone | default (now() + '4 > mons'::interval) > ... > Indexes: > "objects_stat_user_id_creation_tsz" btree (state, user_id, creation_tsz) > "objects_ending_tsz_active" btree (ending_tsz) WHERE state = 'active'::text > "objects_ending_tsz_idx" btree (ending_tsz) > ... > > > and we have a query: > select count(*) from objects where state='active'::text and ending_tsz <= ( > select now() - '1 day'::interval ); > > Normally this query has been getting plan, using > objects_ending_tsz_active, which is sane and fast. > > But today, without any sensible reason, it switched to: > > QUERY PLAN > - > Aggregate (cost=6719810.62..6719810.63 rows=1 width=0) > InitPlan > -> Result (cost=0.00..0.01 rows=1 width=0) > -> Bitmap Heap Scan on objects (cost=1289719.58..6711422.56 rows=3355219 > width=0) > Recheck Cond: ((state = 'active'::text) AND (ending_tsz <= $0)) > -> BitmapAnd (cost=1289719.58..1289719.58 rows=3355219 width=0) > -> Bitmap Index Scan on objects_stat_user_id_creation_tsz > (cost=0.00..334318.95 rows=10065657 width=0) > Index Cond: (state = 'active'::text) > -> Bitmap Index Scan on objects_ending_tsz_idx > (cost=0.00..953722.77 rows=24986738 width=0) > Index Cond: (ending_tsz <= $0) > (10 rows) > > running analyze objects 2 times in a row fixed the issue, but hour later > - the problem came back. > > what can be wrong? The plan turn bad without any new ANALYZE, right ? does the table increase more quickly now than before ? is it now way larger than before ? Also, do you have an explain with the 'good' plan ? -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Problem with planner
2011/8/9 hubert depesz lubaczewski : > On Tue, Aug 09, 2011 at 04:08:39PM -0400, Tom Lane wrote: >> I suppose what's going on here is that the "state" and "ending_tsz" >> columns are highly correlated, such that there are lots of 'active' >> items but hardly any of them ended more than a day ago? If so, > > yes, that's correct. > >> you're going to have to rethink the representation somehow to get >> good results, because there's no way the planner will see this until >> we have cross-column stats in some form. >> >> The least invasive fix that I can think of offhand is to set up an >> index (non-partial) on the expression >> >> case when state = 'active' then ending_tsz else null end >> >> and phrase the query as >> >> WHERE (case when state = 'active' then ending_tsz else null end) <= >> (now() - '1 day'::interval) >> >> This should result in condensing the stats about active items' >> ending_tsz into a format the planner can deal with, assuming >> you're running a PG version that will keep and use stats on >> expression indexes. > > it's 8.3.11. > I solved the problem by adding "enable_bitmapscan = false" (and keeping > the query in original format, with subselect) which caused the plan to > be ok. > > but I'm much more interested to understand why pg chooses *not* to use > index which is tailored specifically for the query - it has exactly > matching where clause, and it indexes the column that we use for > comparison. > > the thing is - i solved the problem for now. I might add new index the > way you suggest, and it might help. but it's is very unnerving that > postgresql will just choose to ignore specially made index, perfectly > matching the criteria in query. > > since I can't test it - is there any chance (Cédric suggested something > like this) that some newer version has more logic to try harder to use > best index? I wondered if it is the same logic to choose between bitmap and indexscan in both 8.3 and HEAD. It looks like it is (except that now you can put the not-wanted index on another tablepsace and increase the cost of accessing it, which is another no-so-pretty way to workaround the issue). > > Best regards, > > depesz > > -- > The best thing about modern society is how easy it is to avoid contact with > it. > http://depesz.com/ > -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
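To make the suggested workaround concrete against the objects table from this thread (index name arbitrary; 8.3 should already keep statistics on expression indexes):
CREATE INDEX objects_active_ending_expr ON objects
    ((CASE WHEN state = 'active' THEN ending_tsz ELSE NULL END));
ANALYZE objects;
SELECT count(*)
FROM objects
WHERE (CASE WHEN state = 'active' THEN ending_tsz ELSE NULL END)
      <= (now() - '1 day'::interval);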
Re: [GENERAL] How to find freak UTF-8 character?
2011/10/2 Leif Biberg Kristensen : > On Sunday 2. October 2011 15.53.50 pasman pasmański wrote: >> Its simple to remove strange chars with regex_replace. > > True, but first you have to know how to represent a «strange char» in > Postgresql :P > > It isn't all that obvious, and it's difficult to search for the solution. I > tried a lot of different search terms in Google, and none of them turned up > anything near what I needed. you may have missed this one: http://tapoueh.org/blog/2010/02/23-getting-out-of-sql_ascii-part-2.html > > regards, Leif > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Never-idle autovacuum, and does (auto)vacuuming fill the OS cache ?
zed automatically; > have I already said that I long to upgrade ? :p) > >> > * Does vacuuming fill the OS's disk cache, and is it an issue if it >> > does ? >> Well, it _affects_ the OS's disk cache. Whether it fills it is >> controlled by the cache algorithms and the amount of memory you have >> devoted to cache. Every time you touch the disk, you potentially >> alter the cache in favour of what you just saw. >> >> In the above artificial examples, the vacuums that run "after everyone >> went home" will almost certainly end up taking over the cache, because >> there's no other activity to keep other things in the disk cache. In >> the second example, though, with a lot of read-only activity all the >> time, the things that are most popular are likely to remain in a >> (modern) disk cache most of the time because they're called so often >> that the vacuumed page doesn't end up being enough traffic to cause an >> eviction (or, anyway, to evict for any significant time). > > Ok, so say my churn happens only in the last 10 minutes of data and readonly > queries only look at the last 24 hours of data, if vacuuming is triggered > every 48 hours, that's 24 hours of data that will potentially get back into > the cache with no benefit (or if I'm not mistaken, with PG =< 8.3 it's much > more than 24 hours). Pity. Is there a counter-example where there is a caching > benefit to the current behaviour ? If not, that might be a low-hanging fruit > to improve postgres performance. Not a direct answer, but some items after reading the thread: * 8.4 comes with the visibility map, which is nice to reduce I/O usage (and no more trouble with the FSM GUCs) * the PostgreSQL cache has its own, DB-oriented logic. * the operating system cache is smart enough not to waste all of your cache when reading one file sequentially. * you may also increase the number of autovacuum workers (depending on your I/O and CPU) * it is better to change the autovacuum settings via cron than to run vacuum manually. If you wonder, you can use pgfincore to track your OS cache usage per table and index, and pg_buffercache for the PostgreSQL cache. Note that pgfincore is lock free, but pg_buffercache may impact your performance (it is still interesting to use it to check how your shared buffers are used, and it can help optimising your shared_buffers size) -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
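A sketch of the pg_buffercache side of that (install the contrib module in the database first; the relfilenode join is the simple form and ignores a few corner cases such as mapped catalogs):
SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON c.relfilenode = b.relfilenode
JOIN pg_database d ON d.oid = b.reldatabase AND d.datname = current_database()
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;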
Re: [GENERAL] many sql file and one transaction
2011/10/18 salah jubeh : > Hello, > Thanks for the reply. > I considered cat as an option but I did not go for it, because of the > number of sql files I have is large which makes the code not readable > The second thing, which is more important is because I have some advantages > with using -f such as the line number which causing the error. you can do : cat module1.sql \ module2.sql \ module_etc.sql \ | psql -f - > Regards > > > > > > > From: "andr...@a-kretschmer.de" > To: salah jubeh > Sent: Tuesday, October 18, 2011 2:23 PM > Subject: Re: [GENERAL] many sql file and one transaction > > > Zitat von salah jubeh : > >> Hello, >> >> >> I have many SQL script files to update schema, delete data, unit >> test etc. I want to run all the files in one transaction using >> shell script to ease the installation procedure. I can do that from >> the psql client by using the \i option >> >> >> BEGIN; >> >> \i / .../ module1.sql >> >> \i / .../ module2.sql >> >> \i / .../ module_etc.sql >> COMMIT; >> >> >> Is there a way to do that using psql command shell script. >> >> >> I.E. >> >> I want to run the following in one transaction, >> >> psql -f module1.sql >> >> psql -f module2.sql >> >> psql -f module_etc.sql > > > cat module1.sql module2.sql module_etc.sql | psql > > > Regards, Andreas > > > > -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
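One detail worth adding to the \i approach quoted above: with ON_ERROR_STOP, psql aborts at the first error (and exits non-zero) instead of ploughing on through a long series of "current transaction is aborted" messages, while still reporting the file name and line number of the failure. A sketch:
\set ON_ERROR_STOP on
BEGIN;
\i /path/to/module1.sql
\i /path/to/module2.sql
\i /path/to/module_etc.sql
COMMIT;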
Re: [GENERAL] help understanding the bitmap heap scan costs
Le lundi 21 mai 2012 15:35:55, Luca Ferrari a écrit : > Hi all, > I don't fully understand how is the cost of a bitmap heap scan > computed. For instance when the explain output node is similar to the > following: > > Bitmap Heap Scan on test (cost=17376.49..48595.93 rows=566707 width=6) >Recheck Cond: ((text1 = 'A'::text) OR (text1 = 'C'::text)) >Filter: (num1 > 1) > > how is the cost of the node (48595.93 - 17376.49) computed? I think it > should be something like: > (reltuples * ( index_filtering_factor_A + index_filtering_factor_B) ) > * (cpu_tuple_cost + cpu_operator_cost) > + (reltuples * ( index_filtering_factor_A + index_filtering_factor_B) > ) / tuples_per_pages > > but this does not equal the optimizer cost, so I guess I'm doing > something wrong. Suggestions? The per-page cost is interpolated between random_page_cost and seq_page_cost; it depends on the number of pages fetched and the total number of relpages. Read ./src/backend/optimizer/path/costsize.c /* * For small numbers of pages we should charge spc_random_page_cost * apiece, while if nearly all the table's pages are being read, it's more * appropriate to charge spc_seq_page_cost apiece. The effect is * nonlinear, too. For lack of a better idea, interpolate like this to * determine the cost per page. */ if (pages_fetched >= 2.0) cost_per_page = spc_random_page_cost - (spc_random_page_cost - spc_seq_page_cost) * sqrt(pages_fetched / T); else cost_per_page = spc_random_page_cost; -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation
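A worked example of that interpolation with the default random_page_cost = 4.0 and seq_page_cost = 1.0 (the page counts are made up):
-- cost_per_page = random_page_cost - (random_page_cost - seq_page_cost) * sqrt(pages_fetched / T)
SELECT 4.0 - (4.0 - 1.0) * sqrt(10000.0 / 40000.0) AS cost_per_page;  -- fetching 10000 of 40000 pages -> 2.5
-- so the heap part of the bitmap scan costs roughly 10000 * 2.5 page units,
-- plus cpu_tuple_cost and cpu_operator_cost work per tuple for the recheck and filter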
Re: [GENERAL] Global Named Prepared Statements
Le lundi 21 mai 2012 16:08:27, Merlin Moncure a écrit : > On Mon, May 21, 2012 at 8:55 AM, Samba wrote: > > If Stored Procedures are equivalent to prepared statements [ as far as > > preparing the query plan is concerned], then what i'm looking for is > > perhaps a Global Prepared Statements at the client/driver side. > > > > Specifically, It wold be good if the JDBC driver prepares all the queries > > for invoking stored procedures at once per JVM so that each connection > > need not incur the cost of preparing [parsing and storing] those queries > > per connection. > > > > Thus we can put all the queries [stored procedure calls] at a single > > place, and prepare those queries during boot of the server [or > > deployment of the web application], and then execute those queries > > endless times by closing just the resultset object while keeping the > > statement open for ever. > > > > I know this is not form to discuss the JDBC related questions but put my > > thoughts here to complete the question i raised. If folks think this idea > > is valid then i will take it up with the JDBC Driver team. > > Well, there is a client side component to statement preparation which > the JDBC driver also does. > > There is a reason why there are no global plans in postgres: it > complicates everything in the sense that there you have to deal with > shared memory, locking. and scope/lifetime issues. Even though it can > be a big reduction in memory consumption you're on the wrong side of > the tradeoff for most cases. If you want to leverage server side > objects with >1 client connections I strongly advise looking at a > connection pooler -- not the lame client side pooling solutions you > typically see with the java stack -- but something like pgbouncer. > This amortizes memory costs of server side plans. pgbouncer is > mostly compatible with JDBC; you have to disable automatic statement > preparation. and there is preprepare to help with prepared_statement and pgbouncer: https://github.com/dimitri/preprepare -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation signature.asc Description: This is a digitally signed message part.
Re: [GENERAL] insert ... returning in plpgsql
Le mardi 2 octobre 2012 15:01:08, Willy-Bas Loos a écrit : > Hi, > (postgres 9.1) > I was doing something like this in a plpgsql function, but i got a Syntax > Error. > > t_var:=(insert into table1(field2) values ('x') returning field1); > > Is there no support for using RETURNING in insert, update, delete queries > to fill a variable in plpgsql? Use the following : INSERT ... RETURNING expressions INTO [STRICT] target; UPDATE ... RETURNING expressions INTO [STRICT] target; DELETE ... RETURNING expressions INTO [STRICT] target; http://www.postgresql.org/docs/9.1/static/plpgsql-statements.html Else if you have multiple rows do something like: FOR my_update IN UPDATE ... RETURNING * LOOP -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation signature.asc Description: This is a digitally signed message part.
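For completeness, a minimal plpgsql sketch of the single-row form, using the table1/field1/field2 names from the question and assuming field1 is a bigint key:
CREATE OR REPLACE FUNCTION insert_one(p_val text) RETURNS bigint AS $$
DECLARE
    t_var bigint;
BEGIN
    INSERT INTO table1(field2) VALUES (p_val)
    RETURNING field1 INTO STRICT t_var;  -- STRICT errors out unless exactly one row comes back
    RETURN t_var;
END;
$$ LANGUAGE plpgsql;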
Re: [GENERAL] Plproxy with returns table() make PG segfault
678657 in ExecScanFetch (recheckMtd=, > accessMtd=, node=) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/executor/execScan.c:82 > #11 ExecScan (node=0x7f26eb7e7ee0, accessMtd=, > recheckMtd=) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/executor/execScan.c:132 > #12 0x7f26ea670da8 in ExecProcNode (node=0x7f26eb7e7ee0) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/executor/execProcnode.c: > 416 #13 0x7f26ea66fbf2 in ExecutePlan (dest=, > direction=, numberTuples=, > sendTuples=, operation=, > planstate=, estate=) > at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/executor/execMain.c:1440 > #14 standard_ExecutorRun (queryDesc=0x7f26eb7a1790, > direction=NoMovementScanDirection, count=0) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/executor/execMain.c:314 > #15 0x7f26ea7461d7 in PortalRunSelect (portal=0x7f26eb7dbd90, > forward=, count=0, dest=0x7f26eb6d87a0) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/tcop/pquery.c:943 > #16 0x7f26ea747640 in PortalRun (portal=, > count=, isTopLevel=, dest=, > altdest=, completionTag=) > at /opt/src/postgresql-9.1-9.1.6/build/../src/backend/tcop/pquery.c:787 > #17 0x7f26ea7438a3 in exec_simple_query (query_string= out>) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/tcop/postgres.c:1020 > #18 0x7f26ea744965 in PostgresMain (argc=, > argv=, username=) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/tcop/postgres.c:3968 > #19 0x7f26ea702da3 in BackendRun (port=) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/postmaster/postmaster.c: > 3617 #20 BackendStartup (port=) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/postmaster/postmaster.c: > 3302 #21 ServerLoop () at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/postmaster/postmaster.c: > 1466 #22 0x7f26ea705861 in PostmasterMain (argc=-345045536, > argv=0x7f26eb6cb200) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/postmaster/postmaster.c: > 1127 #23 0x7f26ea6a3dc3 in main (argc=5, argv=0x7f26eb6cb1e0) at > /opt/src/postgresql-9.1-9.1.6/build/../src/backend/main/main.c:199 > (gdb) quit > > > We can see here that "return setof record" works fine, but "return > table()" crashes. > > Even if we can use the first syntax, I don't think we can accept that. -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation signature.asc Description: This is a digitally signed message part.
Re: [GENERAL] Table Bloat still there after the Vacuum
2010/4/26 akp geek : > Hi All - > I have a table bloated with following details > rows:29431 pages:516039 shouldbe:534 (966.4X) wasted size:4223016960 (3 GB) > * I think this info comes from the check_postgres nagios script. As said in its documentation, this info is not 100% sure: it depends on the last analyze *and* the types of the columns *and* the distribution of data in those columns. I suggest you check pg_stats.avg_width for this table. Also, VACUUM VERBOSE output is good for diagnostics... > I did a vacuum on the database and also I did vacuumdb > full on the table. Still there is no change. Can you please suggest if there > is any other operation that can be done to take care of the issue > Thanks for the help > > Regards -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
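The avg_width check mentioned above can be done directly against pg_stats (replace the schema and table names):
SELECT attname, avg_width, null_frac, n_distinct
FROM pg_stats
WHERE schemaname = 'public' AND tablename = 'your_table'
ORDER BY attname;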
Re: [GENERAL] Table Bloat still there after the Vacuum
2010/4/26 Cédric Villemain : > 2010/4/26 akp geek : >> Hi All - >> I have a table bloated with following details >> rows:29431 pages:516039 shouldbe:534 (966.4X) wasted size:4223016960 (3 GB) >> * > > I think this info comes from the check_postgres nagios script. > As said in its documentation, this info is not 100% sure: it depends on the > last analyze *and* the types of the columns *and* the distribution of data > in those columns. > I suggest you check pg_stats.avg_width for this table. > > Also, VACUUM VERBOSE output is good for diagnostics... and the VACUUM VERBOSE output shows there actually is a problem :-) Check for long-running transactions, "idle in transaction" connections, ... they prevent vacuum from reclaiming dead rows. > >> I did a vacuum on the database and also I did vacuumdb >> full on the table. Still there is no change. Can you please suggest if there >> is any other operation that can be done to take care of the issue >> Thanks for the help >> >> Regards > > > > -- > Cédric Villemain > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
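A quick way to spot those blockers, using the pre-9.2 pg_stat_activity column names (procpid, current_query) that match the 8.x servers discussed here:
SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE current_query = '<IDLE> in transaction'
   OR xact_start < now() - interval '1 hour'
ORDER BY xact_start;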
Re: [GENERAL] Storing many big files in database- should I do it?
2010/4/27 Rod : > Hello, > > I have a web application where users upload/share files. > After file is uploaded it is copied to S3 and all subsequent downloads > are done from there. > So in a file's lifetime it's accessed only twice- when created and > when copied to S3. > > Files are documents, of different size from few kilobytes to 200 > Megabytes. Number of files: thousands to hundreds of thousands. > > My dilemma is - Should I store files in PGSQL database or store in > filesystem and keep only metadata in database? > > I see the possible cons of using PGSQL as storage: > - more network bandwidth required comparing to access NFS-mounted filesystem ? > - if database becomes corrupt you can't recover individual files > - you can't backup live database unless you install complicated > replication add-ons > - more CPU required to store/retrieve files (comparing to filesystem access) > - size overhead, e.g. storing 1000 bytes will take 1000 bytes in > database + 100 bytes for db metadata, index, etc. with lot of files > this will be a lot of overhead. > > Are these concerns valid? yes > Anyone had this kind of design problem and how did you solve it? store your files in a filesystem, and keep the path to the file (plus metadata, acl, etc...) in database. > > Thanks. > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Open Source BI Tool
2010/4/27 akp geek : > Hi all - > > I would like to know, if there is a open source BI tool for > creating reports against Postgres database ? appreciate your help Pentaho has some good tools: http://www.pentaho.com/ > > Regards > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Storing many big files in database- should I do it?
2010/4/28 Adrian Klaver : > On Tuesday 27 April 2010 5:45:43 pm Anthony wrote: >> On Tue, Apr 27, 2010 at 5:17 AM, Cédric Villemain < >> >> cedric.villemain.deb...@gmail.com> wrote: >> > store your files in a filesystem, and keep the path to the file (plus >> > metadata, acl, etc...) in database. >> >> What type of filesystem is good for this? A filesystem with support for >> storing tens of thousands of files in a single directory, or should one >> play the 41/56/34/41563489.ext game? I'd prefer to go with XFS or ext{3-4}, in both cases with a path scheme. Your path scheme will let you handle the scalability of your uploads (so the first increment is the first directory), something like 1/2/3/4/foo.file, 2/2/3/4/bar.file, etc... You might explore a hash function, or something that splits a SHA1 (or other) sum of the file to get the path. >> >> Are there any open source systems which handle keeping a filesystem and >> database in sync for this purpose, or is it a wheel that keeps getting >> reinvented? >> >> I know "store your files in a filesystem" is the best long-term solution. >> But it's just so much easier to just throw everything in the database. > > In the for what it is worth department check out this Wiki: > http://sourceforge.net/apps/mediawiki/fuse/index.php?title=DatabaseFileSystems and postgres fuse also :-D > > -- > Adrian Klaver > adrian.kla...@gmail.com > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
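A sketch of the hash-based layout in SQL, splitting an md5 of the file name into two directory levels (purely illustrative; any stable split of a digest works):
SELECT substring(h from 1 for 2) || '/' ||
       substring(h from 3 for 2) || '/' ||
       'foo.file' AS storage_path
FROM (SELECT md5('foo.file') AS h) AS s;
-- yields something like 'ac/bd/foo.file'; store that path in the table next to the metadata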
Re: [SPAM] Re: [GENERAL] Best way to replicate to large number of nodes
2010/4/22 Brian Peschel : > > On 04/22/2010 10:12 AM, Ben Chobot wrote: >> >> On Apr 21, 2010, at 1:41 PM, Brian Peschel wrote: >> >> >>> >>> I have a replication problem I am hoping someone has come across before >>> and can provide a few ideas. >>> >>> I am looking at a configuration of on 'writable' node and anywhere from >>> 10 to 300 'read-only' nodes. Almost all of these nodes will be across a WAN >>> from the writable node (some over slow VPN links too). I am looking for a >>> way to replicate as quickly as possible from the writable node to all the >>> read-only nodes. I can pretty much guarantee the read-only nodes will never >>> become master nodes. Also, the updates to the writable node are bunched and >>> at known times (ie only updated when I want it updated, not constant >>> updates), but when changes occur, there are a lot of them at once. >>> >> >> Two things you didn't address are the acceptable latency of keeping the >> read-only nodes in sync with the master - can they be different for a day? A >> minute? Do you need things to stay synchronous? Also, how big is your >> dataset? A simple pg_dump and some hot scp action after you batched updates >> might be able to solve your problem. > > Latency is important. I would say 10 to 15 minutes max, but the shorter the > better. I don't have an exact size, but I believe the entire DB is about 10 > gig. > > We had an idea of creating our apps write the SQL statements to a file, > rather than using an ODBC drive to directly change the DBs. Then we could > scp/rsync the files to the remote machines and execute them there. This > just seems like a very manual process though. You need to have a look at PgQ. (in short, skytools will do exactly what you want if I understand correctly your requirments, londiste being somewhat like slony) > > - Brian > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Inheritance efficiency
2010/5/1 John R Pierce : > Greg Smith wrote: >> >> Enterprise grade doesn't mean anything. Partitioning designs that require >> thousands of child tables to work right are fundamentally misdesigned >> anyway, so there is no reason for any of the contributors to the project to >> work on improving support for them. There are far too many obvious >> improvements that could be made to PostgreSQL, ones that will benefit vastly >> more people, to divert resources toward something you shouldn't be dong >> anyway like that. >> > > my sql developer, who's been doing oracle for 15+ years, says postgres' > partitioning is flawed from his perspective because if you have a prepared > statement like.. > > SELECT fields FROM partitioned_table WHERE primarykey = $1; > > it doesn't optimize this very well and ends up looking at all the sub-table > indicies. ir you instead execute the statement > > SELECT fields FROM parritioned_table WHERE primarykey = constant; > > he says the planner will go straight to the correct partition. > > i haven't confirmed this for myself. It has nothing to do with partitionning but how the planner works. Even if the use case remain correct > > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] About the limit of storage
2010/5/2 Sean : > I need to have a PostgreSQL server with 10TB disk. I created a DB which > contains three tables. > > I understand that PostgreSQL has the following hard limit: > size per table: 32TB > size per DB: none > I wonder what I should do if the sum of my data exceed 10TB. It does not > exceed any limit that PostgreSQL system imposes, but it does exceed my > server's storage. Can anyone give a hint? Then you can add more storage and create a 'tablespace' which points to it. http://www.postgresql.org/docs/current/static/manage-ag-tablespaces.html > Thanks, > Sean > > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
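A minimal sketch, assuming the extra storage is mounted at /mnt/storage2 and writable by the postgres OS user:
CREATE TABLESPACE bigdata LOCATION '/mnt/storage2/pgdata';
CREATE TABLE big_stuff (id bigserial PRIMARY KEY, payload bytea) TABLESPACE bigdata;
ALTER TABLE existing_table SET TABLESPACE bigdata;  -- moving an existing table rewrites it
-- indexes are moved separately with ALTER INDEX ... SET TABLESPACE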
Re: [GENERAL] (psuedo) random serial for PK or at least UNIQUE NOT NULL?
2010/5/5 Vincenzo Romano : > 2010/5/4 Alvaro Herrera : >> Vincenzo Romano wrote: >>> Hi all. >>> >>> I'm willing to change an BIGINT ID column (actually a SERIAL8) with a >>> BIGINT whose valules are (pseudo)random. >>> The main objective is to avoid guessability. >>> I whish I could also use it as the PK (as it's now) but that's not >>> really important now. >>> Any hint? >> >> http://wiki.postgresql.org/wiki/Pseudo_encrypt > > That's a nice starting point. > I should apply that function twice for each 32-bit chunk of a > 64-bit BIGINT and then merge the results into a single BIGINT ... > Boring but doable! Hum, not sure. If I understand correctly, you won't be able to get the original int64 from the generated one. Better rewrite the function for 2^64 isn't it ? > > -- > Vincenzo Romano > NotOrAnd Information Technologies > NON QVIETIS MARIBVS NAVTA PERITVS > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] (psuedo) random serial for PK or at least UNIQUE NOT NULL?
2010/5/5 Vincenzo Romano : >>>>>>> http://wiki.postgresql.org/wiki/Pseudo_encrypt >>> That's a nice starting point. >>> I should apply that function twice for each 32-bit chunk of a >>> 64-bit BIGINT and then merge the results into a single BIGINT ... >>> Boring but doable! >> >> Hum, not sure. If I understand correctly, you won't be able to get the >> original int64 from the generated one. > > Why should I bother? Dunno, but it was in the original spec. > >> Better rewrite the function for 2^64 isn't it ? > > Sure!. > > -- > Vincenzo Romano > NotOrAnd Information Technologies > NON QVIETIS MARIBVS NAVTA PERITVS > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] question about unique indexes
2010/5/10 Jonathan Vanasco : > -- running pg 8.4 > > i have a table defining geographic locations > > id > lat > long > country_id not null > state_id > city_id > postal_code_id > > i was given a unique index on > (country_id, state_id, city_id, postal_code_id) > > the unique index isn't working as i'd expect it to. i was hoping someone > could explain why: > > in the two records below, only country_id and state_id are assigned ( aside > from the serial ) > > geographic_location_id | coordinates_latitude | coordinates_longitude | > country_id | state_id | city_id | postal_code_id > +--+---++--+-+ > 312 | | | > 233 | 65 | | > 443 | | | > 233 | 65 | | > > i was under the expectation that the unique constraint would apply in this > place. > > from the docs: > When an index is declared unique, multiple table rows with equal > indexed values are not allowed. Null values are not considered equal. A > multicolumn unique index will only reject cases where all indexed columns > are equal in multiple rows. NULLs are not considered equal, so you can have a UNIQUE index on a column and still store NULL in it multiple times. You might want to explicitly add a NOT NULL constraint to your columns here. > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
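A small demonstration of the behaviour and the possible fixes (a stripped-down version of the table in the question):
CREATE TABLE geo (
    id serial PRIMARY KEY,
    country_id int NOT NULL,
    state_id int,
    city_id int,
    postal_code_id int
);
CREATE UNIQUE INDEX geo_uniq ON geo (country_id, state_id, city_id, postal_code_id);
INSERT INTO geo (country_id, state_id) VALUES (233, 65);
INSERT INTO geo (country_id, state_id) VALUES (233, 65);  -- also succeeds: the NULL columns are never "equal"
-- one fix: declare the columns NOT NULL and use a sentinel such as 0 for "no value";
-- another: a unique index over COALESCEd expressions
CREATE UNIQUE INDEX geo_uniq_c ON geo
    (country_id, COALESCE(state_id, 0), COALESCE(city_id, 0), COALESCE(postal_code_id, 0));
-- (this last CREATE fails here until the duplicate row above is removed)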
Re: [GENERAL] effective_io_concurrency details
2010/5/26 pasman pasmański : > Hello. > > I have 2 questions: > Is effective_io_concurrency working on WinXP sp2 ? No. > and what is the difference between effective_io_concurrency = 0 > and effective_io_concurrency = 1 0 disables prefetching; 1 allows a short prefetch window. > > > Postgres 8.4.4 > > pasman > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
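For reference, the setting only influences bitmap heap scan prefetching and is a no-op on platforms without posix_fadvise (hence the Windows answer above); it can be tried per session:
SHOW effective_io_concurrency;
SET effective_io_concurrency = 4;  -- roughly the number of drives that can be kept busy concurrently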
Re: [GENERAL] pg/linux How much swap relative to physical memory is needed?
2010/6/10 Alan Hodgson : > On Thursday, June 10, 2010, Kelly Burkhart wrote: >> Should I be concerned? >> >> Thanks, > > The default Linux kernel settings will tend to swap stuff out to make more > memory available for filesystem cache. You can lower the value set in > /proc/sys/vm/swappiness to decrease this tendency. I don't think it's > hurting anything, though; it just has pages it thinks can be usefully > swapped out. you can read /usr/src/linux/Documentation/vm/overcommit-account to get more info. But depending on your OOM-killer / overcommit mode, the size of the swap is used to determine the total committable memory. So with huge RAM and a swap that is too small, in overcommit mode 2 you might not be able to use all of your available memory. > > -- > "No animals were harmed in the recording of this episode. We tried but that > damn monkey was just too fast." > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Inserting additional data into pg_statistics
2010/6/21 Teodor Macicas : > --- > Hi Tom, Modifying the pg_statistics is not a good idea for most > practical purposes. The modification, however, becomes a necessity to > implement automatic physical design techniques. We are developing an > automatic physical designer for Postgres. The designer will add features > that most commercial systems provide right now, such as automatically > selecting indexes for queries. My colleagues recently demonstrated a > prototype version of the system at SIGMOD, and the demo description can be > found at http://www.cs.cmu.edu/~ddash/parinda-sigmod.pdf > > We want to extend the system by doing the physical design outside the > production database, and hence need to replicate the pg_statistics of the > production database in another standing database. This is the reason, we > would like to move the pg_statistics across the database, and both direct > sql/pg_dump-restore mechanisms fail us. If not already there, watch how to hook the statistics when they are used/requested in the query planner, not modifying system catalog. So you can provide false stats to the plannerstats that you can store in another table, not in the pg_catalog. It looks to me that you are doing something similar to that : http://www.pgcon.org/2010/schedule/events/233.en.html (your REF 7) but with the 'offline' option, right ? May I suggest you to read on 'segment exclusion' idea in the postgresql wiki ? http://wiki.postgresql.org/wiki/Segment_Exclusion sometime I am pretty sure the hooks for stats are not there, but ... if you provide a (good) way to hook them without performance impact when the hook is not used, that should be good for more than only your project. > > -Dash Debabrata > > > Tom Lane wrote: >> >> Teodor Macicas writes: >> >>> >>> Why I can't ? And for my purpose is not a bad idea. I mean, I have to do >>> this and somehow I should find a solution. >>> >> >> >>> >>> In order to use ANALYZE I need the same data on 2nd machine, but the data >>> is quite large and the only information I need are the statistics from >>> pg_statistic. >>> >> >> Er, if you haven't got the data on the second machine, then you *don't* >> need or want that stuff in its pg_statistic. It won't do you any good >> to have incorrect information in there. >> >> regards, tom lane >> > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] How can I use parameters in plain sql
2010/9/3 Merlin Moncure : > On Fri, Sep 3, 2010 at 3:47 PM, John Adams wrote: >>> psql has some client side manged variables, and you can of course use >>> pl/pgsql. >> Do you mean I should use a pl/pgsql stored procedure or do I have to somehow >> mark the sql as pl/pgsql? How? >> Because in sql server it is all the same i.e. plain sql=tsql > > pl/pgsql is only used in functions: > > create function foo(i int) returns setof mytable as > $$ > begin > return query select * from mytable where id = i; > end; > $$ language plpgsql; > > then > > > plpgsql is reserved for fancy things that are tricky to do with plain > sql. it's got loops, robust error handling, etc. > > http://www.postgresql.org/docs/8.4/static/plpgsql.html > Also, in PostgreSQL 9.0 you can have anonymous code blocks with 'DO'. http://www.postgresql.org/docs/9.0/static/sql-do.html -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
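A minimal example of the 9.0 DO syntax (mytable is a made-up name; the block runs as plpgsql by default):
DO $$
DECLARE
    n integer;
BEGIN
    SELECT count(*) INTO n FROM mytable;
    RAISE NOTICE 'mytable has % rows', n;
END;
$$;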
Re: [GENERAL] PG website testing
2010/10/4 Thom Brown : > Hi all, > > We're currently testing a new javascript change on the PostgreSQL > docs. This is to make sure monospaced fonts still appear at a > reasonable size between browsers. I'd appreciate it if some of you > could do some browser testing. http://magnus.webdev.postgresql.org/ > Only docs for 8.3 and below are available. Please also check the main > site for font issues, and post any issues you find here. looks fine with iceweasel and konqueror. > > Thanks :) > > -- > Thom Brown > Twitter: @darkixion > IRC (freenode): dark_ixion > Registered Linux user: #516935 > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Is there a logical reason for 25P02?
2010/10/5 Konstantin Izmailov : > Let me ask another question related to the INSERT as well. I'm running > periodic import into Postgres and sometimes the application tries to insert > a duplicate row which, as expected, results in "integrity violation" error. > This aborts entire transaction (which is huge). I can use "SAVEPOINT > ...;INSERT ...;RELEASE SAVEPOINT ..." but I'm concerned with potential > performance hit. I haven't had time to benchmark the difference in > performance, so could you please suggest if the difference will be > noticeable. Is there a better approach? Is it possible to customize INSERT > behavior to not abort transaction due to an integrity violation? Would it be > possible to alter COPY command behavior as well (to gracefully continue > after integrity violation)? you probably want pgloader : http://pgloader.projects.postgresql.org/ -- Cédric Villemain 2ndQuadrant http://2ndQuadrant.fr/ PostgreSQL : Expertise, Formation et Support -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
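For reference, the SAVEPOINT pattern the question describes looks like the sketch below inside the big transaction (target_table and the values are made up); each savepoint is a subtransaction, which is the per-row overhead being weighed against pgloader's approach:
BEGIN;
-- ... earlier rows ...
SAVEPOINT row_n;
INSERT INTO target_table (id, val) VALUES (42, 'x');
RELEASE SAVEPOINT row_n;              -- on success
-- on a unique violation the client issues instead:
--   ROLLBACK TO SAVEPOINT row_n;
-- and continues with the next row; only that one row is lost, not the whole transaction
COMMIT;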
Re: [GENERAL] storing windows path strings
2010/1/29 Scott Frankel : > > Hi all, > What's the proper way to store directory path strings in a table, especially > ones with backslashes like windows? > I'm currently using a prepared statement with bind value. Do I need to > pre-parse all user entries to identify any backslash characters before > passing the string to my insert statement? > Searches through the documentation turned up references > to escape_string_warning (boolean) and standard_conforming_strings > (boolean). I'm not sure I'll have access to server side config. > Thanks in advance! > Scott > > eg: > CREATE TABLE foo ( > foo_id SERIAL PRIMARY KEY, > name VARCHAR(32) UNIQUE NOT NULL, > dirpath text DEFAULT NULL); > > INSERT INTO foo (name, dirpath) VALUES ('bar', 'c:\windows\path\to\bar'); > --> WARNING: nonstandard use of \\ in a string literal Explicitly set standard_conforming_strings to ON in postgresql.conf, *but* take care that it doesn't break your application. With that setting, this works as intended: INSERT INTO foo (name, dirpath) VALUES ('bar', 'c:\windows\path\to\bar'); > > > > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
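For the foo table from the question, a short sketch of the two options: the first form works whatever the server is set to, while the second relies on standard_conforming_strings being on:

-- Escape-string syntax works regardless of server configuration;
-- each backslash has to be doubled.
INSERT INTO foo (name, dirpath) VALUES ('bar', E'c:\\windows\\path\\to\\bar');

-- With standard_conforming_strings = on, backslashes in ordinary
-- string literals are taken literally.
INSERT INTO foo (name, dirpath) VALUES ('baz', 'c:\windows\path\to\baz');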
Re: [GENERAL] Multiple buffer cache?
2010/2/6 Bret S. Lambert : > On Fri, Feb 05, 2010 at 11:41:13PM +0200, Alexei Vladishev wrote: >> Greetings, >> >> Is there a way of configuring PostgreSQL so that one specific table would >> use, say, 4GB of buffer cache while other tables would use the rest? >> >> I would like to keep the table and its indexes always in "hot" >> state, so that >> other queries won't pollute this part of the buffer cache. It would ensure >> reliable performance and much less disk IOPS working with the table. > > Fiddling with the buffer cache like that would require some sort of > OS support, if I'm not mistaken in what you're asking for. > > And then, even if the support is there, you'd need to outline exactly > how you're planning on pushing this button. > > Specifically, what's your usage pattern that would make this a > win for you? > > If the table and its indexes can already fit into the buffer cache, > and it's as commonly accessed as you think it is, the OS should > probably have it cached anyway. That's all true. I am working on pgfincore, which allows you, in some way, to prepare the buffer cache. You need pg > 8.4 and Linux (it probably works with BSD too). I don't consider it ready for production, but it is fine for debugging things. If you really care about buffer cache preloading, tell me and I'll stabilize the code in a shorter time ;) http://villemain.org/projects/pgfincore > >> >> Is it possible? >> >> Thanks for any hints! >> >> Alexei >> >> -- >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) >> To make changes to your subscription: >> http://www.postgresql.org/mailpref/pgsql-general > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Multiple buffer cache?
2010/2/6 Bret S. Lambert : > On Sat, Feb 06, 2010 at 02:44:32PM +0100, Cédric Villemain wrote: >> 2010/2/6 Bret S. Lambert : >> > On Fri, Feb 05, 2010 at 11:41:13PM +0200, Alexei Vladishev wrote: >> >> Greetings, >> >> >> >> Is there a way of configuring PostgreSQL so that one specific table would >> >> use, say, 4GB of buffer cache while other tables would use the rest? >> >> >> >> I would like to keep the table and its indexes always in "hot" >> >> state, so that >> >> other queries won't pollute this part of the buffer cache. It would ensure >> >> reliable performance and much less disk IOPS working with the table. >> > >> > Fiddling with the buffer cache like that would require some sort of >> > OS support, if I'm not mistaken in what you're asking for. >> > >> > And then, even if the support is there, you'd need to outline exactly >> > how you're planning on pushing this button. >> > >> > Specifically, what's your usage pattern that would make this a >> > win for you? >> > >> > If the table and its indexes can already fit into the buffer cache, >> > and it's as commonly accessed as you think it is, the OS should >> > probably have it cached anyway. >> >> that's all true. >> >> I am working on pgfincore which allows in some way to prepare buffer cache. >> You need pg > 8.4 and linux (probably working with bsd too) > > Why do something with a non-portable interface? Most OSes support > coherently mmap(2)'ing disk blocks into memory; in fact, I'm somewhat > taken aback that the postgres buffer cache isn't implemented in that > manner, but I'm willing to give the devs credit for having not done > so for good reasons. Currently, pgfincore is implemented via mmap'ing and mincore. It is the mincore implementation that might differ between platforms. In the next release there will probably be a system call 'fincore()', which is likely to be integrated into the Linux kernel. Doing all of this in a portable way is not fully possible (think Windows), but it may be possible to use #ifdef to pick the right system call for each OS (where one is available/implemented). > >> >> I don't consider it ready for production, but fine for debugging >> things, if you really care about buffer cache preload, tell me, I'll >> stabilize the code in a shorter time ;) >> http://villemain.org/projects/pgfincore >> >> > >> >> >> >> Is it possible? >> >> >> >> Thanks for any hints! >> >> >> >> Alexei >> >> >> >> -- >> >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) >> >> To make changes to your subscription: >> >> http://www.postgresql.org/mailpref/pgsql-general >> > >> > -- >> > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) >> > To make changes to your subscription: >> > http://www.postgresql.org/mailpref/pgsql-general >> > >> >> >> >> -- >> Cédric Villemain > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Multiple buffer cache?
2010/2/6 BillR : > I don't know how to do this with PostgreSQL, but I am pretty sure what > Alexei is looking for is what Oracle and SQL Server people refer to as > 'pinning' a table or other DB object (into memory). I would be interested to > know if PostgreSQL does this too. I think it is a very useful feature. > > How to pin a table in cache with Oracle (for an example): > http://www.jlcomp.demon.co.uk/faq/pin_table.html > > Couple more examples. > > http://blogs.oracle.com/stevenChan/2007/05/pinning_objects_to_improve_app.ht > ml > > http://www.mssqltips.com/tip.asp?tip=1317 > > In some large enterprise systems I have worked on (e.g. tier one telecom > companies), besides the standard Oracle installation the billing systems > used one database product where everything was in memory. This was used > *mostly* for static lookup data to help speed up the performance of the > system. When you have say, 300 million customers, every little bit helps. :) > > Hopefully someone knows how with Postgres. In some way pgfincore lets you do that with the function pgfadv_willneed(table/index). It will try to load the blocks of the relation without killing your IO. But I am not sure it is the correct answer to the problem here. > > Cheers > > -Original Message- > From: pgsql-general-ow...@postgresql.org > [mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Bret S. Lambert > Sent: February-06-10 4:50 AM > To: Alexei Vladishev > Cc: pgsql-general@postgresql.org > Subject: Re: [GENERAL] Multiple buffer cache? > > On Fri, Feb 05, 2010 at 11:41:13PM +0200, Alexei Vladishev wrote: >> Greetings, >> >> Is there a way of configuring PostgreSQL so that one specific table would >> use, say, 4GB of buffer cache while other tables would use the rest? >> >> I would like to keep the table and its indexes always in "hot" >> state, so that >> other queries won't pollute this part of the buffer cache. It would ensure >> reliable performance and much less disk IOPS working with the table. > > Fiddling with the buffer cache like that would require some sort of > OS support, if I'm not mistaken in what you're asking for. > > And then, even if the support is there, you'd need to outline exactly > how you're planning on pushing this button. > > Specifically, what's your usage pattern that would make this a > win for you? > > If the table and its indexes can already fit into the buffer cache, > and it's as commonly accessed as you think it is, the OS should > probably have it cached anyway. > >> >> Is it possible? >> >> Thanks for any hints!
>> >> Alexei >> >> -- >> Sent via pgsql-general mailing list (pgsql-general@postgresql.org) >> To make changes to your subscription: >> http://www.postgresql.org/mailpref/pgsql-general > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] GIN : Working with term positions
2011/10/26 Yoann Moreau : > On 21/10/11 12:23, Yoann Moreau wrote: >> >> Hello, >> I'm using a GIN index for a text column on a big table. I use it to rank >> the rows, but I also need to get the term positions for each document of a >> subset of documents. I assume these positions are stored in the index, >> because doc says positions can be used for cover density ranking and because >> to_tsvector function gives them : >> >> select * from to_tsvector('I get lexemes and I get term positions.'); >> to_tsvector >> >> 'get':2,6 'lexem':3 'posit':8 'term':7 >> >> I can get the term positions with to_tsvector but only by parsing the >> result string, is there any more handy way ? Something like : >> select * from term_and_positions('I get lexemes and I get term >> positions.'); >> term | positions >> -+--- >> 'get' | {2,6} >> 'lexem' | {3} >> >> >> Then, from the term positions, I need to get the character offset of these >> term positions. I assume it is NOT stored in the GIN index. By character >> offset I mean the character count from string begining to the term. For the >> previous example it would be : 'get' --> {2,20}. >> >> I thought about using ts_headline to return the whole text with term >> tagged and then parse it to compute the character offsets from the tags. But >> this function is very slow, seems like it does not use the GIN index at all. >> And I suppose it can't because there is no way to know from a term position >> where its substring is in the text. >> >> Now I think the only solution is to make my own C function parsing the >> text like to_tsvector does and counting terms AND characters read from the >> begining of the text to match them. I got a look on the code, and it does >> not seems easy to do because characters offset or string length are never >> used by the parsetext function (ts_parse.c). If you have any other >> suggestion, would love to hear that ! >> >> Regards, Yoann Moreau > > Hello again, I'm sorry my need is actually a bit different than what I have > asked. I need to get the term positions using the GIN index, when I query my > text column, i.e. for a given term. > > For example for 2 rows of a 'docs' table with a text column 'text' : > 'I get lexemes and I get term positions.' > 'Did you get the positions ?' > > I'd need a function like this : > select term_positions(text, 'get') from docs; > id_doc | positions > -+--- > 1 | {2,6} > 2 | {3} > > I know it can't be as simple as this, because the query would first need to > be filtered with a WHERE using a tsquery and this can't be done in the > function called like in my example. I suppose such a feature does not exist, > but is there any way to get the positions of the matching terms when > querying a GIN index ? > > The only possible way I imagine right now is to firstly filter the rows with > "to_tsvector(text) @@ to_tsquery('get')" and then call "to_tsvector(text)" > for the n highest ranked rows, parsing the string returned by the function > to find the term and its positions. But would be way more efficient to get > them directly at the first call when matching the terms with @@ operator. I > know it would be impossible if the query contain more than 1 term because it > can't return 2 arrays of position in one row (i.e. for one document), but > for now I'm trying to do this for 1 query term. > Any help or advice would be welcome ! > > By the way, I have done the C function computing the character offset of a > given term position for a text column. 
It's not done in a good way, but it's > more a topic for hackers postgresql list. > Don't forget, when you succeed, that word positions are affected by the words removed as stop-words. -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
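To see the effect, reuse the sentence from the first message: 'I' and 'and' are stop words and are dropped from the tsvector, yet they still consume positions (here 1, 4 and 5), so the remaining lexemes keep their original word numbers:

SELECT to_tsvector('english', 'I get lexemes and I get term positions.');
--               to_tsvector
-- -----------------------------------------
--  'get':2,6 'lexem':3 'posit':8 'term':7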
Re: [GENERAL] How to install pgfincore with PG 9.1
2011/11/20 Raghavendra : > Respected, > I have tried installing pgfincore with PG 9.1(one-click installer). But > facing below error. > [root@localhost pgfincore-v1.1]# export PATH=/opt/PostgreSQL/9.1/bin:$PATH > [root@localhost pgfincore-v1.1]# echo $PATH > /opt/PostgreSQL/9.1/bin:/usr/lib/qt-3.3/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin > [root@localhost pgfincore-v1.1]# which pg_config > /opt/PostgreSQL/9.1/bin/pg_config > [root@localhost pgfincore-v1.1]# make > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > cp pgfincore.sql pgfincore--.sql > gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith > -Wdeclaration-after-statement -Wendif-labels -Wformat-security > -fno-strict-aliasing -fwrapv -fpic -I. -I. > -I/opt/PostgreSQL/9.1/include/postgresql/server > -I/opt/PostgreSQL/9.1/include/postgresql/internal -D_GNU_SOURCE > -I/usr/local/include/libxml2 -I/usr/local/include -c -o pgfincore.o > pgfincore.c > gcc -O2 -Wall -Wmissing-prototypes -Wpointer-arith > -Wdeclaration-after-statement -Wendif-labels -Wformat-security > -fno-strict-aliasing -fwrapv -fpic -L/opt/PostgreSQL/9.1/lib > -L/usr/local/lib -L/usr/local/lib > -Wl,-rpath,'/opt/PostgreSQL/9.1/lib',--enable-new-dtags -shared -o > pgfincore.so pgfincore.o > rm pgfincore.o > [root@localhost pgfincore-v1.1]# make install > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > /bin/mkdir -p '/opt/PostgreSQL/9.1/share/postgresql/extension' > /bin/mkdir -p '/opt/PostgreSQL/9.1/lib/postgresql' > /bin/mkdir -p '/opt/PostgreSQL/9.1/doc/postgresql/extension' > grep: /pgfincore.control: No such file or directory > grep: /pgfincore.control: No such file or directory > /bin/sh > /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh > -c -m 644 ./pgfincore.control > '/opt/PostgreSQL/9.1/share/postgresql/extension/' > /bin/sh > /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh > -c -m 644 ./pgfincore--unpackaged--.sql ./pgfincore--.sql > '/opt/PostgreSQL/9.1/share/postgresql/extension/' > /opt/PostgreSQL/9.1/lib/postgresql/pgxs/src/makefiles/../../config/install-sh: > ./pgfincore--unpackaged--.sql does not exist. > make: *** [install] Error 1 > Please assist me, thanks in advance. buggy Makefile. Thanks for the report. I'll fix that soon and keep you informed. -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] General performance/load issue
Le 24 novembre 2011 17:02, Tomas Vondra a écrit : > On 24 Listopad 2011, 16:39, Robert Treat wrote: >> On Thu, Nov 24, 2011 at 9:09 AM, Tomas Vondra wrote: >>> On 24 Listopad 2011, 14:51, Gaëtan Allart wrote: >>>> Postgresql.conf : >>>> >>>> max_connections = 50 >>>> shared_buffers = 12G >>>> temp_buffers = 40MB >>>> work_mem = 128MB >>>> maintenance_work_mem = 256MB >>>> max_files_per_process = 8192 >>>> checkpoint_segments = 256 >>>> checkpoint_timeout = 30min >>>> checkpoint_completion_target = 0.9 >>> >>> Fine. Let's see the options that look suspicious. >>> >> >> I think you missed some suspicious settings... I'd recommend setting >> shared buffers to 8gb, and I'd likely reduce checkpoint segements to >> 30 and set the checkpoint timeout back to 5 minutes. Everything about >> the way this server is configured (including those vm settings) is >> pushing it towards delaying the WAL/Buffer/Checkpoint as long as >> possible, which matches with the idea of good performance initial >> followed by a period of poor performance and heavy i/o. > > Yes, checkpoints were my first thought too. OTOH the OP reported that most > of the I/O is caused by WAL writer - that's not exactly the part that does > the work during checkpoint. Plus the WAL may not be postponed, as it's > usually O_DIRECT and fsynced, right. > > You're right that the writes are postponed, but I generally see that as a > good thing when combined with spread checkpoints. And even with those vm > settings (about 3.2GB for background writes), I wouldn't expect this > behaviour (because the page cache usually expires after 30 seconds). Say > you need 100% of the shared buffers is dirty and need to be written. You > have 27 minutes (30*0.9) to do that - that means about 8MB/s. With 30 > seconds expire there might be about 240MB before the pdflush starts to > write the data to the SSD. And that can surely handle more than 50MB/s. So > why the long delay? The question is what else is going on there. > > But all this is just guessing - I want to see the log_checkpoint message, > iostat results etc. > >> On a side note, I'd guess your work_mem is probably too high. 50 >> (connections) x 128 (mb work mem) x 2 (sorts per query) = 12GB RAM, >> which is 25% of total ram on the box. That doesn't necessarily mean >> game over, but it seem like it wouldn't be that hard to get thrashing >> being set up that way. YMMV. > > This is one of the reasons why effective_cache_size should be lower than > 32GB, probably ... According to the 'free' output, 38GB is what is there right now. effective_cache_size is just informative, so you can set it to 1TB without memory issues. And it is OS cache + PG cache. There is not enough information yet to be sure about what's happening. The log_checkpoints output will help for sure. -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
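For reference, a sketch of how to gather what is being asked for here: log_checkpoints is a postgresql.conf setting (a reload is enough), and the background writer statistics are one query away; the column list below is the 9.0 one:

-- in postgresql.conf, then SELECT pg_reload_conf();
--   log_checkpoints = on

-- cumulative checkpoint / background writer counters:
SELECT checkpoints_timed, checkpoints_req, buffers_checkpoint,
       buffers_clean, maxwritten_clean, buffers_backend, buffers_alloc
FROM pg_stat_bgwriter;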
Re: [GENERAL] General performance/load issue
And that can surely handle more than 50MB/s. So >>why the long delay? The question is what else is going on there. >> >>But all this is just guessing - I want to see the log_checkpoint message, >>iostat results etc. >> >>> On a side note, I'd guess your work_mem is probably too high. 50 >>> (connections) x 128 (mb work mem) x 2 (sorts per query) = 12GB RAM, >>> which is 25% of total ram on the box. That doesn't necessarily mean >>> game over, but it seem like it wouldn't be that hard to get thrashing >>> being set up that way. YMMV. >> >>This is one of the reasons why effective_cache_size should be lower than >>32GB, probably ... >> >>Tomas > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general > -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] General performance/load issue
Le 25 novembre 2011 11:25, Tomas Vondra a écrit : > On 24 Listopad 2011, 23:19, Cédric Villemain wrote: >> Le 24 novembre 2011 20:38, Gaëtan Allart a écrit : >>> Finally, it "crashed" again… :-( >>> >>> Here's the output of iotop while databased was inaccessible : >>> >>> 32361 be/4 postgres 0.00 B/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(34847) idle >>> 32244 be/4 postgres 163.48 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(34660) SELECT >>> 32045 be/4 postgres 7.78 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(33765) SELECT >>> 32158 be/4 postgres 7.78 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(34112) SELECT >>> 32242 be/4 postgres 7.78 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(34632) SELECT >>> 32372 be/4 postgres 0.00 B/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(38858) idle in transaction >>> 32231 be/4 postgres 15.57 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(38602) SELECT >>> 28811 be/4 postgres 3.89 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(40594) SELECT >>> 32190 be/4 postgres 3.89 K/s 0.00 B/s 0.00 % 99.99 % postgres: >>> mydb >>> mydb host(38497) SELECT > > This iotop output is rather incomplete - it does not show which of the > processes actually did the I/O. The fact that a process does 99% of the > I/O and yet performs no reads or writes is a bit strange. > > Use "iotop -o" and post the result. > >>> Does this help? >> >> yes. >> It seem you have an issue with your checkpoint syncing time, it is >> fixed in 9.1 and backported in 9.0 here : >> http://projects.2ndquadrant.com/backports > > People generally don't want to apply backports on their own, especially > when it's a production server and when it's unclear it actually fixes the > issue they have. I'm not sure about that. I agree that most people don't want to do that themselves, but if it happens to be the solution they can proceed or ask someone to do it. People want to see their production system back to a normal situation. Here the limited information is not enough to be sure, but the checkpoint sync times are clear: they are not right. It is very probable that compacting the fsyncs will help, but it is not yet certain that it is required. > >> It is possible you have other problems that explains the issue you >> have. An immediate solution before trying a patch is to reduce your >> shared_buffer setting to something very low, like 1GB. > > Well, using low shared_buffers was used especially before 8.3, when the > spread checkpoints were not available. It prevents the I/O overload when > the database suddenly decides to write all of the dirty buffers. But he's > on 9.0 (so he already has spread checkpoints). It is a different animal here. > > Plus the number of buffers he's writing is negligible - usually about 700 > buffers (6MB), 3192 buffers (25MB) at most. That surely should not be a > problem for the SSD he's using. See the blog entry from Greg Smith: http://blog.2ndquadrant.com/en/2011/06/backporting-and-checkpoint-tro.html And the slides of his talk at PGCon 2011: http://www.2ndquadrant.com/static/2quad/media/pdfs/talks/WriteStuff-PGCon2011.pdf I was just pointing out that there are known issues in this area, with known solutions. Getting more information on vacuum and bgwriter activity should help too. Gaëtan, do you still have a critical performance issue, or is it back to a normal/loaded (but not overloaded) situation?
-- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] General performance/load issue
Le 25 novembre 2011 23:47, Gaëtan Allart a écrit : > Hello Tomas and Cédric, > > Right now, the server is not all right. Load is above 30 and queries are > slow like hell. > > > Here's the complete iotop. Note the 71 MB/s writes (apparently on SELECT > queries). > > Total DISK READ: 633.35 K/s | Total DISK WRITE: 71.06 M/s > TID PRIO USER DISK READ DISK WRITE SWAPIN IO> COMMAND > > > 27352 be/4 postgres 15.64 K/s 86.01 K/s 0.00 % 99.99 % postgres: > database database 176.31.228.6(38816) SELECT > 20226 be/4 postgres 7.82 K/s 0.00 B/s 0.00 % 99.99 % postgres: > database database 176.31.228.6(34166) SELECT > 26950 be/4 postgres 23.46 K/s 0.00 B/s 0.00 % 82.14 % postgres: > database database 46.105.104.205(40820) SELECT > 23160 be/4 postgres 3.91 K/s 0.00 B/s 0.00 % 81.14 % postgres: > database database 46.105.104.205(58091) SELECT > 29184 be/4 postgres 7.82 K/s 0.00 B/s 0.00 % 79.17 % postgres: > database database 46.105.104.205(51047) SELECT > 27271 be/4 postgres 23.46 K/s 234.58 K/s 0.00 % 77.15 % postgres: > database database 46.105.104.205(42315) SELECT > 28224 be/4 postgres 0.00 B/s 0.00 B/s 0.00 % 74.09 % postgres: > database database 46.105.104.205(49871) SELECT > 27450 be/4 postgres 23.46 K/s 0.00 B/s 0.00 % 70.12 % postgres: > database database 46.105.104.205(49096) SELECT > 29206 be/4 postgres 121.20 K/s 0.00 B/s 0.00 % 62.99 % postgres: > database database 176.31.228.6(39787) SELECT > 8202 be/4 postgres 129.02 K/s 0.00 B/s 0.00 % 60.59 % postgres: > database database 91.121.89.14(57291) SELECT > 29209 be/4 postgres 3.91 K/s 0.00 B/s 0.00 % 55.75 % postgres: > database database 176.31.228.6(39807) idle > 27046 be/4 postgres 3.91 K/s 0.00 B/s 0.00 % 52.35 % postgres: > database database 46.105.104.205(41285) SELECT > 29217 be/4 postgres 54.73 K/s 0.00 B/s 0.00 % 50.18 % postgres: > database database 46.105.104.205(51178) SELECT > 3249 be/4 root 3.91 K/s 320.59 K/s 0.00 % 40.31 % [kjournald] > 28918 be/4 postgres 7.82 K/s 0.00 B/s 0.00 % 34.83 % postgres: > database database 46.105.104.205(50164) SELECT > 26856 be/4 postgres 7.82 K/s 234.58 K/s 0.00 % 34.30 % postgres: > database database 46.105.104.205(40589) SELECT > 29205 be/4 postgres 7.82 K/s 0.00 B/s 0.00 % 34.17 % postgres: > database database 176.31.228.6(39784) SELECT > 4175 be/4 root 0.00 B/s 0.00 B/s 0.00 % 1.84 % [flush-8:0] > 28905 be/4 postgres 0.00 B/s 0.00 B/s 0.00 % 0.02 % postgres: > database database 46.105.104.205(50125) SELECT > 28919 be/4 postgres 0.00 B/s 0.00 B/s 0.00 % 0.01 % postgres: > database database 46.105.104.205(50167) SELECT > 19807 be/4 postgres 3.91 K/s 0.00 B/s 0.00 % 0.00 % postgres: > database database 176.31.228.6(33598) SELECT > 28703 be/4 postgres 0.00 B/s 86.01 K/s 0.00 % 0.00 % postgres: > database database 176.31.228.6(39260) SELECT > 29183 be/4 postgres 7.82 K/s 0.00 B/s 0.00 % 0.00 % postgres: > database database 46.105.104.205(51044) SELECT > > > Here is dirty_expire_centisecs : > > cat /proc/sys/vm/dirty_expire_centisecs > 3000 > > > Bgwriter configuration is default : > > #bgwriter_delay = 200ms # 10-1ms between rounds > #bgwriter_lru_maxpages = 100 # 0-1000 max buffers written/round > #bgwriter_lru_multiplier = 2.0 # 0-10.0 multipler on buffers scanned/round > > > > > Is there anything I can provide to help you ? the checkpoints logs and the output of pg_stat_bgwriter (as asked by Tomas). It is probable that shared_buffers are too small for your workload (expected), do you still have issue with checkpoint sync duration ? 
You should increase shared_buffers (you can iterate) *and* try to avoid the long checkpoint syncs (keep shared_buffers <= 8GB, as already mentioned upthread). The output you provided does not show everything, and the SELECTs do not look like they are writing that much. Which processes are writing? > > > > Thanks again, > > Gaëtan > > > > Le 25/11/11 14:12, « Tomas Vondra » a écrit : >>On 25 Listopad 2011, 12:43, Cédric Villemain wrote: >>> Le 25 novembre 2011 11:25, Tomas Vondra a écrit : >>>> On 24 Listopad 2011, 23:19, Cédric Villemain wrote: >>>>> >>>>> It seem you have an issue with your checkpoint syncing time, it is >>>>> fixed in 9.1 and backported in 9.0 here : >>>>> http://projects.2ndquadrant.com/backports >>>> >>>> People generally don't want to apply backports on their own, especially >>>> when it's a production server and when it's unclear it actually fixes >>>> the >>
Re: [GENERAL] What's the best way in Postgres9 to store a bunch of arbitrary boolean values for a row?
Le 5 décembre 2011 10:04, Albe Laurenz a écrit : > Mike Christensen wrote: >> I have a database full of recipes, one recipe per row. I need to >> store a bunch of arbitrary "flags" for each recipe to mark various >> properties such as Gluton-Free, No meat, No Red Meat, No Pork, No >> Animals, Quick, Easy, Low Fat, Low Sugar, Low Calorie, Low Sodium and >> Low Carb. Users need to be able to search for recipes that contain >> one or more of those flags by checking checkboxes in the UI. >> >> I'm searching for the best way to store these properties in the >> Recipes table. My ideas so far: >> >> 1. Have a separate column for each property and create an index on >> each of those columns. I may have upwards of about 20 of these >> properties, so I'm wondering if there's any drawbacks with creating a >> whole bunch of BOOL columns on a single table. >> 2. Use a bitmask for all properties and store the whole thing in one >> numeric column that contains the appropriate number of bits. Create a >> separate index on each bit so searches will be fast. >> 3. Create an ENUM with a value for each tag, then create a column that >> has an ARRAY of that ENUM type. I believe an ANY clause on an array >> column can use an INDEX, but have never done this. >> 4. Create a separate table that has a one-to-many mapping of recipes >> to tags. Each tag would be a row in this table. The table would >> contain a link to the recipe, and an ENUM value for which tag is "on" >> for that recipe. When querying, I'd have to do a nested SELECT to >> filter out recipes that didn't contain at least one of these tags. I >> think this is the more "normal" way of doing this, but it does make >> certain queries more complicated - If I want to query for 100 recipes >> and also display all their tags, I'd have to use an INNER JOIN and >> consolidate the rows, or use a nested SELECT and aggregate on the fly. >> >> Write performance is not too big of an issue here since recipes are >> added by a backend process, and search speed is critical (there might >> be a few hundred thousand recipes eventually). I doubt I will add new >> tags all that often, but I want it to be at least possible to do >> without major headaches. > > I would use a boolean column per property and a partial index on the > ones > where the property is selective, i.e. only a small percentage of all > recipes > match the property. I would like to recommend to have a look at Bloom Filtering: http://www.sai.msu.su/~megera/wiki/bloom -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
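A minimal sketch of that suggestion; the table and column names here are invented for the example:

-- one boolean column per property...
ALTER TABLE recipes ADD COLUMN gluten_free boolean NOT NULL DEFAULT false;
ALTER TABLE recipes ADD COLUMN low_fat     boolean NOT NULL DEFAULT false;

-- ...and a partial index on the selective properties only
CREATE INDEX recipes_gluten_free_idx ON recipes (recipe_id) WHERE gluten_free;

-- a search combines the flags; the partial index narrows the scan and the
-- remaining flags are checked on the candidate rows
SELECT recipe_id FROM recipes WHERE gluten_free AND low_fat;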
Re: [GENERAL] pg_standby: How to check in which state the server is currently?
Le 5 décembre 2011 12:16, Frank Lanitz a écrit : > Hi list, > > We had in past from time to time the issue, that the standby server is > stopping recovering, creating a new timeline and become up and running. > In parallel to check for the reasons of this behavior we are looking for > a clean way to check whether warm standby database is still in > recovering mode or has become ready. I did some search, but didn't found > any ready-2-use script for doing this on a stand alone basis (should > only be temp. solution that will be replaced by real monitoring later) See check_postgres; I have added that recently for a similar purpose. You give it one of '--assume-standby-mode' or '--assume-prod' when you check the last checkpoint. If the server is not in the expected mode, it emits CRITICAL (for Nagios, but check_postgres is a standalone script and can be used with other supervision/monitoring software). It is not yet released, see: https://github.com/bucardo/check_postgres/commit/0ff408711dab18b05de26656a945fa37e363f6aa (depends on other patches but you get the idea) http://bucardo.org/wiki/Check_postgres > I looked for some solution checking ps for pg_standby and trying to > connect to database. But I'm not sure how 'secure' in terms of falls > positive and missed events this is. > > Can anybody put me onto the right way here? > > Cheers, > Frank > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] is there example of update skript?
Le 7 décembre 2011 14:26, Pavel Stehule a écrit : > Hello > > I am playing with extensions implemented in plpgsql. I am checking update. > > I have a simple extension > > file gdlib--1.1.sql > CREATE OR REPLACE FUNCTION gdlib_version() > RETURNS numeric AS $$ > SELECT 1.1; > $$ LANGUAGE sql; > > CREATE OR REPLACE FUNCTION hello(text) > RETURNS text AS $$ > SELECT 'Hello, ' || $1 || ' from gdlib ' || gdlib_version(); > $$ LANGUAGE sql; > > file gdlib--1.0.sql > CREATE OR REPLACE FUNCTION gdlib_version() > RETURNS numeric AS $$ > SELECT 1.0; > $$ LANGUAGE sql; > > CREATE OR REPLACE FUNCTION hello(text) > RETURNS text AS $$ > SELECT 'Hello, ' || $1 || ' from gdlib ' || gdlib_version(); > $$ LANGUAGE sql; > > I created a empty update files (it's probably wrong) > > postgres=# SELECT * FROM pg_extension_update_paths('gdlib'); > source | target | path > --------+--------+---------- > 1.0 | 1.1 | 1.0--1.1 > 1.1 | 1.0 | 1.1--1.0 > (2 rows) > > Issue > > After ALTER EXTENSION gdlib UPDATE TO '1.1' > > I have 1.0 function still? It is still in the path, so you can also run 1.1->1.0 if you provide a script for that. > > I expected a refresh 1.1 sql script, but it was newer loaded > > What are correct steps? Upgrade the schema/functions from 1.0 to 1.1 in your "extension--oldversion--newversion.sql" file. A blank file can be provided just to bump the version number of the extension. The .so files are *not* versioned, so once you have installed the new 1.1, the old SQL will use it. If you change the SQL API of your C extension, you need to run ALTER EXTENSION ... UPDATE; otherwise it is not required (except to increase the version number). > > Regards > > Pavel Stehule > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
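For the gdlib example above, a sketch of what the update script could contain; the file name has to match the 1.0--1.1 path reported by pg_extension_update_paths():

-- file gdlib--1.0--1.1.sql, run by: ALTER EXTENSION gdlib UPDATE TO '1.1';
-- it only needs to contain what changes between the two versions.
CREATE OR REPLACE FUNCTION gdlib_version()
RETURNS numeric AS $$
    SELECT 1.1;
$$ LANGUAGE sql;
-- hello(text) needs no change here: it calls gdlib_version() at run time.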
Re: [GENERAL] Howto to clear PostgreSQL cache and how to preload all tuples of a table into the cache?
Le 12 décembre 2011 01:42, Stefan Keller a écrit : > I'd like to clear the PostgreSQL cache (e.g. for benchmarking purposes). > And I'd like to preload all tuples of a table (say mytable_one) into the > cache. > > AFAIK there is no way to force all caches to be cleared in PostgreSQL > with an SQL command. > The only way to achieve this, seems to restart PG (server), which is > neither an option for benchmarking purposes nor for production. > > But: > 1. Isn't it possible to achieve a kind-of cache clearing (in a > reliable way) by simply doing a "select * from mytable_two" given > mytable_two is at least as large as mytable_one (which is the one we > want to benchmark)? In the PostgreSQL cache, no, because such a query will use a sequential scan and PostgreSQL will protect its cache with a ring buffer: the tuples read are stored in this small cache (much smaller than shared_buffers), which is recycled while reading the table. > > 2. I assume that "select * from mytable_one" loads as much of the > tuples as it can into the cache. Are there better ways for preloading > the contents of a table? No, see 1/. So far there have been some ideas on PostgreSQL cache clearing/management, but nothing was done because no one has evidence that it is useful (for performance). You can use pgfincore: http://pgfoundry.org/projects/pgfincore to monitor your OS cache, and if you have a system with POSIX_FADVISE support you can make snapshots, restore, preload, etc. For your benchmark, just build a good scenario; otherwise your benchmark does not measure anything except what you supposed would happen (without it actually happening). I don't see why someone would want to clear the PostgreSQL cache *in production*! The engine will use its internal mechanisms to decide what to keep and what to remove with (we expect) more intelligence than us. -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] PostgreSQL DBA in SPAAAAAAAACE
Le 12 décembre 2011 14:25, Mark Morgan Lloyd a écrit : > Rob Sargent wrote: >> >> On 12/06/2011 01:56 PM, Glyn Astill wrote: >>> >>> __ >>> >>>> From: Merlin Moncure >>>> To: Joe Miller Cc: pgsql-general@postgresql.org >>>> Sent: Tuesday, 6 December 2011, 17:30 >>>> Subject: Re: [GENERAL] PostgreSQL DBA in SPCE >>>> >>>> On Tue, Dec 6, 2011 at 10:56 AM, Joe Miller >>>> wrote: >>>>> >>>>> You may have seen this, but RedGate software is sponsoring a contest >>>>> to send a DBA on a suborbital space flight. >>>>> >>>>> And there is a PostgreSQL representativeme! >>>>> >>>>> https://www.dbainspace.com/finalists/joe-miller >>>>> >>>>> Voting is open for 7 days. Don't let one of those Oracle or SQL Server >>>>> punks win :p >>>> >>>> so jealous -- I didn't make the cut. Well, you'll have my vote. >>>> >>>> merlin >>>> >>> Me neither, voted. Good luck. >>> >> >> Tried to vote, but never got the mail to confirm my vote? > > > Message comes with header "Confirm your vote", so a firewall that is > over-zealous catching "confirm your account" type emails will can it. > > I suspect that we're supposed to do this every day- I'm sure that Certain > Corporates will have no compunction at using multiple votes. Correct «Thanks for your vote. You can vote for your favorite DBA every day to boost their chances of traveling beyond the blue and into space» > > -- > Mark Morgan Lloyd > markMLl .AT. telemetry.co .DOT. uk > > [Opinions above are the author's, not those of his employers or colleagues] > > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Server/Data Migration Advice
Le 12 décembre 2011 18:12, Carlos Mennens a écrit : > I've finally received my new virtual database server (Debian Linux) & > this weekend I'll be rolling my new PostgreSQL server online. My > question is I've never migrated production data from 8.4.8 to 9.1.1. I > would like to find out from the community what exactly is the > recommended process in moving my database server from 8.4.8 to 9.1.1. > I'm not simply upgrading the existing server as I will be installing > PostgreSQL 9.1 on the new hardware and not sure if it's as simple as > simply performing a pg_dumpall: > > pg_dumpall > mydata.sql It is that simple. You *must* use the pg_dump/pg_dumpall binaries from 9.1. See: http://www.postgresql.org/docs/9.1/static/upgrading.html > > I don't know if I need to use some kind of conversion tool to convert > the data from 8.4.8 so that it's compatible with 9.1.1 so if you guys > could please shine in on any recommendations, I would greatly > appreciate it! > > -- > Sent via pgsql-general mailing list (pgsql-general@postgresql.org) > To make changes to your subscription: > http://www.postgresql.org/mailpref/pgsql-general -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Searchable chess positions in a Postgress DB
Le mercredi 11 avril 2012 09:15:59, Sidney Cadot a écrit : > Dear all, > > As a hobby project, I am toying around with a database containing > about 5 million chess games. On average, these games have about 80 > positions (~ 40 moves by both black and white), which means there are > about 400 million chess positions in there. > > I have written code to extract these positions, and now I want to put > them into a Postgres database. Specifically, I want to do this in a > way that allows *fast* lookups of positions, e.g. "give me all > positions that have a White King on c4 and either a Black Bishop or > White Knight on f7". > > Currently, my "Positions" table looks like this: > > Column | Type | Modifiers > ---+-+--- > gameindex | integer | not null > plyindex | integer | not null > pseudofenboard| text| not null > fenside | text| not null > fencastling | text| not null > fenenpassant | text| not null > possiblemovecount | integer | not null > isincheck | boolean | not null > Indexes: > "positions_pkey" PRIMARY KEY, btree (gameindex, plyindex) > Foreign-key constraints: > "positions_gameindex_fkey" FOREIGN KEY (gameindex) REFERENCES > games(gameindex) > > The "PseudoFenBoard" field currently holds a string describing the > position. For example, the starting position of chess looks like this: > > "rnbqkbnr///////RNBQKBNR" > > This design allows me to formulate the kind of positional queries that > I want (by using regular expression matching), but executing them will > involve a slow, linear traversal of the 400M table rows, which is not > desirable. > > I am toying around with the ugly idea to make a "Positions" table that > has a single field for each of the squares, e.g. > > CREATE TABLE Position2 ( > GameIndex INTEGER NOT NULL, > PlyIndex INTEGER NOT NULL, > a1"char" NOT NULL, > a2"char" NOT NULL, > -- (60 fields defs omitted) > h7"char" NOT NULL, > h8"char" NOT NULL > ); > > This would allow the creation of indices on each of the 64 fields > separately, which should help to achieve near-instantaneous position > query performance, especially after gathering proper statistics for > all the field-specific indices. > > I realize that this design is quite ugly, so I would be interested to > hear if there are nicer alternatives that can perform equally well. > > Also, above I use the 1-byte "char" type. Is this the only type in > PostGres that is guaranteed to be just a single byte, or are there > better alternatives? A 13-state enum would be best (listing the 6 > white pieces, 6 black pieces, and 'empty' states for every square on > the board) but as I understand from the documentation, enums always up > take 4 bytes per entry. > > Any ideas for improvement would be greatly appreciated. I'll go test with BloomFiltering and multiple columns. http://www.sai.msu.su/~megera/wiki/bloom -- Cédric Villemain +33 (0)6 20 30 22 52 http://2ndQuadrant.fr/ PostgreSQL: Support 24x7 - Développement, Expertise et Formation -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
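For what it is worth, the kind of lookup the per-square design is meant to serve would look like the sketch below; the piece letters follow the FEN convention already used in PseudoFenBoard, and with one "char" column per square each predicate can use its own index:

-- "White King on c4 and either a Black Bishop or a White Knight on f7"
SELECT gameindex, plyindex
FROM   Position2
WHERE  c4 = 'K'
  AND  f7 IN ('b', 'N');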
[GENERAL] Re: [Pkg-postgresql-public] Postgres major version support policy on Debian
Martin Pitt a écrit : > Gerfried Fuchs [2008-10-06 17:04 +0200]: >> I'm sorry to have done the addition of pg 8.2 initially, and propably >> should also be sorry for adding pg 8.3 to backports.org, I thought it >> would be a service to the users, > > It is, and I think that -8.3 in backports makes perfect sense. > It is what Lenny will ship with, and thus will be maintained for the > next couple of years. > > (If it helps, I'll do -8.3 package maintenance for the next 5 years > for Ubuntu 8.04 LTS) > > Martin > Any plan for an 8.4 pre-beta package? (Devrim Gunduz provides useful RPM packages; I'd like to have the same in Debian.) Can it be in the experimental repository? -- Cédric Villemain Administrateur de Base de Données Cel: +33 (0)6 74 15 56 53 http://dalibo.com - http://dalibo.org -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Re: [Pkg-postgresql-public] Postgres major version support policy on Debian
Martin Pitt a écrit : > Cédric Villemain [2009-03-15 23:58 +0100]: >> Any plan for an 8.4 pre-beta package? (Devrim Gunduz provides useful RPM >> packages; >> I'd like to have the same in Debian.) >> >> Can it be in the experimental repository? > > So far I usually started packaging those with the first public beta > version, but if we are close to that, sure. Packaging this is probably > easy, it just needs some time to get all the bits in postgresql-common > right (like correctly rewriting obsolete/removed/retyped configuration > settings, and the like). > > I'll see to packaging a current snapshot soon. > > Martin > Excellent, Martin. If I can help, ping me. Here is the announcement from Devrim: http://archives.postgresql.org/pgsql-announce/2009-03/msg00012.php And more particularly about non-production use: http://yum.pgsqlrpms.org/news-8.4devel-ready-for-testing.php -- Cédric Villemain Administrateur de Base de Données Cel: +33 (0)6 74 15 56 53 http://dalibo.com - http://dalibo.org -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] debian package 8.4devel was:[Pkg-postgresql-public] Postgres major version support policy on Debian
Cédric Villemain a écrit : > Martin Pitt a écrit : >> Cédric Villemain [2009-03-15 23:58 +0100]: >>> Any plan for an 8.4 pre-beta package? (Devrim Gunduz provides useful RPM >>> packages; >>> I'd like to have the same in Debian.) >>> >>> Can it be in the experimental repository? >> So far I usually started packaging those with the first public beta >> version, but if we are close to that, sure. Packaging this is probably >> easy, it just needs some time to get all the bits in postgresql-common >> right (like correctly rewriting obsolete/removed/retyped configuration >> settings, and the like). > >> I'll see to packaging a current snapshot soon. > >> Martin > > > Excellent, Martin. If I can help, ping me. > > Here is the announcement from Devrim: > http://archives.postgresql.org/pgsql-announce/2009-03/msg00012.php > > And more particularly about non-production use: > http://yum.pgsqlrpms.org/news-8.4devel-ready-for-testing.php > > > I have started some work on that. The old "debian/" packaging is updated to 8.4 (including the patches). But I keep running into trouble with postgresql-doc. Upstream has updated to autoconf 2.61 (git 82cd478cf80f4a5a50d39cb9e90a48823679e6a1), which is perhaps the issue: I have replaced --with-docdir with the standard --docdir (a new configure option). And dh_install fails (because there are no files in doc-8.4/html/). I tried manually running 'make -C doc all' as suggested by the GNUmakefile, but the documentation is not built. Any tips are welcome. -- Cédric Villemain Administrateur de Base de Données Cel: +33 (0)6 74 15 56 53 http://dalibo.com - http://dalibo.org -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general