Re: [GENERAL] Service not starting on Ubuntu 15.04
On 26 October 2015 at 18:21, Lasse Westh-Nielsen wrote: > Hey, > > I posted to stackoverflow about my problem upgrading from Ubuntu 14.04 to > Ubuntu 15.04: > http://stackoverflow.com/questions/33306475/ubuntu-15-04-postgresql-doesnt-start > > Tl;dr: postgresql service does not start properly when installed as a > package using cloud-init. > > And I can't figure out if I am doing something wrong, if the AMI is no good, > if the package has problems, ... I reckon I cannot be the first person to > use Postgres on Ubuntu Vivid, but I have been hammering my head against the > wall with this for hours. > > Any help greatly appreciated! I'd be trying 'pg_ctlcluster --force 9.4 main stop' and 'pg_ctlcluster 9.4 main start' instead of service start/stop. If it works, you have narrowed down the problem to systemd and/or the startup scripts. If it doesn't work, the problem is elsewhere (eg. some required path like /var/run or /tmp or /var/log/postgresql doesn't exist or has peculiar permissions). If nothing else, you should get more informative output rather than systemd hiding it away somewhere. -- Stuart Bishop http://www.stuartbishop.net/
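[Editor's note: a minimal sketch of the suggested commands, assuming the stock 9.4/main cluster from the Ubuntu packages:]

sudo pg_ctlcluster --force 9.4 main stop
sudo pg_ctlcluster 9.4 main start
# if startup still fails, check the paths mentioned above exist with sane permissions
ls -ld /var/run/postgresql /tmp /var/log/postgresql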
Re: [GENERAL] Storing files: 2.3TBytes, 17M file count
On 29 November 2016 at 16:50, Thomas Güttler wrote: > > > Am 29.11.2016 um 01:52 schrieb Mike Sofen: > >> From: Thomas Güttler Sent: Monday, November 28, 2016 6:28 AM >> >> ...I have 2.3TBytes of files. File count is 17M >> >> Since we already store our structured data in postgres, I think about >> storing the files in PostgreSQL, too. >> >> Is it feasible to store file in PostgreSQL? >> >> > I guess I will use some key-to-blob store like s3. AFAIK there are open > source s3 implementations available. > > Thank you all for your feeback! > > Regards, Thomas > I have a similar setup. I have about 20TB of data in over 60 million files. It might be possible to store that in PG, but I think it would be a huge headache easily avoided. Files are GPG encrypted and backed up offsite to S3, with lifecycle rules to migrate that to Glacier storage. A tool like boto lets you sync things easily to S3, and maybe directly to glacier, and there are alternatives out there. If your rsync is taking too long, it will be worse syncing to s3 though. If that is your bottleneck, then you need to fix it. Probably by knowing which files have changed and only resyncing them, for example using timestamps from the database or storing 'incoming' files in a separate area from your 'archive'. Once you have this sorted you can do your backups every few minutes and reduce your potential data loss. -- Stuart Bishop http://www.stuartbishop.net/
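[Editor's note: a sketch of the incremental idea, assuming a hypothetical files table with path and modified columns plus the aws CLI; only files touched since the last run get pushed:]

# select recently changed files and copy only those to S3
psql -Atc "SELECT path FROM files WHERE modified > now() - interval '10 minutes'" mydb |
while read f; do
    aws s3 cp "$f" "s3://mybucket/archive/"
done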
Re: [GENERAL] Initdb --data-checksums by default
On 20 April 2016 at 14:43, Alex Ignatov wrote: > Hello everyone! > Today in Big Data epoch silent data corruption becoming more and more issue > to afraid of. With uncorrectable read error rate ~ 10^-15 on multiterabyte > disk bit rot is the real issue. > I think that today checksumming data must be mandatory set by default. > Only if someone doesn't care about his data he can manually turn this option > off. > > What do you think about defaulting --data-checksums in initdb? I think --data-checksums should default to on. Databases created 'thoughtlessly' should have safe defaults. Operators creating databases with care can elect to disable checksums if they are redundant in their environment, if they cannot afford the overhead, or if they consider their data low value enough to not want to pay the overheads. If the performance impact is deemed unacceptable, perhaps the ability to turn them off on an existing database would be easily doable (a one-way operation). -- Stuart Bishop http://www.stuartbishop.net/
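[Editor's note: for illustration, enabling checksums at cluster creation time; the data directory path here is an assumption:]

initdb -D /srv/pgdata --data-checksums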
[GENERAL] Regression - Query requires full scan, GIN doesn't support it
Hi. I recently converted a load of GIST indexes used by tsearch2 to GIN, as my app is read heavy. We now occasionally get the exception: 'Query requires full scan, GIN doesn't support it'. Thankfully it is happening very rarely, but as the queries are generated from user input we can't stop this entirely. Is this a permanent limitation of GIN, or is a fix possible? Is a fix being worked on? If a fix is forthcoming, will it be available in the 8.2 series or only 8.3+? (It certainly looks like a bug to me, as the relevant queries work if the index is dropped, and queries working or failing depending on the existence of an index seems rather wrong to me.) Only relevant discussion on this I can find is http://archives.postgresql.org/pgsql-hackers/2007-01/msg01581.php. There appear to be no replies visible though :-( -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
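[Editor's note: for context, a hedged guess at the query shape involved. A tsquery consisting only of negated terms cannot be answered from the GIN index, and GIN of this era raised this error rather than falling back to a scan. Table and column names here are invented:]

SELECT * FROM docs WHERE fti @@ to_tsquery('!apple');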
Re: [GENERAL] plpython array support
Sean Davis wrote: > On Nov 19, 2007 9:08 PM, Jorge Godoy <[EMAIL PROTECTED]> wrote: >> On Monday 19 November 2007 19:29:51 Sean Davis wrote: >>> Just a simple question--does plpythonu (8.3beta) have support for >>> arrays? I don't see a specific mention of it in the docs, so I >>> suppose not. >> Arrays work for a long time now. I've been using them since 8.1, for sure, >> but I think that on 7.4 I already had code with arrays inside the database... > > Thanks, Jorge. I know that postgresql supports arrays, but I still > don't see how plpython supports arrays. I noticed this post: > > http://archives.postgresql.org/pgsql-general/2007-01/msg01417.php > > from January, 2007 that suggests how to use the string representation > of an array to convert to/from python arrays. This will work, but I > was wondering if there is now a true conversion from postgresql arrays > to python and vice-versa. With 8.2, no, there is not. Beyond the simple number and string types, everything gets cast to a string (arrays, tsvectors, etc.). I have no idea if anyone is working on this for 8.3 or later. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
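[Editor's note: a sketch of working with the string form in the meantime. On these versions a plpythonu function sees an integer array argument as its literal text, e.g. '{1,2,3}'; the function name is illustrative:]

CREATE FUNCTION sum_ints(a integer[]) RETURNS integer AS $$
    # 'a' arrives as the string '{1,2,3}', not a Python list
    return sum(int(x) for x in a.strip('{}').split(',') if x)
$$ LANGUAGE plpythonu;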
Re: [GENERAL] tsearch2 best practices
Ian Barwick wrote: > 2007/11/18, Mag Gam <[EMAIL PROTECTED]>: >> Hi All, >> >> Planning to implement tsearch2 for my websitem and dbschema. I wanted to >> know if there is a "Best practices" guide I should be following. While >> reading about it, I noticed there were lot of 'gotchas' with this, such as >> back-up/restore, Slony 1 replication issues, etc.. >> >> What do most people recommend for backup/restore solution: 1) Install >> tsearch 2)Alter tables for tsearch2, update trigger, 3) do work .. 4) >> uninstall tsearch2, 5)backup? Is that the right approach? Or something else? >> >> Also, when will tsearch2 part of core, instead of contrib? > > tsearch2 has been integrated into the upcoming 8.3 release (currently beta). > And unless you need this live *now* and can't run a beta on production, I'd recommend working with 8.3. If nothing else, the docs are so much nicer now (big thanks to whoever did this btw! I can't wait to be able to upgrade and wish I had time to do the preliminary migration right now.). -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
Re: [GENERAL] Compressed Backup too big
Filip Rembiałkowski wrote: > 2007/11/15, Andrus <[EMAIL PROTECTED]>: >> "PostgreSQL 8.2.3 on i686-pc-mingw32, compiled by GCC gcc.exe (GCC) 3.4.2 >> (mingw-special)" >> Database size in disk returned by pg_database_size() is 210 MB >> >> Database compressesed backup file size is now 125 MB. > > How do you produce this dump? pg_dump -Fc ? > > If you create plain dump ( pg_dump DBNAME > dump.file ) and zip it > (normal deflate, like winzip or sth like that), what's the size? > > If this is much smaller than 125MB, maybe you got some corner case > with postgres builtin compression. > > You cannot *always* expect 10x ratio... it depends on data. Maybe > usage patterns of your database changed and now the data is less > compressable? We find we get much, much better compression using bzip2. (It would be wonderful if pg_restore supported this natively, but I don't know enough to know if it is possible or not.) -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
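[Editor's note: a minimal sketch of the pipeline, with the database name assumed:]

pg_dump mydb | bzip2 > mydb.sql.bz2
# a plain-format dump restores by streaming it back through psql
bunzip2 -c mydb.sql.bz2 | psql mydb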
Re: [GENERAL] Need help requiring uniqueness in text columns
Matthew Wilson wrote: > I have a table MESSAGE with a text column and a timestamp column. I > want to make sure that I have no rows with duplicates for both values. > I have two ideas right now for how to do this. > > IDEA 1: > > CREATE UNIQUE INDEX nodups on MESSAGE (my_timestamp_col, my_text_col); > > IDEA 2: > > CREATE UNIQUE INDEX nodups on MESSAGE (my_timestamp_col, md5(my_text_col)); > > I am speculating that an index on the md5 is cheaper than on a text > column. I'm willing to risk the chance of a hash collision. > > I don't want to use this index to allow searching inside the text > column. I just want to protect against duplication. > > Are either of these approaches any good? What are other ways to > guarantee uniqueness for the pair of my timestamp column and my text > column? Idea 1 will only work if your text columns never grow too long - I can't recall what the limit is off the top of my head but PG will issue warnings when you reach it. Idea 2 works. You can even use it for exact match searching if you like if you do WHERE my_timestamp_col='...' AND md5(my_text_col)=md5('...') Depending on your app, you might want to round your timestamps to avoid getting two identical messages in the system only a millisecond apart. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
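[Editor's note: a sketch combining both suggestions, rounding the timestamp and hashing the text; both expressions are immutable, so they are legal in a unique index:]

CREATE UNIQUE INDEX nodups ON message (date_trunc('second', my_timestamp_col), md5(my_text_col));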
Re: [GENERAL] plpythonu
Alexandre da Silva wrote: > Hello, > someone can tell me if is secure to create external python modules and > import them to functions/procedures/triggers to use? It's fine as long as you trust the users with write access to your PYTHONPATH. > Another question is that I have read in some discussion list (old > message year 2003) the possibility of plpython be removed from > postgresql, this information is valid yet? plpython != plpythonu. plpython was the 'secure' sandboxed version. The Python devs gave up supporting any sort of sandboxing feature in Python, declaring it impossible. plpythonu is unrestricted, so if you have the ability to create plpythonu stored procedures you effectively have full filesystem access on your database server as the user your database is running as. So don't put open('/etc/passwd','w') in your plpythonu code. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
Re: [GENERAL] UUID column as pimrary key?
>>> Next to that, UUID's are generated by computers. I have no doubts that >>> the numeric space that makes up a UUID allows for collision chances as >>> low as described, but are computers capable of generating those >>> numbers sufficiently random that they actually achieve that low a >>> chance? I think that's pushing it. > The main point with the randomness of UUID's remains that you _can_ have a > collision at any given moment. It's unlikely to ever happen, but you can't > predict when it will happen if it does. The possible consequences of a > collision matter a lot in designing whether and how to handle these > collisions. Maybe it doesn't matter at all, maybe you should get a really > hefty insurance, or maybe you need to evacuate the country. Given the math, I suspect the chance of a UUID collision generated by a respected library is roughly the same or less than any other method of choosing a unique id. If you are looking at these extreme improbabilities, your SERIAL isn't guaranteed unique either when you take into account cosmic rays flipping the right bits in your ECC memory or on your disk platter. So if you are worried about such things, give up now because your application is going to have to run with hardware and software in this universe with all its limitations. Maybe I should start a business in providing UUID collision insurance? -- Stuart Bishop http://www.stuartbishop.net/
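[Editor's note: to put a rough number on 'given the math': a version 4 UUID carries 122 random bits, so by the birthday approximation the collision probability among n well-generated UUIDs is about n^2 / 2^123. Even after generating a trillion UUIDs:]

p ≈ (10^12)^2 / 2^123 ≈ 10^24 / 10^37 ≈ 10^-13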
Re: [GENERAL] Very high latency, low bandwidth replication
On 30 June 2014 15:05, Bob Jolliffe wrote: > What are people's thoughts about a more optimal solution? I would like to > use a more incremental approach to replication. This does not have to be a > "live" replication .. asynchronously triggering once every 24 hours is > sufficient. Also there are only a subset of tables which are required (the > rest consist of data which is generated). WAL shipping is probably best here. Configure an archive_command on the master to compress and push logs to cloud storage, and configure a hot standby on site to pull and decompress the logs. The wal-e tool may make things simpler pushing to cloud storage, or just follow the PostgreSQL documentation to archive the WAL files to a filesystem. If that isn't good enough, you can look at more esoteric approaches (eg. nightly plaintext dumps to a git repository, pushing changes to disk on site). -- Stuart Bishop http://www.stuartbishop.net/
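[Editor's note: a hedged sketch of the file-based half, with the bucket name and the aws CLI as assumptions; wal-e wraps the same idea with compression and retries built in:]

# master, postgresql.conf
archive_mode = on
archive_command = 'gzip < %p > /tmp/%f.gz && aws s3 cp /tmp/%f.gz s3://mybucket/wal/%f.gz && rm /tmp/%f.gz'

# standby, recovery.conf
standby_mode = on
restore_command = 'aws s3 cp s3://mybucket/wal/%f.gz - | gunzip > %p'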
Re: [GENERAL] Linux Distribution Preferences?
> On Sun, Jan 13, 2013 at 08:46:58PM -0700, Scott Marlowe wrote: >> The reasons to NOT use ubuntu under PostgreSQL are primarily that 1: >> they often choose a pretty meh grade kernel with performance >> regressions for their initial LTS release. I.e. they'll choose a >> 3.4.0 kernel over a very stable 3.2.latest kernel, and then patch away >> til the LTS becomes stable. This is especially problematic the first >> 6 to 12 months after an LTS release. I wouldn't call it a reason not to use Ubuntu, but a reason why you might want to use the previous LTS release. The kernel chosen needs to be supported for 5 years, yet remain stable enough for the supported application releases to be supported for 5 years. On Wed, Jan 16, 2013 at 4:02 AM, Bruce Momjian wrote: > This really sums it up for me. Do you need the most recent kernel with > all the performance enhancements and new hardware support, and if so, > are you willing to accept frequent updates and breakage as the bugs are > fixed? I hear lots of people like to wait for the .1 release of the LTS for this sort of reason. It seems a common policy for applications too, steering clear of .0 releases in favor of waiting for the initial patch release. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] plpython intermittent ImportErrors
On Mon, Jan 14, 2013 at 11:30 PM, Brian Sutherland wrote: > Hi, > > I have a plpython stored procedure which sometimes fails when I run my > applications automated test suite. The procedure is called hundreds of > times during the tests but only fails a few times, often with the > following ImportError: > > Traceback (most recent call last): > File "/Users/jinty/.buildout/eggs/setuptools-0.6c11-py2.7.egg/site.py", > line 73, in > __boot() > File "/Users/jinty/.buildout/eggs/setuptools-0.6c11-py2.7.egg/site.py", > line 2, in __boot > import sys, imp, os, os.path > File "/Users/jinty/src/mp/lib/python2.7/os.py", line 49, in > import posixpath as path > File "/Users/jinty/src/mp/lib/python2.7/posixpath.py", line 15, in > > import stat > ImportError: No module named stat The first thing that pops out here is that the paths contain references to buildout. Can you confirm that the exception is actually being received from PostgreSQL, or is the error coming from your test harness? Is your plpython stored procedure supposed to be using the environment constructed by buildout, or the system default environment? (we use buildout for our Python code, but our plpythonu stored procedures use the stock standard Python environment, as provided by the Ubuntu packages). If this is the correct environment, it sounds like you are triggering some sort of race condition in the buildout generated .py files. You might be able to confirm and/or work around the issue by getting your own stanza added to the top of the generated site.py, explicitly importing the problematic modules right at the top before any buildout magic happens. -- Stuart Bishop http://www.stuartbishop.net/
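[Editor's note: something like this hypothetical stanza at the very top of the generated site.py, before any buildout machinery runs, would confirm whether pre-importing sidesteps the race:]

# pre-import the modules that intermittently fail to load mid-bootstrap
import sys, imp, os, os.path, posixpath, stat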
Re: [GENERAL] Is there a way to add a detail message in a warning with pl/Python?
On Tue, Jan 29, 2013 at 8:03 PM, DANIEL CRISTIAN CRUZ wrote: > Hello, > > I build a function and added some warnings where the id of the affected > rows are inside it; and would be nice to put it into a detail, not in > the message, since pgbadger can group it and I can see clearly what is > wrong in my cluster. > > I read the manual and tried something with no success: > > plpy.warning('test', 'detail') -> output both values > plpy.warning({'message': 'test', 'detail': 'detail'}) -> output the dict > plpy.warning('test', detail='detail') -> fail > > Am I missing something? > > I would read the code, but I had no time right now... You might be able to get it for errors by raising plpy.SPIError(error, detail), but that doesn't help for warnings. I think for warnings you would need to write a helper in PL/pgSQL or similar and use it to log the warnings using plpy.prepare/plpy.execute. -- Stuart Bishop http://www.stuartbishop.net/
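[Editor's note: a sketch of that helper, with names invented for illustration:]

CREATE FUNCTION warn_with_detail(msg text, det text) RETURNS void AS $$
BEGIN
    RAISE WARNING '%', msg USING DETAIL = det;
END;
$$ LANGUAGE plpgsql;

Then, from inside the plpython function:

plan = plpy.prepare("SELECT warn_with_detail($1, $2)", ["text", "text"])
plpy.execute(plan, ["test", "detail"])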
Re: [GENERAL] var/log/postgresql deletion mystery Ubuntu 12.10
On Mon, Feb 11, 2013 at 6:36 AM, Andrew Taylor wrote: > Here's what I did to fix this in Ubuntu 12.10. > > Now I cannot explain (a) why this problem came into being or (b) what the > science is behind my fix. This was my first dive into Linux logs and there > being seemingly an array of ways logging can be handled now, and was handled > historically, with some variation across linux distributions. It's a little > bewildering. All I can say it this works, and if someone can offer me > insight into (postgres) logs in Ubuntu it would be appreciated. So here we > go: > > Syslog seems to be one of the key processes available for logging. You can > use this to determine what gets logged by given processes. > > In /etc/rsyslog.d/50-default.conf > Add this line: > local0.* -/var/log/postgresql/ I think that by adding this line, you now have the PostgreSQL startup script and syslog fighting over ownership of this directory. The Debian and Ubuntu packages pipe PostgreSQL's stderr here, and are not configured to use syslog nor PostgreSQL's builtin log rotation. -- Stuart Bishop http://www.stuartbishop.net/
[GENERAL] Failing backups, canceling statement due to conflict with recovery
Hi. I'm unable to offload my backups to one of my PG 9.1 hot standbys using purely streaming replication. After a few hours, usually on the same large table, pg_dump is failing with 'ERROR: canceling statement due to conflict with recovery'. From my reading of the documentation, this should not be possible as my hot standby has 'hot_standby_feedback = on' in its postgresql.conf. I submitted a bug report a while ago, but am still unsure if this option has never worked as intended or if I'm misinterpreting the documentation. My goal is to reliably make pg_dump backups from a hot standby without the hot standby lagging far behind the master. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Failing backups, canceling statement due to conflict with recovery
On Thu, Feb 14, 2013 at 5:20 AM, René Romero Benavides wrote: > What does the following query give you on the standby ? > select * from pg_settings where category = 'Replication / Standby Servers';

             name             | setting
------------------------------+---------
 hot_standby                  | on
 hot_standby_feedback         | on
 max_standby_archive_delay    | 3
 max_standby_streaming_delay  | 3
 wal_receiver_status_interval | 10

-- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Failing backups, canceling statement due to conflict with recovery
On Thu, Feb 14, 2013 at 7:21 AM, Sergey Konoplev wrote: > On Wed, Feb 13, 2013 at 12:53 AM, Stuart Bishop > wrote: >> I'm unable to offload my backups to one of my PG 9.1 hot standbys >> using purely streaming replication. After a few hours, usually on the >> same large table, pg_dump is failing with 'ERROR: canceling statement >> due to conflict with recovery'. >> >> From my reading from the documentation, this should not be possible as >> my hot standby has 'hot_standby_feedback = on' in its postgresql.conf. > > hot_standby_feedback affects VACUUM only to prevent it from removing > dead rows on master that might cause the cleanup conflict. It has no > deal with other hard conflicts like in case of DROP TABLE etc. I can confirm that no DDL is being run (apart from temporary tables). I can also confirm that the replication connection does not drop out. I can't think of what else would be causing problems apart from vacuum, and I used vacuum to trigger the problem in the bug report test case ( http://postgresql.1045698.n5.nabble.com/BUG-7546-Backups-on-hot-standby-cancelled-despite-hot-standby-on-td5724284.html ) Something that might be interesting that I neglected to mention, the DETAIL of the error message is random; on production my failures end up with one of these three:

DETAIL: User query might have needed to see row versions that must be removed.
DETAIL: User was holding a relation lock for too long.
DETAIL: User was holding shared buffer pin for too long.

> Personally I recommend to do pg_dump on master at least on <=9.2. Anything in particular in 9.2? I've been seeing a lot of replication related fixes in 9.1 patch releases and had been planning on sticking with 9.1 for the next 18 months. I'm still unsure if this is supposed to work, and this is a bug in PostgreSQL or Ubuntu, or if I'm just misreading the documentation. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] PostgreSQL binaries under /usr/lib, why?
On Fri, Feb 22, 2013 at 4:41 PM, Rafael Martinez wrote: > Why are many postgres binaries in ubuntu under /usr/lib/postgresql? They are under /usr/lib/postgresql/${VERSION}/bin. This allows multiple PostgreSQL versions to coexist on the same Debian or Ubuntu server. > According to FHS-2.3, /usr/lib includes object files, libraries, and > internal binaries that are not intended to be executed directly by > users or shell scripts. Right. This is why wrappers are installed in /usr/bin that invoke the correct version of the executable when it is known (and a default version when it isn't, such as when you are connecting to a remote server). You shouldn't need to invoke anything from /usr/lib directly, unless you are doing something particularly low level. -- Stuart Bishop http://www.stuartbishop.net/
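[Editor's note: for example, the wrappers accept an explicit cluster when several versions are installed; the cluster name here is an assumption:]

pg_dump --cluster 9.1/main mydb > mydb.sql
# equivalent to running /usr/lib/postgresql/9.1/bin/pg_dump yourself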
[GENERAL] Better dual WAL shipping/streaming integration?
Hi. I currently have a number of servers using both streaming replication and WAL shipping in the standard setup. Occasionally, the primaries get large load spikes. Load climbs up, things slow down. So much so that streaming replication starts lagging because the WAL sender is being starved for resources. Eventually, the slave disconnects. At which point it happily starts consuming the shipped files and catches up again. All this is working just fine and as designed (apart from the load spikes, but that is my problem). However, I was thinking that it would be better if streaming replication did not attempt to pull down WAL files that had already been shipped and were available locally. This would save resources when you need them most: when the hot standbys have fallen behind. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] PSA: If you are running Precise/12.04 upgrade your kernel.
On Fri, Jun 7, 2013 at 5:51 AM, Joshua D. Drake wrote: > > On 06/06/2013 03:48 PM, Scott Marlowe wrote: >> >> >> On Thu, Jun 6, 2013 at 4:35 PM, Joshua D. Drake >> wrote: >>> >>> I had the distinct displeasure of staying up entirely too late with a >>> customer this week because they upgraded to 12.04 and immediately >>> experienced a huge performance regression. In the process they also >>> upgraded >>> to PostgreSQL 9.1 from 8.4. There were a lot of knobs to >>> change/fix/modify >>> because of this. However, nothing I did fixed the problem. Until... I >>> upgraded the kernel. >>> >>> Upgrading from 3.2Precise to the 3.9.4 kernel produced the following >>> results: >> >> >> I've since heard that 3.4 also fixes this issue as well. >> >> What are you using for your IO on these boxes? > > I was able to demonstrate it over iSCSI to a Nimble Storage SAN as well as > DAS with 2 drive RAID 1 for xlogs and 8 drive RAID 10 for data (DL385 G7). This might sound familiar: http://postgresql.1045698.n5.nabble.com/Ubuntu-12-04-3-2-Kernel-Bad-for-PostgreSQL-Performance-td5735284.html tl;dr for that thread seems to be a driver problem (fusionIO?), I'm unsure if Ubuntu specific or in the upstream kernel. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Locale Issue
On Wed, Aug 21, 2013 at 4:55 PM, Don Parris wrote: > I did run pg_dropcluster, pg_createcluster (setting locale to C.UTF8) - and > that seemed to work, except that I could not thereafter make remote > connections (despite resetting the hba and postgres.conf files as they > should be). Rather than chasing locales, I'd look into why you failed here. When you install the PostgreSQL packages, it runs pg_createcluster for you. If you don't like the locale or encoding you used, you run pg_dropcluster and pg_createcluster as you did. The reason why your database did not work after doing this is probably obvious from your log files. -- Stuart Bishop http://www.stuartbishop.net/
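[Editor's note: a sketch of the recreate-and-inspect cycle, assuming a 9.1/main cluster and the stock log location:]

sudo pg_dropcluster --stop 9.1 main
sudo pg_createcluster --locale C.UTF-8 9.1 main --start
tail -n 50 /var/log/postgresql/postgresql-9.1-main.log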
Re: [GENERAL] Postgres backup solution
On 15 March 2017 at 03:04, John McKown wrote: > Your message is not diplaying. At least not for me. I guess that my reader > does not understand the "smime.p7m" file, which shows as an attachment. For > others, his question is: > > === original question from Lawrence Cohan === > > Yes, this is what I intended to ask: > > What would be a recommended solution for backing up a very large Postgres > (~13TeraBytes) database in order to prevent from data deletion/corruption. > Current setup is only to backup/restore to a standby read-only Postgres > server > via AWS S3 using wal-e however this does not offer the comfort of keeping a > full backup available in case we need to restore some deleted or corrupted > data. 'wal-e backup-push' will store a complete backup in S3, which can be restored using 'wal-e backup-fetch'. And since you are already using wal-e for log shipping, you get full PITR available. pg_dump for a logical backup is also a possibility, although with 13TB you probably don't want to hold a transaction open that long and are better off with wal-e, barman or other binary backup tool. -- Stuart Bishop http://www.stuartbishop.net/
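[Editor's note: a sketch with the data directory path and the usual wal-e envdir setup assumed:]

# take a full base backup into S3
envdir /etc/wal-e.d/env wal-e backup-push /var/lib/postgresql/9.6/main
# restore the most recent base backup, then recover WAL on top for PITR
envdir /etc/wal-e.d/env wal-e backup-fetch /var/lib/postgresql/9.6/main LATEST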
Re: [GENERAL] Create Action for psql when NOTIFY Recieved
On 29 August 2017 at 08:42, Jerry Regan wrote: > Tom, > > After a few minutes thought….. > > /s/jr > Consultant > Concerto GR > Mobile: 612.208.6601 > > Concerto - a composition for orchestra and a soloist > > > > On 28Aug, 2017, at 6:08 PM, Tom Lane wrote: > > "David G. Johnston" writes: > > On Mon, Aug 28, 2017 at 1:28 PM, Jerry Regan < > jerry.re...@concertoglobalresources.com> wrote: > > My concern is how, after LISTENing in psql, I can tell it what to do when > the NOTItFY is received. > > > As far as I am aware you cannot. > > > Yes, and psql is not designed to do anything of its own accord, > so I think the answer is really "use another program”. > > > psql would be running on *nix. > > Let’s suppose for a moment that I piped the output of a psql instance to awk > or some similar program, configured to detect the NOTIFY. That program would > then spawn a process to actually perform the work, parameters being whatever > is part of the NOTIFY. Both this psql instance and the awk script would be > dedicated to this task. > > Given this is not intended in any way to be production quality code - in > fact, it’s intended to deliver XML to the client server for validation > (xmllint) in a development/test environment - do you see anything that > clearly won’t work? Also, this would be a very low volume connection. > Perhaps one NOTIFY in five minutes - or longer. > > Yes, it’s a hack. Or crib some code from http://initd.org/psycopg/docs/advanced.html#async-notify or https://godoc.org/github.com/lib/pq/listen_example , which is probably less effort than assembling this collection of hacks and trying to make it reliable. Most PostgreSQL APIs have support for notifications. -- Stuart Bishop http://www.stuartbishop.net/
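[Editor's note: for reference, the psycopg2 pattern from the first link looks roughly like this; the channel name and what you do with each payload are up to you:]

import select
import psycopg2
import psycopg2.extensions

conn = psycopg2.connect("dbname=test")
conn.set_isolation_level(psycopg2.extensions.ISOLATION_LEVEL_AUTOCOMMIT)
cur = conn.cursor()
cur.execute("LISTEN xml_ready;")
while True:
    # block until the connection's socket becomes readable, checking every 5s
    if select.select([conn], [], [], 5) == ([], [], []):
        continue
    conn.poll()
    while conn.notifies:
        notify = conn.notifies.pop(0)
        print(notify.channel, notify.payload)  # e.g. spawn xmllint here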
Re: [GENERAL] ubuntu packages for 8.4
On Fri, Jul 10, 2009 at 8:28 AM, Tim Uckun wrote: > I don't see any ubuntu packages for 8.4 in the default repositories. > > Does anybody know if they will be upgrading the postgresql package to > 8.4 or creating a new package for it. The postgresql-8.4 packages arrived in 9.10 (Karmic) about two hours ago. https://launchpad.net/ubuntu/+source/postgresql-8.4 Assuming you are not running the 9.10 prerelease, you may want to request a backport to whatever Ubuntu release you are running - https://help.ubuntu.com/community/UbuntuBackport. This should be no drama. Major postgresql releases always get a new package. This allows you to have multiple major versions installed and running simultaneously. -- Stuart Bishop http://www.stuartbishop.net/
[GENERAL] Connection pool/load balancer supporting ident authentication?
Hi. Is anyone aware of a connection pool or load balancer for PostgreSQL that supports ident based authentication? Neither pgpool-ii nor pgbouncer support this according to their docs, so I was wondering what else is out there. -- Stuart Bishop http://www.stuartbishop.net/
[GENERAL] attempted to lock invisible tuple - PG 8.4.1
I'm running our products test suite against PostgreSQL 8.4.1. The test suite runs fine against 8.3.7. With 8.4.1, some of our tests are failing with the exception 'attempted to lock invisible tuple'. The failures are repeatable - they crash every time at the same point. They crash no matter if they are being run in isolation or as part of the larger test suite. Anyone know what we could be doing that triggers that? Looking at our statement logs we don't seem to be doing anything unusual. The failing tests I've checked are running under SERIALIZABLE isolation level, and the database will have been recreated a few instants ago using 'createdb --template test_template_db'. One of the statement logs is at http://paste.ubuntu.com/285983/ - I can't see anything unusual going on but it might help diagnose the problem. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] attempted to lock invisible tuple - PG 8.4.1
On Mon, Oct 5, 2009 at 4:22 PM, Alban Hertroys wrote: > On 5 Oct 2009, at 8:58, Stuart Bishop wrote: > >> I'm running our products test suite against PostgreSQL 8.4.1. The test >> suite runs fine against 8.3.7. >> >> With 8.4.1, some of our tests are failing with the exception >> 'attempted to lock invisible tuple'. The failures are repeatable - >> they crash every time at the same point. They crash no matter if they >> are being run in isolation or as part of the larger test suite. >> >> Anyone know what we could be doing that triggers that? Looking at our >> statement logs we don't seem to be doing anything unusual. The failing >> tests I've checked are running under SERIALIZABLE isolation level, and >> the database will have been recreated a few instants ago using >> 'createdb --template test_template_db'. > A similar issue was discussed just recently here: > http://archives.postgresql.org/pgsql-general/2009-09/msg01219.php > > That issue involved cursors though (and a serializable isolation level, but > you have that). Do you have any triggers that use cursors on the table that > the update fails for? There is a trigger on that table, and it is certainly the culprit as can be seen here (different table, same trigger):

launchpad_dev=# begin;
BEGIN
launchpad_dev=# set transaction_isolation to serializable;
SET
launchpad_dev=# update bug set name='foooz' where id=1;
ERROR: attempted to lock invisible tuple
launchpad_dev=# abort;
ROLLBACK
launchpad_dev=# alter table bug disable trigger tsvectorupdate;
ALTER TABLE
launchpad_dev=# begin;
BEGIN
launchpad_dev=# set transaction_isolation to serializable;
SET
launchpad_dev=# update bug set name='foooz' where id=1;
UPDATE 1
launchpad_dev=# abort;
ROLLBACK

I haven't had luck reducing this to a test case though. I'll give it another shot tomorrow.
Here are some more details for the audience:

launchpad_dev=# \d bug
                                       Table "public.bug"
         Column         |            Type             |                    Modifiers
------------------------+-----------------------------+-------------------------------------------------
 id                     | integer                     | not null default nextval('bug_id_seq'::regclass)
 datecreated            | timestamp without time zone | not null default timezone('UTC'::text, ('now'::text)::timestamp(6) with time zone)
 name                   | text                        |
 title                  | text                        | not null
 description            | text                        | not null
 owner                  | integer                     | not null
 duplicateof            | integer                     |
 fti                    | ts2.tsvector                |
 private                | boolean                     | not null default false
 security_related       | boolean                     | not null default false
 date_last_updated      | timestamp without time zone | not null default timezone('UTC'::text, now())
 date_made_private      | timestamp without time zone |
 who_made_private       | integer                     |
 date_last_message      | timestamp without time zone |
 number_of_duplicates   | integer                     | not null default 0
 message_count          | integer                     | not null default 0
 users_affected_count   | integer                     | default 0
 users_unaffected_count | integer                     | default 0
 hotness                | integer                     | not null default 0
Indexes:
    "bug_pkey" PRIMARY KEY, btree (id)
    "bug_name_key" UNIQUE, btree (name)
    "bug__date_last_message__idx" btree (date_last_message)
    "bug__date_last_updated__idx" btree (date_last_updated) CLUSTER
    "bug__datecreated__idx" btree (datecreated)
    "bug__hotness__idx" btree (hotness)
    "bug__users_affected_count__idx" btree (users_affected_count)
    "bug__users_unaffected_count__idx" btree (users_unaffected_count)
    "bug__who_made_private__idx" btree (who_made_private) WHERE who_made_private IS NOT NULL
    "bug_duplicateof_idx" btree (duplicateof)
    "bug_fti" gist (fti)
    "bug_owner_idx" btree (owner)
Check constraints:
    "notduplicateofself" CHECK (NOT id = duplicateof)
    "sane_description" CHECK (ltrim(description) <> ''::text AND char_length(description) <= 5)
    "valid_bug_name" CHECK (valid_bug_name(name))
Foreign-key constraints:
    "bug__who_made_private__fk" FOREIGN KEY (who_made_private) REFERENCES person(id)
    "bug_duplicateof
Re: [GENERAL] attempted to lock invisible tuple - PG 8.4.1
On Mon, Oct 5, 2009 at 11:00 PM, Alvaro Herrera wrote:
> Stuart Bishop wrote:
>> On Mon, Oct 5, 2009 at 4:22 PM, Alban Hertroys wrote:
>>> A similar issue was discussed just recently here:
>>> http://archives.postgresql.org/pgsql-general/2009-09/msg01219.php
>>>
>>> That issue involved cursors though (and a serializable isolation level, but you have that). Do you have any triggers that use cursors on the table that the update fails for?
>>
>> There is a trigger on that table, and it is certainly the culprit as can be seen here (different table, same trigger):
>
> I don't think the committed patch touches anything involved in what you're testing, but if you could grab CVS tip from the 8.4 branch (or the snapshot from ftp.postgresql.org:/pub/snapshot/stable/8.4 ) and give it a try, that'd be great.

I trigger the same error with a freshly built snapshot.

-- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] attempted to lock invisible tuple - PG 8.4.1
On Tue, Oct 6, 2009 at 8:28 PM, Alvaro Herrera wrote: > Stuart Bishop wrote: >> >> >> On Mon, Oct 5, 2009 at 11:00 PM, Alvaro Herrera >> wrote: >> >Stuart Bishop wrote: >> >>On Mon, Oct 5, 2009 at 4:22 PM, Alban Hertroys >> >> wrote: >> > >> >>> A similar issue was discussed just recently here: >> >>> http://archives.postgresql.org/pgsql-general/2009-09/msg01219.php >> >>> >> >>> That issue involved cursors though (and a serializable isolation level, >> >>> but >> >>> you have that). Do you have any triggers that use cursors on the table >> >>> that >> >>> the update fails for? >> >> >> >>There is a trigger on that table, and it is certainly the culprit as >> >>can be seen here (different table, same trigger): >> > >> >I don't think the committed patch touches anything involved in what >> >you're testing, but if you could grab CVS tip from the 8.4 branch (or >> >the snapshot from ftp.postgresql.org:/pub/snapshot/stable/8.4 ) and give >> >it a try, that'd be great. >> >> I trigger the same error with a freshly built snapshot. > mmkay. So, any luck in constructing a test case? Yes. Just no luck getting it sent to the mailing list - seems to silently drop emails with attachments on me :-P I've managed to get a self contained test case assembled. I'm not sure what to make of this. The test case builds a small database from a dump (one table), and triggers the 'invisible tuple' error. If I touch the table though, such as ALTER TABLE or just updating some data in it, the problem disappears.

$ sh invisible.sh
[...]
BEGIN
SET
ERROR: attempted to lock invisible tuple
ROLLBACK
BEGIN
UPDATE 1
COMMIT
BEGIN
SET
UPDATE 1
ROLLBACK

The test case (invisible.sh) and required dump (foodump.sql - 60k) are at http://www.stuartbishop.net/invisible/ -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] attempted to lock invisible tuple - PG 8.4.1
On Wed, Oct 7, 2009 at 3:09 AM, Alvaro Herrera wrote: > Stuart Bishop wrote: > >> >I don't think the committed patch touches anything involved in what >> >you're testing, but if you could grab CVS tip from the 8.4 branch (or >> >the snapshot from ftp.postgresql.org:/pub/snapshot/stable/8.4 ) and give >> >it a try, that'd be great. >> >> I trigger the same error with a freshly built snapshot. > > If you're up for a bit of patching, please test with the attached patch > applied. The patch is working. I am no longer able to trigger the 'attempted to lock invisible tuple' error. Thanks :-) -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] hot_standby_feedback
On Thu, Aug 29, 2013 at 2:44 PM, Tatsuo Ishii wrote: > I have a question about hot_standby_feedback parameter. In my > understanding, if this parameter is on, a long running transaction on > standby will not be canceled even if the transaction conflicts. > As you can see vacuum on the primary removes all the rows in t1. I > thought vacuum will not make the page entriely empty because > of the effect of hot_standby_feedback. > > After while, on standby: > test=# select * from t1; > FATAL: terminating connection due to conflict with recovery > DETAIL: User was holding a relation lock for too long. > HINT: In a moment you should be able to reconnect to the database and repeat > your command. > > Again, this is not what I expected. Am I missing something? I also expected this behavior, and suffer similar problems with PostgreSQL 9.1. I reported this as BUG #7546 with a test case and raised it a few times on the mailing lists, but am no closer to a solution.
Re: [GENERAL] Incorrect password when restarting a cluster
On Wed, Sep 25, 2013 at 5:13 PM, Hombakazi Cynthia Jozela wrote: > Hello Community > > I am new in Postgresql, and I used a "one click graphic installer", and > installed on Ubuntu and I need some assistance: > > I would like to be advised on what I need to do when restarting a cluster as > I am getting a password error when I try to. These are the steps I took: > > hombakazi@hombakazi-VirtualBox:~$ sudo -i > [sudo] password for hombakazi: > root@hombakazi-VirtualBox:~# su > root@hombakazi-VirtualBox:~# su - postgres > $ /opt/PostgreSQL/9.2/scripts/serverctl.sh restart > Please enter your password if requested. > [sudo] password for postgres: > Sorry, try again. > > So I don't understand whats going on because I know my postgres password and > I use it to login and its working but here I get this error. What is it that > I am not doing right? or I need to do.? That is asking for a sudo password, so I think you can just run '/opt/PostgreSQL/9.2/scripts/serverctl.sh restart' from your normal account, without all the preceding sudo and su commands. Most people use the standard PostgreSQL packages with Ubuntu. PostgreSQL 9.1 is available with a few clicks using the software center with Ubuntu 12.04 (precise) and Ubuntu 13.04 (raring). If you want PostgreSQL 9.2 or 9.3, you are currently best off installing Ubuntu 12.04 (precise) and following the instructions at https://wiki.postgresql.org/wiki/Apt -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] streaming replication timeout error
On Wed, Oct 9, 2013 at 9:58 AM, 高健 wrote: > The most important part is: > > 2013-09-22 09:52:47 JST[28297][51d1fbcb.6e89-2][0][XX000]FATAL: Could not > receive data from WAL stream: could not receive data from server: connection > timeout > scp: /opt/PostgresPlus/9.2AS/data/arch/00AC01F1004A: No such > file or directory > > I was asked about: > In what occasion will the above fatal error occur? It is a network error. The TCP/IP socket died somehow, and the FATAL error logged. PostgreSQL then fell back to using the restore_command defined in your recovery.conf file, but the WAL file had not yet been shipped and the output logged. I imagine streaming replication happily reconnected soon after. This is all quite normal. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Linux vs FreeBSD
On 4 April 2014 11:03, François Beausoleil wrote: > Hi all! > > Does PG perform that much better on FreeBSD? I have some performance issues > on a Ubuntu 12.04 which I'd like to resolve. iowait varies a lot, between 5 > and 50%. Does FreeBSD better schedule I/O, which could alleviate some of the > issues, or not at all? I have no experience administering FreeBSD, but I'm > willing to learn if I'll get some performance enhancements out of the switch. Ubuntu 14.04 LTS is being released in a few days and you might have more success with its newer kernel. And try different schedulers if you haven't already - IIRC switching to deadline resolved one of our load problems. -- Stuart Bishop http://www.stuartbishop.net/
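[Editor's note: switching schedulers is a quick, reversible experiment; a sketch assuming the data lives on sda:]

cat /sys/block/sda/queue/scheduler            # current choice shown in [brackets]
echo deadline | sudo tee /sys/block/sda/queue/scheduler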
Re: [GENERAL] Ubuntu Packages / Config Files
On 2 May 2014 01:40, Stephan Fabel wrote: > All, > > apologies if this has been addressed somewhere already. I don't have a > lot of experience in PostgreSQL; this is my first setup where I'm trying > to scale and provide some of the more advanced features (like WAL > shipping, master-slave sync, integrating pgbouncer, etc.), and I'm > looking for help regarding the configuration files. > > I'm using Ubuntu 12.04 for these deployments at the moment. The Ubuntu > packages don't put the configuration files with the cluster data (by > default under /var/lib/postgresql/9.1/main under 12.04), but in > /etc/postgresql/9.1/main) and they start postgres with the -c option > pointing there. > > Whenever I try to add a slave, first I stop the postgresql service, move > the above data directory to something like > /var/lib/postgresql/9.1/main.orig, create a new 'main' directory with > identical permissions/ownerships, and start pg_basebackup pointing > there. It will not copy the server.crt and server.key symlinks (by > default pointing to the "snakeoil" cert/key) so I re-create those. I > then put the appropriate recovery.conf into /etc/postgresql/9.1/main, > given that that's the configuration directory where everything is. I set > "wal_level = hot_standby" and "hot_standby = on" in postgresql.conf. > After I then start the postgresql service again. recovery.conf goes into $DATADIR, which is /var/lib/postgresql/9.1/main in your case. I rationalize this by considering it database state, rather than configuration, since commands like 'pg_ctl promote' or using a trigger file will mess with it. > The problem is that recovery.conf gets ignored in this case. I can add > another symlink pointing to it into the data directory, for example, or > copy the file there, then it works, but honestly this has cost me a LOT > of time figuring out. Having the two, with the master in /etc, will confuse you. And quite possibly PostgreSQL if it needs to promote the database. > So, a couple of questions: > > 1) am I even going about this the right way under an Ubuntu system? Yes. > 2) do the packages available at apt.postgresql.org behave differently? They are more up to date. I think the manual symlink of the SSL files in $DATADIR is no longer needed, instead pulling them from /etc. > 3) do later versions of postgresql behave differently? 8.4->9.3 are all pretty much the same. > Eventually, I'd like to use configuration management tools like puppet > to deploy something like that, but I suppose that's a topic for another day. Yeah... integration with configuration management is going to be interesting when PG allows you to modify config from the SQL command line... -- Stuart Bishop http://www.stuartbishop.net/
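[Editor's note: in other words, on a stock 12.04 standby, with paths as discussed above:]

# config lives in /etc, but recovery.conf is state and lives with the data
sudo -u postgres cp recovery.conf /var/lib/postgresql/9.1/main/recovery.conf
sudo pg_ctlcluster 9.1 main restart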
[GENERAL] pgtune and massive shared_buffers recommendation
Hi. I've got some boxes with 128GB of RAM and up to 750 connections, just upgraded to 9.3 so I'm revising my tuning. I'm getting a recommendation from pgtune to bump my shared_buffers up to 30GB and work_mem to 80MB. Is a shared_buffers this high now sane? The PostgreSQL reference doesn't make recommendations on limits, but it didn't either with earlier versions of PostgreSQL where more than a few GB was normally a bad thing to do. The most recent blog posts I see mentioning 9.3 and modern RAM sizes still seem to cap it at 8GB. (and yes, I am using pgbouncer but stuck in session mode and up to 750 connections for the time being) -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] max_connections proposal
On Fri, May 27, 2011 at 6:22 AM, Craig Ringer wrote: > Best performance is often obtained with the number of _active_ connections > in the 10s to 30s on commonplace hardware. I'd want to use "hundreds" - > because mailing list posts etc suggest that people start running into > problems under load at the 400-500 mark, and more importantly because it's > well worth moving to pooling _way_ before that point. If you can. I'd love a connection pool that knows when I have a resource that persists across transactions like a cursor or temporary table and the backend connection needs to be maintained between transactions, or if there are no such resources and the backend connection can be released to the pool between transactions. I suspect this sort of pool would need to be built into the core. At the moment I only see a benefit with a pool from connections from my webapp which I know can safely go through pgbouncer in transaction pooling mode. Or would there be some way of detecting if the current session has access to stuff that persists across transactions and this feature could be added to the existing connection pools? -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Change request - log line prefix
On Thu, May 31, 2012 at 12:19 PM, Chris Angelico wrote: > On Thu, May 31, 2012 at 2:05 PM, Evan Rempel wrote: >> Even when the wrap column is set to a very large value (32k) STATEMENT lines >> still wrap according to the line breaks in >> the original SQL statement. > > The problem isn't so much the wrapping, it seems, as that your > statements' line breaks are being propagated through. So as a possible > alternative solution, perhaps there could be an option to replace > newlines with spaces before the line goes to the log? I'd certainly like to see this or similar (encode the queries into a single line of ascii, lossy is ok). I like my logs both readable and greppable. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Calculating Replication Lag - units
On Tue, Jun 26, 2012 at 6:21 AM, David Kerr wrote: > On Mon, Jun 25, 2012 at 02:17:22PM -0700, Steve Crawford wrote: > - On 06/25/2012 01:17 PM, David Kerr wrote: > - >Howdy, > - > > - >When calculating Replication lag, I know that we have to compare the > - >pg_current_xlog_location > - >to pg_last_xlog_receive_location, etc. but what I'm trying to figure out > - >is what are > - >the units that I'm left with after the calculation. > - > > - >(i.e., does the xlog_location imply some time value?) > - > > - >Here's the output of the (slightly modified script) > - >Master: 5003964876715 > - >Receive: 5003964876715 > - >Replay: 5003964765203 > - > > - >receive.value 0 > - >apply.value 111512 > - > > - >111512 isn't inherently useful to me on its own. > - > > - >Any tips? > - > > - How about now()-pg_last_xact_replay_timestamp() (however this can be a > - large number if there have not been any recent transactions on the > - master). I suppose you could do something like: > - > - case when pg_last_xlog_receive_location() = > - pg_last_xlog_replay_location() then '0 seconds'::interval > - else now()-pg_last_xact_replay_timestamp() end as log_delay; > > i don't know for sure that 111512 is a time value.. that's kind of > what i'm wondering. If i knew that it was like miliseconds or something > that would be helpful. On the hot standby: SELECT now()-pg_last_xact_replay_timestamp() AS lag; This gives you the lag time as a PostgreSQL interval. (It also might give you a value if you run it on a database that is not a hot standby if it started in recovery mode). It seems difficult or impossible to calculate this on the master. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] Minimal streaming replication
On Tue, Jun 26, 2012 at 6:47 AM, Steve Crawford wrote: > The documentation says "To use streaming replication, set up a file-based > log-shipping standby server as described in Section 25.2" however I'm > not using any of the archive or restore commands but instead use > pg_basebackup to do the initial copy in a script that at its core runs > pg_basebackup then starts the standby server. So... > > Given a sufficiently large wal_keep_segments on the master is this a > reasonable approach? It's what I've set up and it seems to be working fine. > Is there a disadvantage, other than disk-space required, to having > wal_keep_segments set to a fairly large number, say 256 or 512? I set mine to 5000. > However I'm seeing troubling messages in the log. While running pgbench I > see the following types of messages on the master every minute or few: > 2012-06-25 11:36:26 PDT FATAL: could not send data to WAL stream: SSL > error: sslv3 alert unexpected message > 2012-06-25 11:36:26 PDT LOG: invalid magic number in log file 457, > segment 173, offset 15851520 > ... > 2012-06-25 11:36:41 PDT LOG: streaming replication successfully connected > to primary > ... > > Any advice on what this is telling me? I'm not keen on words like "FATAL" in > my logs. I saw this with Ubuntu 12.04 and PostgreSQL 9.1.4, replicating to an identical machine. Google suggested it was caused by different versions of libssl, but I don't think that is the case here unless one of the packages got statically linked with an old libssl. I haven't had time to investigate so I've disabled SSL for now, even though replication appears to work apart from the disconnections. -- Stuart Bishop http://www.stuartbishop.net/
[GENERAL] Streaming replication failover with 3 servers
Hi. I'm trying to work out failover and disaster recovery procedures for a cluster of three servers. Streaming replication is being used with a high wal_keep_segments, no log shipping is happening. I need to avoid the several hours it takes to rebuild a hot standby from scratch. ServerA is the master. ServerB is a streaming hot standby and preferred failover server. ServerC is a streaming hot standby.

For a planned failover, maintenance on ServerA:
1. Shutdown ServerB & ServerC
2. Shutdown ServerA
3. Copy pg_xlog from ServerA to ServerB and ServerC
4. Reconfigure ServerB as master, start it up.
5. Reconfigure ServerC as streaming hot standby of ServerB. Start it.
6. After maintenance, reconfigure ServerA as streaming hot standby of ServerB. Start it.

For an unplanned failover, ServerA has exploded:
1. Run 'SELECT pg_last_xlog_receive_location()' on ServerB and ServerC, determining which is most up to date.
2. Shutdown ServerB and ServerC
3. If ServerC is more up to date, copy pg_xlog from ServerC to ServerB.
4. Reconfigure ServerB as master, start it up.
5. Reconfigure ServerC as streaming hot standby of ServerB, start it up.

Does this look correct to people? Am I going to end up in trouble copying files into pg_xlog like this on a busy system? Is it overengineered? eg. will a master ensure everything is streamed to connected hot standbys before a graceful shutdown? -- Stuart Bishop http://www.stuartbishop.net/
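[Editor's note: a sketch of the step 1 comparison for the unplanned case, hostnames assumed; the standby reporting the higher xlog location is the more up to date:]

psql -h serverb -Atc "SELECT pg_last_xlog_receive_location()"
psql -h serverc -Atc "SELECT pg_last_xlog_receive_location()"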
[GENERAL] pg_dump on hot standby canceled despite hot_standby_feedback=on
Hi. I've found a situation on one of my PG 9.1 servers where pg_dump running on a hot standby gets terminated when a table on the master is vacuumed. This is PostgreSQL 9.1.4, and purely streaming replication. pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery DETAIL: User was holding shared buffer pin for too long. pg_dump: The command was: COPY public.webcatalog_machine (id, owner_id, uuid, hostname, packages_checksum, package_list, logo_checksum) TO stdout; pg_dump: *** aborted because of error hot_standby_feedback is on, and my understanding is that this should instruct the master that there is still an open transaction and vacuum should not clean stuff up that is still in use on the hot standby. Replication is otherwise working flawlessly, and I've confirmed that the walstreamer has been alive the whole time. The pg_dump works when no vacuum kicks in, but I have reproduced the fault by manually running vacuum on the master once the pg_dump has started on this larger table. I think I must be missing something, as I don't see this on my other servers. This database isn't particularly large, with pg_dump finishing in a few minutes. I'm successfully using pg_dump on other hot standbys that take half a day to dump with tables active enough that they certainly should have triggered autovacuums. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pg_dump on hot standby canceled despite hot_standby_feedback=on
I'm still getting my pg_dumps on the 9.1 hot standby cancelled occasionally, despite hot_standby_feedback being set. pg_stat_replication tells me the replication connection is not being reset or anything. The last one was: pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery DETAIL: User was holding a relation lock for too long. Can anyone shed some insight? My understanding of hot_standby_feedback is that it should make this sort of query cancellation never happen. On Tue, Aug 14, 2012 at 6:34 PM, Stuart Bishop wrote: > Hi. > > I've found a situation on one of my PG 9.1 servers where pg_dump > running on a hot standby gets terminated when a table on the master is > vacuumed. This is PostgreSQL 9.1.4, and purely streaming replication. > > pg_dump: Error message from server: ERROR: canceling statement due to > conflict with recovery > DETAIL: User was holding shared buffer pin for too long. > pg_dump: The command was: COPY public.webcatalog_machine (id, > owner_id, uuid, hostname, packages_checksum, package_list, > logo_checksum) TO stdout; > pg_dump: *** aborted because of error > > hot_standby_feedback is on, and my understanding is that this should > instruct the master that there is still an open transaction and vacuum > should not clean stuff up that is still in use on the hot standby. > Replication is otherwise working flawlessly, and I've confirmed that > the walstreamer has been alive the whole time. > > The pg_dump works when no vacuum kicks in, but I have reproduced the > fault by manually running vacuum on the master once the pg_dump has > started on this larger table. > > I think I must be missing something, as I don't see this on my other > servers. This database isn't particularly large, with pg_dump > finishing in a few minutes. I'm successfully using pg_dump on other > hot standbys that take half a day to dump with tables active enough > that they certainly should have triggered autovacuums. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Race condition between hot_standby_feedback and wal_receiver_status_interval ?
Hi. I am running streaming replication with PostgreSQL 9.1.5, and using hot_standby_feedback=on. Per previous messages, I'm still experiencing query cancellations on the hot standbys triggered by vacuums on the primary (http://postgresql.1045698.n5.nabble.com/pg-dump-on-hot-standby-canceled-despite-hot-standby-feedback-on-td5719753.html). pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery DETAIL: User was holding shared buffer pin for too long. pg_dump: The command was: COPY public.webcatalog_machine (id, owner_id, uuid, hostname, packages_checksum, package_list, logo_checksum) TO stdout; pg_dump: Error message from server: ERROR: canceling statement due to conflict with recovery DETAIL: User was holding a relation lock for too long. Looking at the documentation for hot_standby_feedback, "this parameter can be used to eliminate query cancels caused by cleanup records", but "feedback messages will not be sent more frequently than once per wal_receiver_status_interval". I'm wondering if there is a race condition here, where if vacuum kicks in on the primary before feedback information has been sent, then records could be removed that are still needed on the hot standby. My initial thought was vacuum_defer_cleanup_age could be used to check this, but it describes itself correctly as fairly useless since it is set in transactions rather than with a time interval and recommends using hot_standby_feedback as an alternative. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
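For reference, the knobs in play on my systems look like this — values illustrative, not a recommendation:

# standby postgresql.conf
hot_standby = on
hot_standby_feedback = on
wal_receiver_status_interval = 10s   # feedback rides on these status messages
max_standby_streaming_delay = 30s    # grace period before conflicting queries die

# primary postgresql.conf
#vacuum_defer_cleanup_age = 0        # measured in transactions, not time

If the race I'm describing is real, a vacuum on the primary can run inside that first status interval, before the standby's oldest transaction has ever been reported.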
Re: [GENERAL] Timeline switch problem with streaming replication with 3 nodes
On Mon, Sep 24, 2012 at 7:37 PM, wrote: > I've found a discussion > (http://archives.postgresql.org/pgsql-general/2011-12/msg00553.php) on a > similar issue a while back. They talk about sharing WAL files as the > solution. But I thought that the idea with streaming replication was that I > would not need a shared storage. > > Is that the only solution or is there another way? Things should work if you manually copy across the 010.history file from the new master's pg_xlog directory to the slave's. This method isn't documented, but seems to work. I believe the problem is being fixed, by letting the history files be shipped along with the WAL files. http://archives.postgresql.org/pgsql-general/2011-12/msg00456.php -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
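Concretely, something like this — the timeline number, paths and hostnames here are hypothetical; use whatever NNNNNNNN.history file appeared in the new master's pg_xlog:

scp newmaster:/var/lib/postgresql/9.1/main/pg_xlog/00000003.history \
    slave:/var/lib/postgresql/9.1/main/pg_xlog/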
Re: [Slony1-general] Re: [GENERAL] Stripping out slony after / before / during pg_restore?
>>> I'm setting us up a separate staging / test server and I want to read >>> in a pg_dump of our current origin stripping out all the slony stuff. >>> >>> I was thinking this could serve two purposes a) test out backups >>> restore properly and b) provide us with the staging / test >>> server >>> >>> What's the best way to remove all the slony bits? >> >> Well, you can always just drop the slony schema (with a cascade) - >> that should do it. > > Not quite. There are two things that *doesn't* hit: So what was the final recommended process for building a stand alone database from a pg_dump of a replicated node? pg_dump --oids --format=c --file=master.dump master_db createdb staging_db pg_restore -d staging_db master.dump slonik << EOM cluster name = sl; node 1 admin conninfo = 'dbname=staging_db user=slony'; uninstall node (id = 1); EOM This process dies on the last step with: :3: PGRES_FATAL_ERROR select "_sl".uninstallNode(); - ERROR: Slony-I: alterTableRestore(): Table with id 1 not found CONTEXT: SQL statement "SELECT "_sl".alterTableRestore( $1 )" PL/pgSQL function "uninstallnode" line 14 at PERFORM Failed to exec uninstallNode() for node 1 So if I'm reading this thread correctly, the alternative is 'DROP _sl CASCADE;', which doesn't do a full cleanup. Is there no supported disaster recovery procedure? -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [Slony1-general] Re: [GENERAL] Stripping out slony after / before / during pg_restore?
On Mon, Oct 13, 2008 at 5:05 PM, Stuart Bishop <[EMAIL PROTECTED]> wrote: > So what was the final recommended process for building a stand alone > database from a pg_dump of a replicated node? > So if I'm reading this thread correctly, the alternative is 'DROP _sl > CASCADE;', which doesn't do a full cleanup. Is there no supported > disaster recovery procedure? So to (hopefully) answer my own question, the following seems to Do The Right Thing™: pg_dump --oids --format=c --file=master.dump master_db createdb staging_db pg_restore -d staging_db master.dump slonik << EOM cluster name = sl; node 1 admin conninfo = 'dbname=staging_db user=slony'; repair config (set id = 1, event node = 1, execute only on = 1); repair config (set id = 2, event node = 1, execute only on = 1); uninstall node (id = 1); EOM Can anyone who actually knows what they are doing confirm or ridicule this recipe? -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] log_statement and syslog severity
On Wed, Mar 10, 2010 at 8:51 AM, Bruce Momjian wrote: Greg Sabino Mullane wrote: Bruce replied: ... >> This means that, even using syslog as a destination, it's not possible for >> me to filter statements without some sort of log-text parsing, which I'd >> prefer to avoid on effort, performance and data-integrity grounds. > Our logging system is very flexible, but not work-free on the user end. > I don't see us changing things in that area. Bruce, that's a little harsh, I think the original poster has a legitimate request. Personally, I'd love to be able to split the logs on various things, the most important to me being durations and per-database. I looked at the code about a year ago to see how hard this would be and found it non-trivial (for me), as we're really assuming hard things go to a single filehandle. It's definitely an area for improvement, and should be a TODO if not already. This issue has been discussed and I think the community conclusion was that this should not be done by the database but rather by external tools. I think I was giving an accurate portrayal of the odds of this getting added. I do not think there is enough support for this to be a TODO item. How do you plug in this external tool? Installing a filter on stderr doesn't play well with packaged versions of PostgreSQL and probably not Windows either. You also don't get easily machine readable output. It might be possible to trick csvlog into logging to a static filename, and perhaps substituting that with a named pipe might work (under unix at least). syslog doesn't give you easily machine readable output. I'm not sure how syslog implementations handle high load (our sysadmins won't use it, so I haven't investigated this further). I need to be analyzing log messages from PostgreSQL in real time, so am starting to investigate solutions. It seems painful, which would be avoidable for future generations if PostgreSQL could spawn a subprocess and send log messages to that in a machine readable format. Perhaps useful filters might start to exist and eventually end up in contrib or core. -- Stuart Bishop http://www.stuartbishop.net/
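To sketch the named pipe idea (entirely untested speculation; 'my-log-filter' is a made-up placeholder for whatever consumes the stream):

# point log_filename at a static name so the .csv target never rotates,
# then replace the file with a fifo before the logging collector opens it
mkfifo /var/log/postgresql/postgresql.csv
cat /var/log/postgresql/postgresql.csv | my-log-filter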
[GENERAL] Excessive planner time for some queries with high statistics
I'm seeing some queries, possibly to do with using a UNIQUE index, that have fast time reported by EXPLAIN ANALYZE but the actual time as reported by \timing is 150ms+ higher. PostgreSQL 8.4.9 Simple example queries: http://paste.ubuntu.com/726131/ Table definitions: http://paste.ubuntu.com/726193/ Rewriting the query to use common table expressions worked around the original problem: http://paste.ubuntu.com/726141/ ('fixed' version of the original more complex query). We also found this problem did not occur on one of our staging systems, which had a default statistics target of 100. Lowering the statistics on the relevant columns from 1000 to 100 and reanalyzing made the overhead unnoticeable. Thoughts on IRC were that this might be a regression in 8.4.9, but I haven't got earlier versions to test with at the moment. I was asked to obtain some traces but have not been able to organize getting tools on a suitable server yet. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Excessive planner time for some queries with high statistics
On Sat, Nov 5, 2011 at 1:26 AM, Tom Lane wrote: > Stuart Bishop writes: >> We also found this problem did not occur on one of our staging >> systems, which had a default statistics target of 100. Lowering the >> statistics on the relevant columns from 1000 to 100 and reanalyzing >> made the overhead unnoticeable. > > eqjoinsel() is O(N^2) in the number of entries in the MCV lists. > I wouldn't expect this to be an issue unless comparison is pretty > expensive, but maybe those are string not integer columns? No, per http://paste.ubuntu.com/726193/ there are only integer columns being used. >> Thoughts on IRC were that this might be a regression in 8.4.9, but I haven't >> got earlier versions to test with at the moment. > > eqjoinsel has worked like that for many years. Are you claiming you > didn't see this behavior in a prior release? If so, which one? I'm repeating speculation from discussions on IRC with pg devs in my timezone. I haven't tested with earlier versions. I can't confirm if this problem appeared with 8.4.9 or not - it is only recently that our query times got to the stage where we can start worrying about milliseconds instead of seconds :-) Previously, we have never noticed the planner overhead so had set default_statistics_target to 1000 to minimize the chance of bad plans due to a skewed sample. 150ms+ seems excessive though in this simple case. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?
On Sat, Jan 21, 2012 at 3:31 AM, Jerry Richards wrote: > Is synchronous postgresql replication slower than asynchronous? If so, how > much? I am looking into database replication for a phone system, so the > response time is of concern. You might want to investigate pgpool-ii. It sits as a proxy between the client and the databases, and as queries are executed simultaneously, a synchronous replication setup should be just as fast as an unreplicated setup. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Is Synchronous Postgresql Replication Slower Than Asynchronous?
On Mon, Jan 23, 2012 at 9:37 PM, Simon Riggs wrote: > On Mon, Jan 23, 2012 at 2:30 PM, Stuart Bishop > wrote: >> On Sat, Jan 21, 2012 at 3:31 AM, Jerry Richards >> wrote: >>> Is synchronous postgresql replication slower than asynchronous? If so, how >>> much? I am looking into database replication for a phone system, so the >>> response time is of concern. >> >> You might want to investigate pgpool-ii. It sits as a proxy between >> the client and the databases, and as queries are executed >> simultaneously, a synchronous replication setup should be just as fast >> as an unreplicated setup. > > Can you share your actual results on that? No. This is based on my assumptions from the design, not from actual tests. I'm currently asynchronously replicated with Slony-I and looking at PG 9.1 builtin replication for our simpler clusters. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Measuring replication lag time
Hi. I need to measure how far in the past a hot standby is, async streaming replication. On the Hot Standby, "select age(current_timestamp,pg_last_xact_replay_timestamp())" gets me this (or close enough for my purposes - I understand that if there are no updates, there are no logs to replay and the lag time will increase). Is there some way to get this same information on the master? pg_stat_replication contains the log information, but I can't see how to map this to a timestamp. Is there a better way of measuring this? -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Measuring replication lag time
On Thu, Feb 23, 2012 at 2:58 AM, Greg Williamson wrote: > Stuart Bishop shaped the aether to ask: > >> Hi. >> >> I need to measure how far in the past a hot standby is, async >> streaming replication. > > Not sure if this will help, but we are using repmgr > <https://github.com/greg2ndQuadrant/repmgr>; it sets up a monitoring schema > which we poll )see the "Monitoring and Testing" section ... study their > source code some and see how they come up with lag times. Might help indeed. My existing solution already has a small daemon (I can't always query the Slony-I sl_status view fast enough for load balancing web requests, so I maintain a cache). But repmgr seems to cover other work I need to do to keep ops happy so something for me to look closer at. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] A 154 GB table swelled to 527 GB on the Slony slave. How to compact it?
On Wed, Mar 7, 2012 at 10:05 AM, Aleksey Tsalolikhin wrote: > We're replicating a PostgreSQL 8.4.x database using Slony1-1.2.x > > The origin database "data/base" directory is 197 GB in size. > > The slave database "data/base" directory is 562 GB in size and is > over 75% filesystem utilization which has set off the "disk free" siren. > > My biggest table* measures 154 GB on the origin, and 533 GB on > the slave. (*As reported by > > SELECT relname as "Table", pg_size_pretty(pg_total_relation_size(relid)) > As "Size" from pg_catalog.pg_statio_user_tables > ORDER BY pg_total_relation_size(relid) DESC; > ) > > I took a peek at this table on the slave using pgadmin3. The table > has auto-vacuum enabled, and TOAST autovacuum enabled. > > There are 8.6 million live tuples, and 1.5 million dead tuples. > > Last autovacuum was over a month ago. > > Last autoanalyze was 3 hours ago. > > Table size is 4 Gigs, and TOAST table size is 527 Gigs. > Indexes size is 3 Gigs. > > Autovacuum threshold is 20%, and the table is just under that threshold. > > I ran vacuum analyze verbose. But the filesystem is still at 76% > utilization. > In fact, now, the "data/base" directory has grown to 565 GB. > > Why is my slave bigger than my master? How can I compact it, please? Do you have a long running transaction on the slave? vacuum will not reuse space that was freed after the longest running transaction. You need to use the CLUSTER command to compact it, or VACUUM FULL followed by a REINDEX if you don't have enough disk space to run CLUSTER. And neither of these will do anything if the space is still live because some old transaction might still need to access the old tuples. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
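To check for the long running transaction theory, something like this on the slave shows the oldest open transactions (8.4-era column names - procpid and current_query were renamed in later releases):

SELECT procpid, usename, xact_start, current_query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;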
Re: [GENERAL] Is it possible to call other functions inside plpythonu?
On Fri, Apr 27, 2012 at 4:42 PM, Frank Lanitz wrote: > Hi folks, > > Just looking for a nice server side solution to implement some > fundamental logic for an application. plpythonu looks in these terms very > well as I'm liking the syntax of Python. However, a very old blog post > at [1] made me unsure whether really to use it. Is it still (or has it > ever been) an issue that plpythonu is having a lot of overhead and not > able to make use of other functions? Didn't find anything in the docs for > 9.1 about that. There is overhead transforming data to and from types that the Python interpreter can work with. How much overhead is rather subjective - we find it fast enough for the things we use it for. The Python syntax can become a bit cumbersome if you need to interact with the database a lot, in which case plpgSQL is likely the better language for that job. There are counter examples too, where the Python code is much cleaner and in some cases lets you do things impossible in plpgSQL. Access to the Python standard library gives you a great toolkit, and it being 'unsafe' you can do stuff you would otherwise need to write and deploy C extensions for. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
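As a trivial sketch of the sort of thing the standard library buys you — a whitespace squashing function that would be tedious in plpgSQL:

CREATE OR REPLACE FUNCTION squeeze_ws(t text) RETURNS text AS $$
    import re
    # collapse runs of whitespace and trim the ends
    return re.sub(r'\s+', ' ', t).strip()
$$ LANGUAGE plpythonu IMMUTABLE RETURNS NULL ON NULL INPUT;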
Re: [GENERAL] Is it possible to call other functions inside plpythonu?
On Fri, May 4, 2012 at 4:09 PM, Stuart Bishop wrote: > On Fri, Apr 27, 2012 at 4:42 PM, Frank Lanitz wrote: >> Hi folks, >> >> Just looking for a nice server side solution to implement some >> fundamental logic for an application. plpythonu looks in these terms very >> well as I'm liking the syntax of Python. However, a very old blog post >> at [1] made me unsure whether really to use it. Is it still (or has it >> ever been) an issue that plpythonu is having a lot of overhead and not >> able to make use of other functions? Didn't find anything in the docs for >> 9.1 about that. I forgot to mention that you can (and always have been able to) call other functions using plpy.execute("SELECT some_func()"). If you need to invoke Python functions, you can also just 'import' the module like any other Python code and avoid the overheads of converting data to and from PostgreSQL data types. You may need to ensure the PYTHONPATH environment variable gets set correctly if you need to import your own code from the filesystem rather than the standard library. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
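Spelled out, both approaches in one function — some_func() here is a hypothetical SQL function standing in for whatever you need to call:

CREATE OR REPLACE FUNCTION call_both(x integer) RETURNS integer AS $$
    import math                        # plain import, no conversion overhead
    r = plpy.execute("SELECT some_func(%d) AS v" % x)   # goes through SPI
    return r[0]["v"] + int(math.sqrt(x))
$$ LANGUAGE plpythonu;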
Re: [GENERAL] postgreSQL & amazon ec2 cloud
On Tue, Mar 3, 2009 at 11:53 AM, Sanjay Arora wrote: > Is it possible to host postgreSQL on Amazon's cloud? What are the issues > involved? Runs just fine under Ubuntu. We haven't tried it under serious load though. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] pgstattuple triggered checkpoint failure and database outage?
I just had a brief outage on a production server running 8.3.6, which I suspect was triggered by me running a table bloat report making lots of pgstattuple calls. The first I knew of it was when the script I'd just kicked off died: could not open segment 1 of relation 1663/16409/11088101 (target block 131292): No such file or directory CONTEXT: writing block 131292 of relation 1663/16409/11088101 More alerts came in - looks like everything was failing with similar errors. Checking the logs the first indication of the problem is: <@:6160> 2009-03-30 06:49:27 BST LOG: checkpoint starting: time [...] <@:6160> 2009-03-30 06:49:58 BST ERROR: could not open segment 1 of relation 1663/16409/11088101 (target block 131072): No such file or directory <@:6160> 2009-03-30 06:49:58 BST CONTEXT: writing block 131072 of relation 1663/16409/11088101 <@:6160> 2009-03-30 06:49:59 BST LOG: checkpoint starting: time <@:6160> 2009-03-30 06:49:59 BST ERROR: could not open segment 1 of relation 1663/16409/11088101 (target block 134984): No such file or directory <@:6160> 2009-03-30 06:49:59 BST CONTEXT: writing block 134984 of relation 1663/16409/11088101 <@:6160> 2009-03-30 06:50:00 BST LOG: checkpoint starting: time <@:6160> 2009-03-30 06:50:01 BST ERROR: could not open segment 1 of relation 1663/16409/11088101 (target block 135061): No such file or directory <@:6160> 2009-03-30 06:50:01 BST CONTEXT: writing block 135061 of relation 1663/16409/11088101 Doing an immediate shutdown and restart seems to have brought everything back online. I don't think there is any corruption (not that I can tell easily...), and I'm not worried if I lost a transaction or three. Can anyone think what happened here? I suspect pgstattuple as it was the only unusual activity happening at that time and as far as I'm aware we have no hardware alerts and the box has been running smoothly for quite some time. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] Accent insensitive search?
On Tue, Mar 24, 2009 at 4:53 PM, Jasen Betts wrote: > On 2009-03-18, cifroes wrote: >> Hi, >> >> I have a DB in utf-8 and postgres 8.3.x. >> >> How can I do an accent insensitive search (like ...) ? > > use a posix regular expression that matches the string you want: > > select * from foo where colname ~ '^[CcĆćĈĉĊċČč][ĀāĂ㥹Aa][Pp][ŌōŎŏŐő]$'; > > you could write a function to do the translation. Unicode normalization is an interesting way to strip accents I've found: >>> import unicodedata >>> >>> s = u'Björn' >>> s = unicodedata.normalize("NFKD", s) >>> s = ''.join(c for c in s if ord(c) < 127) >>> print s Bjorn You can also use the character names to map many more characters to the ASCII equivalent. A large number of these can be smashed into ASCII using regular expressions and some manual mappings to map LETTER THORN -> th, LETTER LATERAL CLICK -> X etc. Just mapping CAPITAL LETTER XX -> XX and SMALL LETTER XX -> xx seems to get you most of Europe if you special case SHARP S -> ss and THORN -> th. >>> s = u'ァ' >>> print unicodedata.name(s) KATAKANA LETTER SMALL A -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] pgstattuple triggered checkpoint failure and database outage?
On Tue, Mar 31, 2009 at 8:59 AM, Tom Lane wrote: > Stuart Bishop writes: >> I just had a brief outage on a production server running 8.3.6, which I >> suspect was triggered by me running a table bloat report making lots of >> pgstattuple calls. The first I knew of it was when the script I'd just >> kicked off died: >> could not open segment 1 of relation 1663/16409/11088101 (target block >> 131292): No such file or directory >> CONTEXT: writing block 131292 of relation 1663/16409/11088101 >> ... >> Doing an immediate shutdown and restart seems to have brought everything >> back online. > > What's the actual size of that relation now? Is it growing rapidly? > (I'm trying to figure out whether those writes *should* have succeeded, > or whether the block numbers were corrupt in memory.) I can't seem to find a file on disk named 11088101 or an entry in pg_class where relfilenode = 11088101. Are the allocated table oids always increasing? If so, I can pretty much guarantee that the missing relation was a temporary table or the index on the temporary table. It had a single integer column and maybe 50 million rows. -- Stuart Bishop http://www.stuartbishop.net/
Re: [GENERAL] pgstattuple triggered checkpoint failure and database outage?
On Tue, Mar 31, 2009 at 11:10 AM, Tom Lane wrote: > Stuart Bishop writes: >> On Tue, Mar 31, 2009 at 8:59 AM, Tom Lane wrote: >>> What's the actual size of that relation now? Is it growing rapidly? >>> (I'm trying to figure out whether those writes *should* have succeeded, >>> or whether the block numbers were corrupt in memory.) > >> I can't seem to find a file on disk named 11088101 or an entry in pg_class >> where relfilenode = 11088101. > >> Are the allocated table oids always increasing? If so, I can pretty much >> guarantee that the missing relation was a temporary table or the index on >> the temporary table. It had a single integer column and maybe 50million rows. > > The OIDs increase till they wrap around, so what this sounds like is a > problem with somebody fetching temporary-table blocks into shared memory > (where they should never be), and then things going wrong after the > owning backend drops the temp table (without having cleared out shared > buffers, which it won't do because it doesn't think it needs to). Can > you say what was the exact command(s) you were using with pgstattuple? Get a list of everything I'm interested in: SELECT pg_class.oid, nspname, relname FROM pg_class,pg_namespace WHERE relnamespace = pg_namespace.oid AND relkind IN ('r', 't', 'i') -- normal table, toast, index AND nspname IN ('public', '_sl') ORDER BY nspname, relname For every result, get the pgstattuple information I'm interested in for my reports: SELECT table_len, pg_size_pretty(table_len), dead_tuple_len, pg_size_pretty(dead_tuple_len), free_space, pg_size_pretty(free_space) FROM pgstattuple(%(oid)s) I might be passing objects into pgstattuple that it can't handle - the system exploded before I could tune the table selection criteria. I notice that my query will have asked pgstattuple for information about the temporary table. Which appears to work when testing locally, but I'm not engineering any race conditions. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] [fulltext]Gin index full scan
On Mon, May 11, 2009 at 8:04 PM, esemba wrote: > > I've a table with tsvector column named meta_vector. Over this column there > is a gin index. When I execute query like: > select id from theses where meta_vector @@@ to_tsquery('cs', 'foo | (!bar)') > > I get an errror message: query requires full scan, which is not supported by > GIN indexes. > > The weird thing is, that when I drop the index, or disable index scans, it > works. Why can't the planner just use full scans instead of index scans on > such the queries? Thanks for help. You can search the archives for the last time this was brought up. Apparently, it isn't an easy fix. People hoped to have it addressed for 8.4 but I don't know if it made it. Unfortunately, this makes GIN indexes unusable for many applications such as text searching using arbitrary user queries. GIST indexes work, but perform worse for reads. -- Stuart Bishop http://www.stuartbishop.net/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] unique index with bool
[EMAIL PROTECTED] wrote: > CREATE UNIQUE INDEX name on table(param1,param2); > > How to create such unique index when param2 is bool type, and this param2 > should be accepted only in case of true ? > > I tried: CREATE UNIQUE INDEX name on table(param1,(param2 = 'true')); > but it's not working. CREATE UNIQUE INDEX foo ON table(param1, (NULLIF(param2, false))) -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] PL/pgGRESQL, SHA, BYTEA - Creating SHA1 hash for Bytea
Enrico Riedel wrote: > Has anyone an idea on how or any pointer into the right direction to > accomplish the above task? > > Thanks already in advance! If you don't mind having plpythonu installed in your database, a lot of this sort of thing becomes trivial: CREATE OR REPLACE FUNCTION sha1(text) RETURNS char(40) AS ' import sha return sha.new(args[0]).hexdigest() ' LANGUAGE plpythonu IMMUTABLE RETURNS NULL ON NULL INPUT; -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
Re: [GENERAL] Japanese words not distinguished
Harry Mantheakis wrote: >>Correct. The lesson is, never use locale support for Asian languages >>and multibyte encodings including UTF-8. > > > Thank you for your reply - much appreciated. > > I'm now concerned if and how this will affect ORDER BY query results (and > other functions) with respect to Latin-1 names and words. > > I think I'll have to suck it and see, and then post my results - but that > won't be until after this next week-end. C locale and en_* locales give different ordering (at least under Linux). The en_* ordering is case insensitive, and the C locale ordering is case sensitive because it is simply comparing the ASCII codes. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
Re: [GENERAL] PostgreSQL's bug tracker
Greg Stark wrote: > Neil Conway <[EMAIL PROTECTED]> writes: > >>I think debbugs is fairly close to what we'd need, for reasons stated >>earlier: >> >>http://archives.postgresql.org/pgsql-hackers/2005-05/msg01156.php >> >>(I think Bugzilla is *completely* the wrong tool for the Postgres >>development model.) >> >>I've heard vague comments from Debian people that the debbugs code is >>kind of evil, although I haven't confirmed that myself. Writing a system >>like this from scratch would not be much work, anyway... The bug tracker component of Launchpad (aka Malone) is heavily influenced by debbugs, and under active and rapid development. https://launchpad.net/products/pgsql/+bugs It's designed to allow tracking of bugs both in the 'upstream' sourcecode, forks and commercial variants, and in the packages distributed by OS vendors. Developers hang out in #launchpad on freenode.net > Well in fact debbugs was rewritten from scratch not long ago. They added lots > of new features and presumably made the code less evil. I suppose that's a big > presumption though :) > > I agree that it would be a good match though. There's an email interface for > everything and a number of debian packages use a mailing list as the primary > contact which is how I imagine Postgres developers would like things set up. Current email interface documentation for Malone is on our wiki at https://wiki.launchpad.canonical.com/MaloneEmailInterfaceUserDoc We use it internally the way you describe. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
Re: [GENERAL] plpython question
Sim Zacks wrote: | I have the following function and I am getting an invalid syntax error | when I try to run it. I have tried the function a number of ways | including with named parameters and non-named parameters using the | args array. I also tried it with a tab at the beginning and without. | I've also tried with the $$ and with single quotes and the double | single quoting all the existing single quotes. | | Any help would be greatly appreciated. Easier to help if you actually give the error message, and what version of PostgreSQL you are running might be significant too. | create or replace function BatchBalanceStatus(balance int, needed int, freestock int) returns varchar as | $$ | if balance < 0: | return 'Unhandled' | elif freestock >= needed: | return 'OK' | else: | return 'Ordered' | $$ language plpythonu create or replace function BatchBalanceStatus(int, int, int) returns varchar as ' ~balance, needed, freestock = args ~if balance < 0: ~return "Unhandled" ~elif freestock >= needed: ~return "OK" ~else: ~return "Ordered" ' language plpythonu; Works just fine here on 7.4.5 -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] plpython question
Stuart Bishop wrote: | create or replace function BatchBalanceStatus(int, int, int) returns | varchar as ' | ~balance, needed, freestock = args | ~if balance < 0: | ~return "Unhandled" | ~elif freestock >= needed: | ~return "OK" | ~else: | ~return "Ordered" | ' language plpythonu; | | Works just fine here on 7.4.5 Urgh... minus the ~ characters of course that my mail program helpfully inserted :-P -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 3: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [GENERAL] Comment on timezone and interval types
Bruno Wolff III wrote: | Recently there has been some discussion about attaching a timezone to | a timestamp and some other discussion about including a 'day' part | in the interval type. These two features impact each other, since | if you add a 'day' to a timestamp the result can depend on what timezone | the timestamp is supposed to be in. It probably makes more sense to use | a timezone associated with the timestamp than say the timezone GUC or the | fixed timezone UTC. If you add a 'day' to a timestamp, it should be identical to adding 24 hours. Any other interpretation leads to all sorts of weird ambiguities. For example, what is '2am April 3rd 2004 US/Eastern + 1 day'? 2am on April 4th 2004 didn't exist in that timezone because the clocks were put forward and that hour was skipped. If you round up to the nearest existent time, you then have the issue that '2am April 3rd + 1 day == 3am April 3rd + 1 day'. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] Comment on timezone and interval types
Bruno Wolff III wrote: | On Fri, Oct 29, 2004 at 11:14:31 -0600, | Guy Fraser <[EMAIL PROTECTED]> wrote: | |>1 day should always be calculated as 24 hours, just as an hour |>is calculated as 60 minutes... | | | If you want 24 hours you can use 24 hours. Days are not constant length, | just like months aren't constant length. Days *are* of constant length - check your nearest dictionary, which will define it as 24 hours or the period of rotation of the earth. If people see 'day', they think '24 hours' because that is the definition they have been using since preschool. This breeds sleeping bugs that nobody notices until the DST transition kicks in and events happen an hour late or not at all. What you are talking about is useful, but should be called calendar_day or something that makes it obvious that it isn't using the traditional definition. People are used to months being ambiguous so it is less likely to cause upsets, although it still bites people because their toolkit's definition of 'month' does not match their business rules for 'month' (which might be 30 days, 31 days, 4 weeks, calendar month rounded down). -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] PostgreSQL 8.0.0 Release Candidate 3
Marc G. Fournier wrote: | A current list of *known* supported platforms can be found at: | | http://developer.postgresql.org/supported-platforms.html | | We're always looking to improve that list, so we encourage anyone that | is running a platform not listed to please report on any success or | failures with Release Candidate 3. I notice that Ubuntu is not yet on this list. I can confirm that PostgreSQL 7.4.5 is supported under Ubuntu 4.10 (warty) on all platforms (x86, amd64 and ppc) and that PostgreSQL 7.4.6 is supported under Ubuntu 5.04 (hoary) on all platforms. Version freeze for hoary is today, so that version is fixed and the 8 series won't be officially supported until the following release (October 2005), although installing the Debian packages should work just fine. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.canonical.com/ Canonical Ltd. http://www.ubuntulinux.com/ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] PostgreSQL 8.0.0 Release Candidate 4
Tom Lane wrote: Stuart Bishop <[EMAIL PROTECTED]> writes: Marc G. Fournier wrote: | A current list of *known* supported platforms can be found at: | http://developer.postgresql.org/supported-platforms.html I notice that Ubuntu is not yet on this list. I can confirm that PostgreSQL 7.4.5 is supported under Ubuntu 4.10 (warty) on all platforms (x86, amd64 and ppc) and that PostgreSQL 7.4.6 is supported under Ubuntu 5.04 (hoary) on all platforms. We are now in version freeze for hoary, so that version is fixed and the 8 series won't be officially supported until the following release (October 2005), although installing the Debian packages should work just fine. You seem to be under a misconception about the purpose of that list. It has nothing to do with what Postgres releases are packaged by various distributions. Entries on the list mean that someone has verified that the *source code* distribution builds per instructions and passes regression tests on that platform. If you are trying to say that you personally have verified that for Ubuntu, you didn't phrase it clearly. My bad. I can personally confirm that 8.0rc5 builds cleanly and passes the regression tests under Ubuntu i386 (using the current Ubuntu 5.04/hoary development release, now in version freeze). I can't personally confirm the other architectures or the Ubuntu 4.10/warty release (although the results should be identical to Debian). The install docs require some interpretation, as by default under Ubuntu root's password is disabled and you need to use sudo for various steps. I don't think this is a problem, as naive Ubuntu users will generally be using the click'n'drool package manager. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.canonical.com/ Canonical Ltd. http://www.ubuntulinux.com/ ---(end of broadcast)--- TIP 4: Don't 'kill -9' the postmaster
Re: [GENERAL] Postgresql 8.0 and Cancel/Kill backend functions
Magnus Hagander wrote: The cancel function is implemented. See http://developer.postgresql.org/docs/postgres/functions-admin.html#FUNCTIONS-ADMIN-SIGNAL-TABLE. Kill function was considered too dangerous. Pity - I would have loved this for my test harnesses. I need to drop and recreate the database between each test and, unless I can kill them, a test that fails to close a connection victimizes all subsequent tests. (But not a showstopper in our case - we replace the connect method with a wrapper and have the harnesses keep track of the connection. This only leaves connections opened by spawned processes a problem.) It would be great if this was available as an external method I could install into a particular database. Hmm... I guess it wouldn't be difficult to write this - it would simply involve selecting the procpid from pg_stat_activity and sending a kill signal to it, wouldn't it? -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
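For the archives, the quick hack I had in mind looks like this — a sketch only, using the era's args[] calling convention, and exactly the sort of foot-gun the unimplemented kill function was avoiding:

CREATE OR REPLACE FUNCTION kill_backend(integer) RETURNS integer AS '
    import os, signal
    os.kill(args[0], signal.SIGTERM)   # signal one backend from inside another
    return args[0]
' LANGUAGE plpythonu;

-- then, to boot everything out of the test database:
-- SELECT kill_backend(procpid) FROM pg_stat_activity WHERE datname = 'test_db';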
Re: [GENERAL] Multiline plpython procedure
Michael Fuhr wrote: On Tue, Jan 18, 2005 at 07:34:59PM -0800, Adrian Klaver wrote: Actually universal newline support seems to be covered by the following PEP and is present in the version of Python (2.3) I am running. http://www.python.org/peps/pep-0278.txt I see the following in the PEP: There is no support for universal newlines in strings passed to eval() or exec. It is envisioned that such strings always have the standard \n line feed, if the strings come from a file that file can be read with universal newlines. Does the above mean that the PyRun_*() family doesn't have universal newline support? Or at least that some members of the family don't? That would explain why the simple C program I tested failed. http://archives.postgresql.org/pgsql-general/2005-01/msg00876.php I would tend to agree with Hong Yuan that the problem exists in plpythonu's handling of newlines. If Python's behavior is intentional then the newline burden would seem to be on the user or on plpythonu. I think Tom's point is that that's just silly. Changing this behavior in Python would break backwards compatibility. In particular, the exec() function accepts strings that have already been unescaped: >>> exec('print """\n\r\n\r\n"""') In the above example, the exec function is being passed a string containing carriage returns and line feeds - not '\n' and '\r' character sequences. It is too late for the Python 2.3 series anyway - 2.3.5 is being released Jan 26th and there won't be a 2.3.6. If it was championed and it was decided that the above example is a bug and not a feature and a patch produced, it could get into 2.4.1 due April and 2.5+ I suspect this means fixing this problem in plpythonu for 8.1. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 6: Have you searched our list archives? http://archives.postgresql.org
Re: [GENERAL] Multiline plpython procedure
Martijn van Oosterhout wrote: On Wed, Jan 19, 2005 at 06:28:25PM +1100, Stuart Bishop wrote: Michael Fuhr wrote: If Python's behavior is intentional then the newline burden would seem to be on the user or on plpythonu. I think Tom's point is that that's just silly. Changing this behavior in Python would break backwards compatibility. In particular, the exec() function accepts strings that have already been unescaped: exec('print """\n\r\n\r\n"""') In the above example, the exec function is being passed a string containing carriage returns and line feeds - not '\n' and '\r' character sequences. Of course, if the \r is within a literal string, then of course you can't ignore it. Other languages like C and Perl also maintain any character within a string. The point is that outside of character strings, there is no need to consider a \n different from a \r (unless there is a place in Python where an extra newline changes the meaning). Sure, you can't just run dos2unix over the code, but within the parser this is a simple change. Oh - I had confused myself. Your point about dos2unix shows that plpythonu might not be able to do this 'correctly' unless it understands a good chunk of Python syntax. It could do it 'good enough' if that is acceptable. I'll take this to python-dev, but unfortunately I think my comment below about the 2.3 series still stands unless higher powers believe this is a show stopper. It is too late for the Python 2.3 series anyway - 2.3.5 is being released Jan 26th and there won't be a 2.3.6. If it was championed and it was decided that the above example is a bug and not a feature and a patch produced, it could get into 2.4.1 due April and 2.5+ I suspect this means fixing this problem in plpythonu for 8.1. I suggest adding to the Release Notes: User defined functions using the Python language must use the newline delimiter of the server OS. There is currently no standard way of determining the newline delimiter of the server. Note this also affects the portability of pg_dump output. I don't see how it affects the portability of pg_dump. If you have a working Python function (with unix line endings), won't pg_dump dump the source with unix line endings? -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
Re: [GENERAL] Multiline plpython procedure
Tom Lane wrote: Greg Stark <[EMAIL PROTECTED]> writes: Egads. So the set of valid Python programs is different depending on what platform you're on? That's just, uhm, insane. No quibble here. Incidentally, are we sure we've diagnosed this correctly? I'm discussing this with some Python developers and they're expressing skepticism. One just tried a quick test with a Python program containing a mixture of all three newline flavours and it ran fine. He tried reading 'em from files, using Python's text-format-converting file reader, no? See the test case posted up-thread, which demonstrates that feeding a string directly to PyExec (or whatever it's called) produces newline-sensitive results. This is currently being discussed on python-dev: http://mail.python.org/pipermail/python-dev/2005-January/051203.html It looks like my earlier concerns were unwarranted - current consensus seems to be to transform line endings in the string to the one-true-format expected by Python's guts: http://mail.python.org/pipermail/python-dev/2005-January/051214.html -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 8: explain analyze is your friend
Re: [GENERAL] Best Linux Distribution
Esteban Kemp wrote: I'm starting to develop a production enviroment with Postgres and Tomcat, And I have to choose between some free linux distribution like: whitebox RHEL Fedora Suse Which is the better distribution in terms of postgres? if this has an answer If you are looking for boxes to run PostgreSQL, you may want to add Ubuntu to your list. Ubuntu is free to use and supported, and commercial support available too. We use PostgreSQL internally, including on a monster box with 12GB of RAM, and have a vested interest in ensuring it works well. http://www.ubuntulinux.org If you need Java on the boxes running PostgreSQL, that will probably be the decisive factor - Java licencing makes it difficult for free distributions to provide easy installation and support. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.canonical.com/ Canonical Ltd. http://www.ubuntulinux.com/ ---(end of broadcast)--- TIP 5: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Autovacuum Daemon Disrupting dropdb?
Tom Lane wrote: > "Thomas F. O'Connell" <[EMAIL PROTECTED]> writes: >> On Mar 11, 2006, at 4:13 PM, Tom Lane wrote: >>> For a "real" solution, perhaps DROP DATABASE could somehow look to >>> determine if there's an autovac daemon active in the target database, >>> and if so send it a SIGINT and wait for it to go away. > >> In general, it also seems like a --force option or something similar >> would be reasonable for dropdb because the state of the database in >> terms of user activity wouldn't seem to matter a whole lot if the >> intent is to drop it. > > ... except to the processes connected to it. > > If we trusted selective SIGTERM we could imagine sending that to > non-autovac processes connected to the target database, but we don't > really. In any case, killing a database that has active users seems > like a pretty large-caliber foot-gun to me; that condition suggests > *very* strongly that the database is not so idle as all that. I would find this useful. We have a large test suite that drops and recreates a test database as required to maintain test isolation. Two problems we have are: - If a test fails to close all of its connections, the rest of the tests are victimized as the database cannot be dropped. - If you close all your connections and immediately attempt to drop the database, it will often fail as it appears that PostgreSQL is still cleaning up the recently closed connections. I don't know if this is a PostgreSQL issue or an issue on how our database driver closes connections (psycopg1 for Python). To work around the first issue, we have to examine pg_stat_activity for process ids and kill any outstanding ones. To work around he second issue, we attempt to drop a number of times with a short sleep between each try. Which is rather 'icky. I have similar issues I need to deal with on our staging server, which each day automatically needs to have the database reset with a fresh dump of our production database, code updates rolled out and schema and data migration patches applied. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ signature.asc Description: OpenPGP digital signature
Re: [GENERAL] Install issue on Kubuntu
P G wrote: > I have recently switched to Kubuntu and I just installed Postgres on > it. The installation was successful. > > When I try psql, I get this error message, though: > > % psql postgres > Error: You must install at least one postgresql-client- package. > > But I have installed postgresql-client-common and re-installs do not > seem to help. > > Any suggestions? Does /etc/postgresql-common/user_clusters exist? If not, you have struck a known bug in the Ubuntu packaging that I was told has since been fixed. An update should fix things (sudo apt-get update; sudo apt-get dist-upgrade). Or 'sudo touch /etc/postgresql-common/user_clusters' will get you going. If this doesn't help, you can open a support request at https://launchpad.net/distros/ubuntu/+source/postgresql-8.1/+tickets Or, if you think you know what is going wrong, a bug report at https://launchpad.net/distros/ubuntu/+source/postgresql-8.1/+bugs might be more appropriate. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.canonical.com/ Canonical Ltd. http://www.ubuntu.com/
Re: [GENERAL] Best high availability solution ?
Arnaud Lesauvage wrote: > Hi list ! > > I have a small enterprise network (~15 workstations, 1 server), all > running windows OSes. Most of our work is done on a PostgreSQL DB (on > the windows server). > I am the only IT here, and my boss asked me to find a way to have the > database always online, without my intervention. > Last time I went on vacation, the server crashed and no one was able to > repair it. If your application is normally reliable, I think the best, cheapest and simplest way of keeping the system online when you are on leave is to give your work the phone number of a company offering PostgreSQL support services. I would avoid adding the extra complexity and additional support and maintenance burdens unless you really need it - can work afford to be without the system for a day if it crashes? And do you expect it to happen infrequently enough that the outages will not be a problem? The advantage of having a human available to restore functionality is that they will be able to deal with the situations you haven't thought of, whereas an automated solution will likely only deal with the situations you have thought of as well as making the system more complex, thus creating more things that could go wrong. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/
[GENERAL] 'NOW' in UTC with no timezone
Hi. I'm trying to determine the best way of saying 'The current time in UTC with no time zone information'. I'm currently using CURRENT_TIMESTAMP AT TIME ZONE 'UTC' and inserting into columns defined as TIMESTAMP WITHOUT TIME ZONE which appears to work. However, PostgreSQL parses this into the much more confusing "timezone('UTC'::text, ('now'::text)::timestamp(6) with time zone)" which is what is appearing in my generated documentation. Is there any magic string like 'NOW'::timestamp or CURRENT_TIMESTAMP which returns UTC time? -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 2: you can get off all lists at once with the unregister command (send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
Re: [GENERAL] 'NOW' in UTC with no timezone
Tom Lane wrote: | Stuart Bishop <[EMAIL PROTECTED]> writes: | |>I'm trying to determine the best way of saying 'The current time in UTC |>with no time zone information'. | | | Isn't that a contradiction in terms? Not at all - I want 'now' in UTC time without the extra baggage of the timezone information, since it is a well known fact. The functionality of python's datetime.utcnow() or time.gmtime() basically. I can get this, but it looks a bit ugly and confusing. I'll probably solve this using a stored procedure. | I *think* maybe what you want is to SET TIMEZONE = 'UTC' and then | stop worrying about it. But anyone who is worried about timezones | and yet is storing his data in timestamp-without-time-zone columns | probably needs to reconsider exactly what his data represents. | What is it that you actually want to store, and how do you want | it presented? I've got that set on the production server - this is mainly to ensure that developers on their local instances are catered for and to make things explicit. It makes the other developers more aware of what is going on rather than things just accidentally working in most cases. We are a Python house and not a Perl house in other words :-) How much overhead is there in storing a timestamp with timezone as opposed to one without? You would need an extra few bits for the offset but I don't know if that affects the total number of bytes to store it. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
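Following up on the stored procedure idea, the wrapper I had in mind is a one-liner — a sketch, written in the old single-quote style to suit 7.4/8.0:

CREATE OR REPLACE FUNCTION utc_now() RETURNS timestamp AS '
    SELECT CURRENT_TIMESTAMP AT TIME ZONE ''UTC''
' LANGUAGE sql STABLE;

which at least keeps the timezone() noise out of the generated documentation.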
Re: [GENERAL] 'NOW' in UTC with no timezone
Tom Lane wrote: | Stuart Bishop <[EMAIL PROTECTED]> writes: | |>How much overhead is there in storing a timestamp with timezone as |>opposed to one without? | | | Exactly zero. You have a misconception about what the datatype really | does --- see other responses in this thread. Indeed - I was under the impression that the timezone would be preserved (which is the case in the external datetime libraries I use), but I now see that PostgreSQL will lose this information. -- Stuart Bishop <[EMAIL PROTECTED]> http://www.stuartbishop.net/ ---(end of broadcast)--- TIP 8: explain analyze is your friend