[GENERAL] invalid page header in block ....
Hello, I've been working with PostgreSQL on Windows for almost a year. But when I put it into production at a customer site, it has shown the following error after a power cut: invalid page header in block aaa of relation bbb, where aaa is a block number and bbb is a table. Trying to find out what the problem could be, I could only determine that it corrupts the file corresponding to the table, leaving it in a format that cannot be read again. I wasn't able to reproduce the error on any PC in the company. At first I thought it could be a problem with Cygwin, but I tried the native Windows version and the error persists. I would like to know whether it is something related to PostgreSQL configuration or a bug. I'm working with Windows 2000 and Windows XP (and both show the same problem). I access the database through ADO, and I use Delphi as the programming language. The PostgreSQL version is 8 beta 2 for Windows. Also, none of the databases where the error appears is over 300MB. Thanks, and I look forward to receiving your answer as soon as possible. Expósito, Sergio (System Engineer) Ábaco Informática S.A. ---(end of broadcast)--- TIP 7: don't forget to increase your free space map settings
[GENERAL] MongoDB FDW Problem.
Best wishes to all on this list. I'm trying to use the FDW for MongoDB with PostgreSQL 9.4, installed from the pgdg yum repo: yum install postgresql94-server postgresql94-contrib mongo_fdw94. My OS is CentOS 9.7, 64-bit. All is OK with Postgres: I can create the extension, the foreign server, and the foreign table. My problem is when the "columns" of the MongoDB collection are a document or an array: for those, I get only NULL values back in selects on my foreign table. All the other columns return their values with no problem. Can anyone help or advise me on this? Thanks a lot!

Lucio - Rio de Janeiro - Brazil

Please see some info below. In Mongo I have this in the collection:

{
    "_id" : ObjectId("56ae3a5c2e277c04c77beec7"),
    "address" : {
        "building" : "469",
        "coord" : [-73.961704, 40.662942],
        "street" : "Flatbush Avenue",
        "zipcode" : "11225"
    },
    "borough" : "Brooklyn",
    "cuisine" : "Hamburgers",
    "grades" : [
        { "date" : ISODate("2014-12-30T00:00:00Z"), "grade" : "A", "score" : 8 },
        { "date" : ISODate("2014-07-01T00:00:00Z"), "grade" : "B", "score" : 23 },
        { "date" : ISODate("2013-04-30T00:00:00Z"), "grade" : "A", "score" : 12 },
        { "date" : ISODate("2012-05-08T00:00:00Z"), "grade" : "A", "score" : 12 }
    ],
    "name" : "Wendy'S",
    "restaurant_id" : "30112340"
}

On PostgreSQL I get this row (note that "address" and "grades" come back empty):

"_id"           "56ae3a5c2e277c04c77beec7"
"address"
"borough"       "Brooklyn"
"cuisine"       "Hamburgers"
"grades"
"name"          "Wendy'S"
"restaurant_id" "30112340"

The foreign table is defined as:

CREATE FOREIGN TABLE mongo_fdw.restaurants (
    _id NAME NOT NULL,
    address JSON,
    borough VARCHAR,
    cuisine VARCHAR,
    grades VARCHAR,  -- errors happen if I put json here
    name VARCHAR,
    restaurant_id VARCHAR
) SERVER mongo_server
OPTIONS (database 'test', collection 'restaurants');
[GENERAL] spi/timetravel: unique constraint violation on UPDATE
Hey, I'm having a problem with the timetravel extension, with the following simple schema:

CREATE EXTENSION timetravel;
CREATE TABLE entries (
    id SERIAL NOT NULL,
    value INT NOT NULL,
    valid_from ABSTIME NOT NULL,
    valid_to ABSTIME NOT NULL,
    PRIMARY KEY (id, valid_from),
    UNIQUE (id, valid_to)
);
CREATE TRIGGER time_travel BEFORE INSERT OR UPDATE OR DELETE ON entries
    FOR EACH ROW EXECUTE PROCEDURE timetravel(valid_from, valid_to);

I need to be able to reference specific versions of an entry, as well as the always-latest version. That's why I'm using both a primary key and a unique constraint/index here (the PK for a specific version, the unique index to be able to refer to id + valid_to = 'infinity'). Now if I try updating a row:

database=# INSERT INTO entries (value) VALUES (12);
INSERT 0 1
database=# SELECT * FROM entries;
 id | value |       valid_from       | valid_to
----+-------+------------------------+----------
  1 |    12 | 2017-07-11 17:11:51+02 | infinity
(1 row)

database=# UPDATE entries SET value=14 WHERE id=1;
ERROR:  duplicate key value violates unique constraint "entries_pkey"
DETAIL:  Key (id, valid_from)=(1, 2017-07-11 17:11:51+02) already exists.
CONTEXT:  SQL statement "INSERT INTO entries VALUES ( $1,$2,$3,$4)"

What am I doing wrong here? According to the function comment in timetravel.c it should: a) set the last-version row to the NEW data, with valid_from=now(); b) insert a new row with the OLD data, with valid_to=now() - at which point the old valid_from is already supposed to have been updated.

> 1. IF an update affects tuple with stop_date eq INFINITY
> then form (and return) new tuple with start_date eq current date
> and stop_date eq INFINITY [ and update_user eq current user ]
> and all other column values as in new tuple, and insert tuple
> with old data and stop_date eq current date
> ELSE - skip updating of tuple.

Source: https://github.com/postgres/postgres/blob/master/contrib/spi/timetravel.c#L49

Or is the INSERT done before the UPDATE? Then how can I use UNIQUE constraints here?
I can't make them INITIALLY DEFERRED because I need to be able to use foreign keys. I also have some debug logs:

[47-1] DEBUG: 0: StartTransactionCommand
[47-2] LOCATION: start_xact_command, postgres.c:2442
[47-3] STATEMENT: UPDATE entries SET value=2 WHERE id=1;
[48-1] DEBUG: 0: StartTransaction
[48-2] LOCATION: ShowTransactionState, xact.c:4929
[48-3] STATEMENT: UPDATE entries SET value=2 WHERE id=1;
[49-1] DEBUG: 0: name: unnamed; blockState: DEFAULT; state: INPROGR, xid/subid/cid: 0/1/0, nestlvl: 1, children:
[49-2] LOCATION: ShowTransactionStateRec, xact.c:4967
[49-3] STATEMENT: UPDATE entries SET value=2 WHERE id=1;
[50-1] LOG: 0: statement: UPDATE entries SET value=2 WHERE id=1;
[50-2] LOCATION: exec_simple_query, postgres.c:946
[51-1] DEBUG: 0: ProcessQuery
[51-2] LOCATION: ProcessQuery, pquery.c:168
[51-3] STATEMENT: UPDATE entries SET value=2 WHERE id=1;
[52-1] DEBUG: 0: timetravel (entries) update: sql: INSERT INTO entries VALUES ( $1,$2,$3,$4)
[52-2] LOCATION: timetravel, timetravel.c:336
[52-3] STATEMENT: UPDATE entries SET value=2 WHERE id=1;
[53-1] ERROR: 23505: duplicate key value violates unique constraint "entries_pkey"
[53-2] DETAIL: Key (id, valid_from)=(1, 2017-07-15 16:50:59+02) already exists.
[53-3] CONTEXT: SQL statement "INSERT INTO entries VALUES ( $1,$2,$3,$4)"
[53-4] LOCATION: _bt_check_unique, nbtinsert.c:424
[53-5] STATEMENT: UPDATE entries SET value=2 WHERE id=1;

-- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] spi/timetravel: unique constraint violation on UPDATE
On 15 July 2017 at 23:15, Tom Lane wrote:
> postgre...@get-experience.com writes:
>> I'm having a problem with the timetravel extension. Following simple schema:
>> ...
>> What am I doing wrong here? According to the function comment in
>> timetravel.c it should:
>> a) set last-version row to NEW data; valid_from=now()
>> b) insert a new row with OLD data; valid_to=now() - at this point the
>> old valid_from is already supposed to be updated.
>
> I don't see anywhere in that comment that says "the old valid_from is
> already supposed to be updated". Given that the INSERT of the cloned row
> happens during the BEFORE UPDATE trigger, it would take a lot of magic
> for things to happen that way ;-).

Yeah, good point.

> Perhaps you could make your PK be on (id, valid_from, valid_to).

That doesn't really work, because valid_to would change on UPDATE; I'd need to update foreign relations with another trigger, which would be very ugly. I guess the only solution is to write my own time-travel trigger function and do the update before the insert...
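For what it's worth, the "update before insert" idea at the end could be sketched roughly like this (untested, and all function/trigger names are made up): split the work between a BEFORE trigger that stamps the surviving row as the new open version and an AFTER trigger that inserts the historical copy, so the history INSERT only runs once the main row already carries its new valid_from and no longer collides with the (id, valid_from) primary key.

```sql
-- Untested sketch of a hand-rolled timetravel replacement.
CREATE FUNCTION entries_stamp_new() RETURNS trigger AS $$
BEGIN
    -- Stamp the surviving row as the new open version.
    NEW.valid_from := now();
    NEW.valid_to   := 'infinity';
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE FUNCTION entries_save_old() RETURNS trigger AS $$
BEGIN
    -- By AFTER-trigger time the main row already has its new
    -- valid_from, so this INSERT does not violate the
    -- (id, valid_from) primary key.
    INSERT INTO entries (id, value, valid_from, valid_to)
    VALUES (OLD.id, OLD.value, OLD.valid_from, now());
    RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER entries_stamp BEFORE UPDATE ON entries
    FOR EACH ROW WHEN (OLD.valid_to = 'infinity')
    EXECUTE PROCEDURE entries_stamp_new();

CREATE TRIGGER entries_save AFTER UPDATE ON entries
    FOR EACH ROW WHEN (OLD.valid_to = 'infinity')
    EXECUTE PROCEDURE entries_save_old();
```

The history INSERT only fires INSERT-side triggers (of which there are none here), so it does not recurse back into the UPDATE triggers.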
[GENERAL] Good Books
Hi I am new to PostgreSQL and find that the standard documentation is very thin. I would like to buy a more comprehensive book. All the available books seem very out of date!!! Firstly, does it matter if I buy a book that was published in Feb 2003? Will such a book be hopelessly out of date? Does anyone know of any books that will be releasing imminently? Thanks Craig ---(end of broadcast)--- TIP 9: the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [GENERAL] Good Books
Hi Bruno

There are a lot of topics that I feel could have better (or more comprehensive) examples. In particular I am thinking of plpgsql. One example is information about working with exceptions (PostgreSQL-specific), and another is the small amount of info about the RAISE statement. Referring to my mention of exceptions above, the following is information that I could not find in the standard docs: how to use (return using RAISE) the actual exception code and message once you have handled the error. This is especially useful if you have used WHEN OTHERS to catch the error.

Thanks Craig

> On Thu, Mar 24, 2005 at 14:37:59 +0200, [EMAIL PROTECTED] wrote:
>> Hi
>>
>> I am new to PostgreSQL and find that the standard documentation is very
>> thin. I would like to buy a more comprehensive book. All the available
>> books seem very out of date! Firstly, does it matter if I buy a book
>> that was published in Feb 2003? Will such a book be hopelessly out of
>> date?
>
> What parts of the documentation do you find thin? Without knowing what
> you need extra info on, it will be hard to recommend other documentation.
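For the record, the pieces asked about here do exist in later versions of plpgsql: inside an EXCEPTION block, the special variables SQLSTATE and SQLERRM carry the code and message of the caught error, even under WHEN OTHERS (I believe this arrived around the 8.0/8.1 releases, so it may not be in the docs Craig was reading). A minimal illustration, with a made-up function name:

```sql
CREATE OR REPLACE FUNCTION safe_divide(a numeric, b numeric)
RETURNS numeric AS $$
BEGIN
    RETURN a / b;
EXCEPTION
    WHEN OTHERS THEN
        -- SQLSTATE / SQLERRM hold the caught error's code and message.
        RAISE NOTICE 'caught %: %', SQLSTATE, SQLERRM;
        RETURN NULL;
END;
$$ LANGUAGE plpgsql;

SELECT safe_divide(1, 0);
-- NOTICE:  caught 22012: division by zero
```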
[GENERAL] Scripting issues
Hi

I come from an MSSQL background and am trying to figure out how to write deployment scripts for PostgreSQL. Typically, if I want to drop a function, I would write a script that first checks for its existence and then performs the drop. In MSSQL it is as easy as (this can be done in native TSQL):

IF EXISTS (...some query to system tables...)
    DROP FUNCTION XXX

However, I am really struggling to understand how to do this in PostgreSQL. It seems the standard SQL language doesn't support the IF statement. All the development that I do needs to be deployed in a script fashion, and generally I need to check for the existence of an object before replacing or dropping it. Any help will be much appreciated.

Thanks Craig
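One hedged sketch of what the check-then-drop can look like in PostgreSQL, using a throwaway plpgsql function as the IF carrier (the function names here are placeholders, not anything from the thread):

```sql
-- Wrap the conditional in plpgsql, since plain SQL has no IF.
CREATE OR REPLACE FUNCTION drop_xxx_if_exists() RETURNS void AS $$
BEGIN
    IF EXISTS (SELECT 1 FROM pg_proc WHERE proname = 'xxx') THEN
        EXECUTE 'DROP FUNCTION xxx()';
    END IF;
END;
$$ LANGUAGE plpgsql;

SELECT drop_xxx_if_exists();      -- run the conditional drop
DROP FUNCTION drop_xxx_if_exists();  -- clean up the helper
```

The deployment script stays pure SQL: create the helper, call it, drop it.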
Re: [GENERAL] Scripting issues
> [EMAIL PROTECTED] wrote:
>> Hi
>>
>> I come from an MSSQL background and am trying to figure out how to write
>> deployment scripts for PostgreSQL. Typically, if I want to drop a
>> function, I would write a script that first checks for its existence and
>> then performs the drop.
>>
>> In MSSQL it would be as easy as (this can be done in native TSQL):
>> IF EXISTS (...some query to system tables...)
>> DROP FUNCTION XXX
>>
>> However, I am really struggling to understand how to do this in
>> PostgreSQL. It seems the standard SQL language doesn't support the IF
>> statement.
>>
>> All the development that I do needs to be deployed in a script fashion,
>> and generally I need to check for the existence of an object before
>> replacing or dropping it.
>>
>> Any help will be much appreciated.
>
> It's simple enough to write a plpgsql function that takes two text
> parameters - execute the first and see if any rows are returned, then
> execute the second if any rows were.
>
> --
> Richard Huxton
> Archonet Ltd

Hi Richard

Thanks for the suggestion. I'm sure I'll go that way. One other question: since in PostgreSQL you can have "overloaded" functions, how do you query the system tables for the existence of a particular version of a function? I can query information_schema.routines for the function name, but not for the particular parameters.

Thanks Craig
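To pin down one specific signature of an overloaded function, one catalog-level route is pg_proc plus the oidvectortypes() helper, which renders the argument-type list as text; a sketch (the schema, function name, and argument types are hypothetical):

```sql
-- Does public.my_func(integer, text) exist, as opposed to some
-- other overload of my_func?
SELECT p.oid::regprocedure
FROM pg_proc p
JOIN pg_namespace n ON n.oid = p.pronamespace
WHERE n.nspname = 'public'
  AND p.proname = 'my_func'
  AND oidvectortypes(p.proargtypes) = 'integer, text';
```

If the query returns a row, that exact overload exists and can be dropped by the same signature.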
[GENERAL] Statistics and Indexes
Hi

I am from an MSSQL background and am trying to understand something about statistics in PostgreSQL.

Question 1: In MSSQL, if you create an index (and you are using MSSQL's default settings), the server will automatically create appropriate statistics for you. Does this happen in PostgreSQL? Or should I explicitly create statistics for every one of my indexes?

Question 2: I believe ANALYZE keeps indexes and statistics up to date. How often should this be run (assuming my DB gets 200,000 new records daily)?

Thanks in advance Craig
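For reference, PostgreSQL keeps per-column statistics gathered by ANALYZE rather than per-index statistics objects, so there is nothing extra to create per index; the commands involved look like this (table and column names are placeholders):

```sql
ANALYZE my_table;                            -- refresh planner statistics now
ALTER TABLE my_table
    ALTER COLUMN my_col SET STATISTICS 100;  -- larger sample for one column
VACUUM ANALYZE my_table;                     -- reclaim dead rows and analyze together
```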
[GENERAL] PostgreSQL Scalability
Hi

I am trying to get information about the scalability options available for PostgreSQL. Assuming you have a database with a huge volume of reads and writes, what options do you have to scale it? Many commercial RDBMSs allow the clustering of database servers, among other options. What options are available for PostgreSQL?

Thanks Craig
[GENERAL] Cross database queries
Hi

Suppose I have two databases on one server (let's call them DB_A and DB_B) and I want to write a cross-database query. How do I do this in PostgreSQL? In MS-SQL I would do something like:

SELECT ta.Field1, ta.Field2, tb.Field2
FROM DB_A.dbo.SomeTable ta
JOIN DB_B.dbo.SomeOtherTable tb ON ta.Field1 = tb.Field1

Thanks Craig
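A single PostgreSQL connection cannot span databases the way the MS-SQL example does, but the contrib dblink module gets close; a hedged sketch of the equivalent query (lower-cased names and the connection string are assumptions):

```sql
-- Run from a session connected to DB_A; dblink opens DB_B on the fly.
-- Because dblink returns SETOF record, the AS clause must supply a
-- column definition list.
SELECT ta.field1, ta.field2, tb.field2
FROM sometable ta
JOIN dblink('dbname=DB_B',
            'SELECT field1, field2 FROM someothertable')
         AS tb(field1 integer, field2 text)
  ON ta.field1 = tb.field1;
```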
[GENERAL] Restoring Database created on windows on FreeBSD
Hi

I am very new to FreeBSD (Windows background), but am busy trying to implement on FreeBSD a PostgreSQL database that I have running in the Windows environment. Naturally, most of my problems so far have been getting to grips with FreeBSD. Anyway, I am having a problem at the moment. Having been previously spoilt by the ease of installation on Windows, I am sure that I have missed the boat on something. I downloaded the latest port for PostgreSQL-8.0.4 from postgresql.org. I followed the instructions in the documentation and all seemed to work. I have successfully created "test" databases and have happily managed to connect to the new PostgreSQL server using psql and pgAdmin III (on Windows). I have now backed up my DB from Windows and have created a new database on the BSD server. When restoring this DB I got a lot of errors related to some of the system-installed functions. I then realised that the PostgreSQL installation on FreeBSD did not install any of the contrib modules. I went to the /contrib folder in the port I used to install PostgreSQL (/usr/ports/database/postgresql-8.0.4/contrib). I excluded /cube and /mysql from the Makefile (cube seemed to be failing, and why would I want MySQL stuff on PostgreSQL?). I then did a gmake install. All contribs seemed to install fine. I reran the restore on a clean DB and most of the errors went away, but I am still getting errors, all related to system functions looking for '$libdir/admin'. Does anyone know what I could have missed here? Any help would be appreciated.

Thanks Craig
Re: [GENERAL] Restoring Database created on windows on FreeBSD
Hi Sven

Thanks for the info. This is a great help.

> On Wed, 2005-10-19 at 08:23 +0200, [EMAIL PROTECTED] wrote:
>> Hi
>>
>> I am very new to FreeBSD (Windows background), but am busy trying to
>> implement on FreeBSD a PostgreSQL database that I have running in the
>> Windows environment. Naturally, most of my problems so far have been
>> getting to grips with FreeBSD.
>>
>> Anyway, I am having a problem at the moment. Having been previously
>> spoilt by the ease of installation on Windows, I am sure that I have
>> missed the boat on something.
>
> FreeBSD ports is an amazingly easy way to install software.
>
>> I downloaded the latest port for PostgreSQL-8.0.4 from postgresql.org.
>
> The postgresql website does not have FreeBSD ports ... just source code
> from what I can see.
>
>> I followed the instructions in the documentation and all seemed to work.
>> I have successfully created "test" databases and have happily managed to
>> connect to the new PostgreSQL server using psql and pgAdmin III (on
>> Windows). I have now backed up my DB from Windows and have created a new
>> database on the BSD server. When restoring this DB I got a lot of errors
>> related to some of the system-installed functions. I then realised that
>> the PostgreSQL installation on FreeBSD did not install any of the contrib
>> modules. I went to the /contrib folder in the port I used to install
>> PostgreSQL (/usr/ports/database/postgresql-8.0.4/contrib). I excluded
>> /cube and /mysql from the Makefile (cube seemed to be failing, and why
>> would I want MySQL stuff on PostgreSQL?). I then did a gmake install.
>> All contribs seemed to install fine. I reran the restore on a clean DB
>> and most of the errors went away, but I am still getting errors, all
>> related to system functions looking for '$libdir/admin'.
>>
>> Does anyone know what I could have missed here?
>
> It looks like you built from source rather than using the FreeBSD ports
> system. Assuming you have a fresh FreeBSD install, one would do the
> following to achieve your desired end goal:
>
> 1) cd /usr/ports/databases/postgresql80-server
> 2) make && make install
> 3) pay attention to the notes at the end about tuning your kernel for
>    shared memory usage (if you plan on having lots of connections)
> 4) cd /usr/ports/databases/postgresql-contrib
> 5) make && make install
> 6) /usr/local/etc/rc.d/010.pgsql.sh initdb
> 7) /usr/local/etc/rc.d/010.pgsql.sh start
>
> Now if your ports tree is out of date (or even if not), you may want to
> do the following instead:
>
> 1) pkg_add -r postgresql80-server
> 2) pkg_add -r postgresql-contrib
>
> Then continue with steps 6 and 7 above. You can verify the package
> installation via pkg_info. Also, you may need to build (via ports) or
> add (via pkg_add) postgresql80-client.
>
> HTH
>
> Sven
Re: [GENERAL] Restoring Database created on windows on FreeBSD
Hi Sven

Thanks for the help. This looks like the kind of info I needed. One question: you suggest that I use pkg_add -r postgresql80-server. If this requires postgresql80-client, will it automatically download and install it? The docs I have read suggest this, but maybe you can confirm?

Thanks Craig

> Now if your ports tree is out of date (or even if not), you may want to
> do the following instead:
>
> 1) pkg_add -r postgresql80-server
> 2) pkg_add -r postgresql-contrib
>
> Then continue with steps 6 and 7 above. You can verify the package
> installation via pkg_info. Also, you may need to build (via ports) or
> add (via pkg_add) postgresql80-client.
>
> HTH
>
> Sven
Re: [GENERAL] Functions returning RECORD
This was very helpful. Thank you.

Craig

> On Thu, Jan 13, 2005 at 07:58:33PM +0200, Craig Bryden wrote:
>
>> When I run select * from GetAccountInfo(100) I get the following
>> error message:
>> ERROR: a column definition list is required for functions returning
>> "record"
>
> If the function will always return the same row type then create a
> composite type with CREATE TYPE and return that instead of RECORD.
>
>> Please can someone explain to me how to create a column definition list.
>
> See the documentation for SELECT in the Reference part of the
> PostgreSQL documentation; the Examples section shows a query with
> a column definition list. But you won't need a column definition
> list if you return a known type instead of RECORD.
>
> --
> Michael Fuhr
> http://www.fuhr.org/~mfuhr/
[GENERAL] how to update structural & data changes between PostgreSQL databases?
Hi,

I'm looking for advice on how to update both structural changes and data changes between PostgreSQL databases. Here is my situation: I have both a development and a production environment. In the development environment I have a PostgreSQL database that has many tables. Some of the tables contain records that must be removed when the database is in production (for example, records that were entered while testing the database), and other tables have information that must be maintained when the database is moved to production (for example, records that contain the contents of drop-down menus in my web application).

Here are my questions:

1. How should I do the initial move of the database from development to production, where the structure of the database is moved but only the data in specific tables is moved?

2. After further development is done in the development environment, how do I make those changes in the production database? I will need to handle the following:
a. structural changes (e.g. adding or removing fields from tables, adding new tables, removing old tables)
b. administrative changes (e.g. setting permissions on databases and/or tables)
c. data changes (e.g. adding new records to tables that have "maintained data")
d. ensuring that test data does not end up in the production database (e.g. fake users added to the system, or counters updated with inaccurate data)
e. ensuring that existing data is not removed from the production database (e.g. a user's account information is not deleted)

I would really appreciate any and all thoughts and ideas on how to go about these procedures.

Thank you in advance, Courtenay
[GENERAL] giving users access to specific databases
Hi,

I'm hoping someone can tell me how to go about this, or whether a solution is even possible with my current setup. I realize this question may go beyond pure Postgres topics and have more to do with how my hosting company has their servers configured, but this group seemed like my best option for help. If anyone has suggestions on what other groups might be helpful to post this question to, I would really appreciate it.

I do my database & application work on a shared Linux server provided by my hosting company. It appears they have just one installation of Postgres on the server and that all of their customers on that server are able to create databases and users under their account. I'm not quite sure how they have enabled specific user accounts for access to my specific part of the server, but I do know that when I log in I have access to a cPanel interface, email configuration, all of my files on the server, access to create new Postgres databases, and a link to phpPgAdmin.

The difficulty I have is that in order to give another of the developers access to a specific database, I must give them the username and password to my entire account on their server. I've spent weeks corresponding with them on how I can give individual access to just a specific database. We finally tried installing a separate instance of phpPgAdmin in my web directory. I can get access to my databases through this, but it turns out I still need to use my main username and password to access the databases, since we realized they needed to tighten security so other people on the server couldn't get access to my databases. Below is a copy of the last email I got from the hosting company. If anyone has any suggestions on what would fix this problem I would really appreciate it. I'm not set on having to use phpPgAdmin, but I would like some sort of graphical interface available so less-technical users can update information in the database without having to learn SQL.
Subject: Separate login for phpPgAdmin

The only way with phpPgAdmin is to allow you access to all databases, including those owned by you. This can cause a security issue where others would have access to your database as well. Unlike phpMyAdmin, phpPgAdmin does not seem to allow you to log in to a database with a database username unless the above option is enabled, therefore creating a security risk. The way Postgres is set to label owners, it uses your cPanel login as the owner, which is different from how MySQL tracks the databases. Due to this you would not be able to use phpPgAdmin with other users.

Thanks in advance, Courtenay
Re: [GENERAL] giving users access to specific databases
> You should play around with phpPgAdmin for a while and let us know
> specifically what we can help you with.

I've managed to figure out how to fix most of my user access problems, but I'm still having trouble limiting the list of databases users see when they log in through phpPgAdmin. I noticed in the config.inc.php file that I can set a user's view to only show the databases that they own (via the $conf['owned_only'] = true; parameter). What I would like to do is limit users to seeing the databases for which they are a user. For example, I would like to set a parameter something like $conf['user_only'] = true. This would be particularly helpful for double-checking which users have access to which databases, and also to eliminate confusion for less technically adept users who would otherwise have to wade through all sorts of databases owned by other people on the shared server my hosting company provides.

Thanks, Courtenay
[GENERAL] installing and using autodoc
Hi,

I'm interested in using postgresql_autodoc to document my Postgres databases on Linux, but I'm unable to find any information on how to install the program or use it. Can someone point me in the right direction? I've downloaded postgresql_autodoc-1.25.tar.gz from http://www.rbt.ca/autodoc/releasenotes.html, unzipped it, and have no idea what to do. Does it need to be compiled? If so, what command do I use? If not, what do I do to configure the program?

Thanks in advance, Courtenay
[GENERAL] PostgreSQL Logs
What logs does PostgreSQL produce?

1) It has transaction logs. Can it log connections? Are there other logs?

2) When I start PostgreSQL it outputs log messages to the console. Does this also log anywhere else?

What I'm trying to do is find all the PostgreSQL log files, find a way to load them into a database, and then have a program scan that data and look for anything out of the ordinary, or do reporting on the logs (like how many connections per hour, per day, etc.).

Thank you!

~ T r o y ~
Re: [GENERAL] PostgreSQL Logs
Basically, I'm looking for information on logging connections (who's connecting and when), what the users are doing when connected (creating tables, updating data, etc.), and whether there are any system messages (like when the PostgreSQL server is started, stopped, if it has problems, etc.). Thanks!

-Original Message-
From: Mike G [mailto:[EMAIL PROTECTED]
Sent: Thursday, May 27, 2004 10:27 PM
To: Postgresql
Cc: [EMAIL PROTECTED]
Subject: Re: [GENERAL] PostgreSQL Logs

Yes to the first two questions under number 1. Not sure what you are looking for on the last one. Yes to number 2 if you configure it correctly, although I don't know if it can write to a file and screen at the same time. Open the postgresql.conf file. This page in the docs might help: http://www.postgresql.org/docs/7.4/static/runtime-config.html

On Thu, May 27, 2004 at 05:39:19PM -0400, Postgresql wrote:
> What logs does postgresql produce?
>
> 1) It has transaction logs. Can it log connections? Are there other
> logs?
>
> 2) When I start postgresql it outputs log messages to the console.
> Does this also log anywhere else?
>
> What I'm trying to do is find all the postgresql log files and then find a
> way to load them into a database and then have a program scan that data and
> look for anything out of the ordinary or to do reporting on the logs (like
> how many connections per hour, per day, etc).
>
> Thank you!
>
> ~ T r o y ~
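As a sketch, the postgresql.conf knobs that cover the items asked about look roughly like this in a 7.4-era server (the exact names shifted in 8.0, e.g. log_destination replaced the syslog setting, so treat these as assumptions to verify against your version's runtime-config docs):

```
log_connections = true     # log each successful connection
log_statement = true       # log every SQL statement (7.4 boolean form)
log_min_messages = notice  # startup/shutdown and error messages
syslog = 2                 # send server output to syslog only (7.4)
```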
Re: [GENERAL] connecting to a windows application, visual basic
You are probably going to use ODBC. I don't know if there are native VB drivers for PostgreSQL. I have been using REALbasic on the Mac, and there are drivers to directly access PostgreSQL; connecting is very easy. Check out the ODBC availability for VB.

Ted

-Original Message-
From: "Evan Panagiotopoulos" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Date: Mon, 15 Oct 2001 20:56:27 -0400
Subject: [GENERAL] connecting to a windows application, visual basic

> Well, I am in the process of trying to find an easy way of connecting
> to a database. I picked Visual Basic because I am a little familiar with
> the language and its interface. How do I connect to the database?
>
> Any help/suggestions will be appreciated.
>
> Thanks,
>
> Evan Panagiotopoulos
> Technology, Library and Media Director
> Poughkeepsie City School District
[GENERAL] How to import data from MDB or pipe delimited file into PostgreSQL
What is the best way to import data from a Microsoft Access database into a PostgreSQL database? Should I export the data to a pipe- or comma-delimited file and then import it? *But how can I do that*? With mSQL it is easy: it has a built-in function where you specify the name of the text file you are importing, the database and table to load the data into, and which character should be treated as a separator (comma, pipe, etc.). TIA, Paulo
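One common answer, sketched below: export the Access table to a pipe-delimited text file, then load it with PostgreSQL's COPY command (or psql's \copy, which reads the file on the client side). The table and file names here are made up for illustration:

```sql
-- Assumes customers.txt was exported from Access as pipe-delimited text.
CREATE TABLE customers (id integer, name text, city text);

-- Server-side load (the file must be readable by the database server):
COPY customers FROM '/tmp/customers.txt' WITH DELIMITER '|';

-- Or client-side, from within psql:
-- \copy customers from 'customers.txt' with delimiter '|'
```

COPY fills the same role as the mSQL built-in mentioned above: one command naming the file, target table, and separator character.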
[GENERAL] dblink and hostname resolution problem
Hi list! I am using dblink on a Win32 system running PostgreSQL 8.2 to query a table on another similar system (Win32 / PostgreSQL 8.2). If I query the server with dblink('hostaddr=ip_address ...', ...), it works fine, but if I try to use the hostname instead (dblink('hostaddr=myserver.mydomain.com ...', ...)), I get an error: ERROR: could not establish connection DETAIL: could not translate host name "myserver.mydomain.com" to address: Unknown server error Both servers can resolve this address, though. Is this a known issue? Is there something I can do about it? Thanks a lot! Regards -- Arnaud
Re: [GENERAL] dblink and hostname resolution problem
[EMAIL PROTECTED] wrote: Hi list! I am using dblink on a Win32 system running PostgreSQL 8.2 to query a table on another similar system (Win32 / PostgreSQL 8.2). If I query the server with dblink('hostaddr=ip_address ...', ...), it works fine, but if I try to use the hostname instead (dblink('hostaddr=myserver.mydomain.com ...', ...)), I have an error : ERROR: could not establish connection DETAIL: could not translate host name "myserver.mydomain.com" to address: Unknown server error OK, I've got it: I had to use "host=..." instead of "hostaddr=...". Seems obvious now that I know it. Regards -- Arnaud
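To spell out the fix: in libpq connection strings, hostaddr expects a numeric IP address and deliberately skips name resolution, while host accepts a hostname. A sketch (the connection parameters and the result column list are illustrative):

```sql
-- Hostname belongs in host=, not hostaddr=
SELECT *
FROM dblink('host=myserver.mydomain.com dbname=mydb user=postgres',
            'SELECT id FROM remote_table')
     AS t(id integer);
```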
[GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?
In the hopes that someone has already blazed this trail ... I'm running Postgresql (v8.1.10) on Solaris 10 (Sparc) from within a non-global zone. I originally had the database "storage" in the non-global zone (e.g. /var/local/pgsql/data on a UFS filesystem) and was getting performance of "X" (e.g. from a TPC-like application: http://www.tpc.org). I then wanted to try relocating the database storage from the zone (UFS filesystem) over to a ZFS-based filesystem (where I could do things like set quotas, etc.). When I do this, I get roughly half the performance (X/2) I did on the UFS system. I did try to run some low-level I/O tests (i.e. http://www.iozone.org/) to see if there was a filesystem performance difference, but there doesn't seem to be any between the UFS and ZFS numbers I'm seeing. So, I was hoping someone might have already tried this type of setup and can provide some suggestions for helping boost the ZFS performance numbers (and save me a truss debugging session). Are there any known issues with using Postgresql and ZFS? Things I have already tried: - setting the record size of the ZFS filesystem to be 8K (vs the 128K default) -- no noticeable performance difference - adding the ZFS filesystem as a loopback (i.e. lofs) filesystem in the non-global zone -- no noticeable difference - adding the ZFS filesystem as a dataset filesystem in the non-global zone -- no noticeable difference Bob
Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?
Mayuresh, comments in-lined, below ...

- Original Message -
> From: Mayuresh Nirhali <[EMAIL PROTECTED]>
> To: [EMAIL PROTECTED]
> Cc: pgsql-general@postgresql.org
> Sent: Thursday, January 31, 2008 6:23:23 AM
> Subject: Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?
>
> Bob,
>
> First, which exact version of S10 are you using?

more /etc/release
  Solaris 10 8/07 s10s_u4wos_12b SPARC
  Copyright 2007 Sun Microsystems, Inc. All Rights Reserved.
  Use is subject to license terms. Assembled 16 August 2007

pkginfo SUNWpostgr-server
  system SUNWpostgr-server  The programs needed to create and run a PostgreSQL 8.1.10 server

> In general, I have seen having logs and data on different pools helps.
> Also, let us know your wal tuning parameters like commit_delay, fsync.
> If you are using S10u4, setting commit_delay to a non-zero number should
> help get better performance.

Thanks for the info. I'll give it a shot and let you know. Bob
Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?
[Following up on my own message.]

> Also, let us know your wal tuning parameters like commit_delay, fsync.

I haven't done any tuning as of yet. I'm running with the default settings produced by initdb. Bob
Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?
Don't even bother trying to tune ZFS until after you've tuned Postgres, otherwise you're wasting your time. -- Robert Treat Build A Brighter LAMP :: Linux Apache {middleware} PostgreSQL As it turns out, I think the ZFS-on-RAID setup I had is the problem[1]. After running some more I/O tests with "fsync" turned on, I noticed that filesystem "writes" were about 400% slower. Running PostgreSQL without "fsync" made the application run about 1.5 times faster on ZFS versus UFS. So, now it's back to the drawing board to figure out if I can make the synchronous writes faster. Bob [1] the suggestions made previously helped a bit, but not nearly enough to overcome the 50% drop originally noted.
Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris?
- Original Message From: Greg Smith <[EMAIL PROTECTED]> To: [EMAIL PROTECTED] Cc: pgsql-general@postgresql.org Sent: Sunday, February 3, 2008 8:43:28 PM Subject: Re: [GENERAL] Performance problems with Postgresql/ZFS/Non-global zones on Solaris? On Thu, 31 Jan 2008, [EMAIL PROTECTED] wrote: > I haven't done any tuning as of yet. I'm running with the default > settings produced by initdb. The default settings are junk and the disk pattern will change once they're set correctly, so tuning ZFS first and then PostgreSQL is probably backwards. You may return to tuning the database again after ZFS, but for the first shot I'd start with a somewhat tuned DB server and then play with the filesystem. Put the major postgresql.conf parameters in the right ballpark--shared_buffers, effective_cache_size, and a large setting for checkpoint_segments since I think you mentioned a write-heavy benchmark. You should do your own experiments with wal_sync_method; I haven't seen any tests that are really definitive on the best setting there for S10+ZFS and it kind of depends on the underlying hardware--try both open_datasync and fdatasync. Greg, Thanks for the reply. Unfortunately, the project I'm working on is trying to provide "database-as-a-service" functionality, so I can't really tune the DB since the application/load will vary by customer (and the whole idea was to abstract all the low-level tuning parameters from the customer because we aren't expecting "power" users). Bob
[GENERAL] PostgreSQL Magazine #01 is out !
Dear PostgreSQL users, I am very pleased to announce the release of the first issue of PostgreSQL Magazine. This issue #01 is brought to you thanks to the collective work of dozens of people. Writers, editors, reviewers. Kudos to them all! Here's a quick view of the ToC: - PostgreSQL 9.1 : 10 awesome new features - NoSQL : The Key Value store everyone ignored - Interview : Stefan Kaltenbrunner - Opinion : Funding PostgreSQL Features - Waiting for 9.2 : Cascading Streaming Replication - Tips & Tricks : PostgreSQL in Mac OS X Lion The magazine is available online and on paper. You can either: * Read it Online: http://pgmag.org/01/read * Buy the Print Edition: http://pgmag.org/01/buy * or Download the PDF: http://pgmag.org/01/download The magazine is currently available only in "US Letter" and "A4" formats. Finally, I would like to thank our benefactors… Fotolia.com has offered us a free subscription plan to access their stock photo database. We also received funding from PostgreSQL Europe (PGEU) and Software in the Public Interest (SPI). Thanks a lot to them! -- Damien Clochard PostgreSQL Magazine Editor in Chief PS: Soon we'll start preparing the next issue! If you want to join us, please subscribe to our mailing list: https://groups.google.com/group/pgmag/ -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] Problem with index in OR'd expression
Hello, I've been using PostgreSQL for a few years and mostly love it. Aside from a few (perceived, anyway) annoying limitations in PL/pgSQL (which I almost exclusively am using for db interaction), I'm very satisfied with it. I ran across this problem several months back and decided to blow it off back then, but now I'm really wanting to understand why it's happening, if there's a workaround, if I'm doing something wrong, and if it's just a shortcoming / bug, whether the developers are aware of it and might implement a fix some day. I've already done a fair amount of testing (a total of hours, I'm sure) and Googling around on it. I also read this FAQ on indexes: http://www.postgresql.org/docs/faqs.FAQ.html#item4.6 I think my issue is not explained by any of the resources I reviewed. To simplify my actual problem, I conducted a very simple test. First I defined a table: create table t ( c1 int primary key, c2 int ) with the single index (primary key). Then I filled the table with 100,000 rows using a quick pl/pgsql function (c1 = 1, 2, 3... 100,000). Then I ran these two tests (with EXPLAIN in pgAdmin): select * from t where c1 = 75000; select * from t where ((0 is null) OR (c1 = 75000)); The first one properly uses the index on c1, the second does not. Obviously, a human looking at the second one would realize it's essentially identical to the first and properly evaluate the (0 is null) part once (since it's immutable) and then ignore it for the rest of the searching. Now, I'm sure some of you might ask "why the hell are you doing that in the first place?" I have a good reason. I write a lot of pl/pgsql functions that are search functions with a list of *optional* parameters. I don't know ahead of time whether a user will include one or not. In MSSQL, what I'm able to do (with no obvious index problems that I've seen) is add those all to the WHERE clause like this: ((vC1 IS NULL) OR (C1 = vC1)) AND ((vC2 IS NULL) OR (C2 = vC2)) ...
(here vC1 and vC2 represent variables passed into the pl/pgsql function). So my questions were basically asked at the beginning of this post: is there another way to get the optimizer to understand what I'm trying to do here? Is this a known problem? Is it working as preferred and unlikely to change any time soon? Is there some setting I can hit somewhere to make it work like I want? The only solution that I've come up with so far is making dynamic all of my statements that otherwise wouldn't have to be, and then only including criteria for the ones that I really need (based on the tested nullness of the variables), but I find that highly annoying and have run into other problems as a result as well. I'd appreciate any suggestions you might have to help resolve this. Thanks, John Lawler
Re: [GENERAL] Problem with index in OR'd expression
Tom Lane wrote: Well, you could update --- 8.2 contains code to recognize that the IS NULL expression is constant, but prior releases do not. That's excellent to hear -- I'd missed that in my perusing of the changelogs between 8.0.x and 8.2. That does give me one more reason to upgrade. It appears I did not mention what version I was running -- sorry for that, though you guessed it was < 8.2. It's actually 8.0.x. However, if you're hoping to do this: ((vC1 IS NULL) OR (C1 = vC1)) AND ((vC2 IS NULL) OR (C2 = vC2)) ... you're still gonna lose because those are variables not constants ... Well, that *is* what I'm hoping to do. I understand how (0 IS NULL) is different from (variable IS NULL), but isn't it reasonable to expect that PG could evaluate that expression only once (knowing that the variable couldn't change during the course of the query execution) and then treat that expression as constant? I appreciate that you're saying that it won't work even in 8.2, but what I'm getting at is would it be possible to add it in the future? As I mentioned, I'm pretty sure that that must be what MSSQL (6.5, 7, 2000 and 2005 [all of which I've had some experience with]) seem to be doing. Now failing all of this, does any one have a better idea for what I'm trying to do? A simple syntax for optionally including WHERE criteria depending on the null-ness of variables (w/o having to go to dynamic execution)? Thanks for your reply Tom. jl ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [GENERAL] Problem with index in OR'd expression
[EMAIL PROTECTED] wrote: Tom Lane wrote: you're still gonna lose because those are variables not constants ... Well, that *is* what I'm hoping to do. I understand how (0 IS NULL) is different from (variable IS NULL), but isn't it reasonable to expect that PG could evaluate that expression only once (knowing that the variable couldn't change during the course of the query execution) and then treat that expression as constant? Also, I should mention that in my case, the "variables" are actually constants either defined in the function param list (and thus non-modifiable within the function body) or defined with: DECLARE vC1 CONSTANT varchar(10) := 'blah' -type syntax. I would submit that in that situation, it would be reasonable for a user to expect my suggested syntax to still use the indicated indexes. Do you agree? Is it possible that something other than picking up the 'IS NULL' boolean as constant has changed between 8.0.x & 8.2 that might make this work? Thanks, jl
Re: [GENERAL] Problem with index in OR'd expression
Tom Lane wrote: [EMAIL PROTECTED] writes: I would submit that in that situation, it would be reasonable for a user to expect my suggested syntax to still use the indicated indexes. The only thing that will make that work is if "indexed_col IS NULL" were an indexable condition, which it isn't because the PG index API only supports "indexed_col operator something" as an indexable condition (IS NULL is not an operator, and even if it were, there's no "something" on its righthand side). Fixing this has been on the radar screen for awhile, but it's not done, largely for lack of agreement about a reasonably clean way to change that API. Sorry to keep this issue alive even longer, Tom, but I think I may've been unclear with my example. I was referring to the situation where one has this in a WHERE clause: ((vConstant IS NULL) OR (Table.IndexedCol = vConstant)) where vConstant is a *constant* parameter in a pl/pgsql function. In the latest versions (8.1 *or* 8.2), would you expect this to successfully use the index on Table.IndexedCol and not have PG be confused (into a sequential scan) by the (vConstant IS NULL) expression? As I indicated, I'm currently running 8.0.x, and am wondering whether it would be worth the effort to upgrade to 8.1 or 8.2 (Gentoo doesn't yet have PG at 8.2, and I'm a bit lazy with installing things outside of Portage) to solve this issue or whether I should just enable a workaround for now and keep an eye on future releases for a better solution to this problem. Thanks again, John ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [GENERAL] Problem with index in OR'd expression
Ragnar wrote: Regardless of the issue of whether pl/pgsql could be expected to optimize this case, I find it difficult to imagine a scenario where this kind of coding makes sense. I understand that in some cases one would like to do this with a *variable* to simplify logic, but what possible gain can be had from doing this with a constant, other than obfuscation? Well, in one way it's a variable, but in another a constant. It's a variable in the context of general PG usage... e.g., my application code may call the function with whatever parameters a user chooses, leaving some parameters null and others not. Within the context of the function (after calling), these variables are constant and I'm attempting to use my OR syntax as shorthand to avoid having to use a dynamic statement *only* because of this situation. As I've mentioned, this approach seems to work with MSSQL 6.5+, which I assume we consider as a valid competitor to PG... if this didn't work anywhere else, I probably wouldn't even have brought it up. I'll re-iterate another question I attempted to pose, which was: what have other PG application developers done in this situation? Is it most common to just use dynamic statements? Thanks for your response. jl
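For the archives, the dynamic-statement workaround this thread keeps circling: build the WHERE clause in PL/pgSQL from only the parameters that were actually supplied, so the planner sees simple indexable conditions. A sketch written against the 8.0-era PL/pgSQL of the thread (table and parameter names are illustrative, and safe to concatenate only because they are integers):

```sql
CREATE OR REPLACE FUNCTION search_t(vc1 integer, vc2 integer)
RETURNS SETOF t AS $$
DECLARE
    sql text := 'SELECT * FROM t WHERE true';
    rec t%ROWTYPE;
BEGIN
    -- Append a condition only when its parameter is non-null, so each
    -- generated query contains nothing but indexable clauses.
    IF vc1 IS NOT NULL THEN
        sql := sql || ' AND c1 = ' || vc1::text;
    END IF;
    IF vc2 IS NOT NULL THEN
        sql := sql || ' AND c2 = ' || vc2::text;
    END IF;
    FOR rec IN EXECUTE sql LOOP
        RETURN NEXT rec;
    END LOOP;
    RETURN;
END;
$$ LANGUAGE plpgsql;
```

For text parameters one would pass values through quote_literal() instead of casting, to avoid SQL injection through the concatenated string.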
[GENERAL] pgpool log errors
I've started using pgpool and while everything appears to be working, I've been getting a lot of the following errors in my logs: ERROR: pid 14761: pool_read: EOF encountered This seems to be simple enough - the client/user probably just canceled the request and isn't anything to be concerned about? I get about a dozen of these per hour on a system that has about 25,000 connections/day. ERROR: pid 14761: ProcessFrontendResponse: failed to read kind I have no idea what this one means. I'm concerned because I get a lot of these: between one and two dozen per hour. I've googled. I've searched the postgres mailing list archives. Asked around in IRC. Can't really find anything to explain it. I would greatly appreciate clarification from anyone with insight into this. Regards,
[GENERAL] Advice on efficient approach to aggregation statistics
Hi - I have a table for which I want to fit an exponential curve to values in one column and compute a correlation statistic on the fit of the values to the exponential curve. The table consists of the following three columns: {"ID","Date","Value"}. The desired result of the exercise is to have a list of "ID"s, with the associated correlation statistic (based on the "Value" column) for a subset of "Date"s. The "fitted" exponential curve is not a true exponential regression, for performance reasons. It uses the exponential form y = a * exp(b * x), but selects the constant values 'a' & 'b' using only the first and last "Value"s (as sorted by "Date") -- so it's really a best-fit exponential curve which is assured to pass through the first and last sorted points. The following is a simplified example (assuming the best practice solution involves the development of an aggregate function):

Table (primary key is {"ID","Date"}):

ID | Date       | Value
---+------------+--------
A  | 2010-11-12 | 48.0000
A  | 2010-11-11 | 24.0000
A  | 2010-11-10 | 12.0000
A  | 2010-11-09 |  6.0000
A  | 2010-11-08 |  3.0000
A  | 2010-11-07 |  1.5000
B  | 2010-11-12 | 33.1479
B  | 2010-11-11 | 19.5030
B  | 2010-11-10 | 11.4749
B  | 2010-11-09 |  6.7514
B  | 2010-11-08 |  3.9723
B  | 2010-11-07 |  2.3371

Hypothetical query:

SELECT "ID", correl_exp("Value") AS "Corr"
FROM Table
WHERE "Date" >= '2010-11-09'
GROUP BY "ID";

Expected output:

ID | Corr
---+--------
A  | 1.0000
B  | 0.8690

If attempting to reconstruct the simple example, the following are the values for constants 'a' and 'b' for "ID"s A & B (also note that values for dates 11/7 and 11/8 are ignored given the hypothetical query): A: a = 3.0000, b = 0.6931 B: a = 1.4283, b = 0.9513 Questions: 1) What is the best approach to accomplish the expected result? I would think the creation of an aggregate function would be the "best" option (i.e. highest performance with little ongoing database maintenance), but thought a nested SQL statement may also work.
2) If the creation of an aggregate function is the best option, A) What is the highest-performance language (or does it even matter)? PL/C, PL/R, PL/pgSQL, other? Can one use pre-compiled shared libraries? B) What is the best state function to use to construct an array of "Value"s, sorted by "Date" (as both the first and the last value need to be known to compute the exponential curve)? I have heard array_append has performance issues. Many thanks - Will PostgreSQL v. 8.3.8 (expecting to upgrade to v9.0.1 soon -- so the best-practice solution should be compatible with v9.0.1 -- if the best solution takes advantage of improvements in v9.0.1 and is incompatible with v8.3.8, please note that and I can implement the solution after the upgrade)
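One plain-SQL alternative to a custom aggregate, sketched under the poster's planned upgrade (it needs the window functions added in 8.4, so it would not run on 8.3.8): compute 'a' and 'b' per "ID" from the first and last sorted points, then use the built-in corr() aggregate between the observed values and the fitted curve. Column names follow the example above; it assumes at least two rows per "ID" and positive values (so the logarithm is defined):

```sql
WITH pts AS (
    SELECT "ID", "Value",
           row_number()         OVER w AS x,
           first_value("Value") OVER w AS v_first,
           last_value("Value")  OVER (PARTITION BY "ID" ORDER BY "Date"
               ROWS BETWEEN UNBOUNDED PRECEDING
                        AND UNBOUNDED FOLLOWING) AS v_last,
           count(*) OVER (PARTITION BY "ID")     AS n
    FROM "Table"
    WHERE "Date" >= '2010-11-09'
    WINDOW w AS (PARTITION BY "ID" ORDER BY "Date")
)
SELECT "ID",
       -- b = ln(v_last / v_first) / (n - 1); a = v_first / exp(b);
       -- fitted y = a * exp(b * x) passes through the first and last
       -- points, and corr() measures how well the rest follow it.
       corr("Value",
            v_first * exp(ln(v_last / v_first) / (n - 1) * (x - 1)))
         AS "Corr"
FROM pts
GROUP BY "ID";
```

For "ID" A above this reproduces a = 3.0000 and b = 0.6931, and the fitted curve matches every point, so the correlation comes out as a perfect 1.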
[GENERAL] Log-shipping replication in one machine
We have installed PostgreSQL 9.3 on Ubuntu. We want to ask: is it possible to do log-shipping replication on one machine with different ports, such as port 5435 as a master while 5436 is a slave? We tried that process on one machine but still get an error on the slave: warning: connection to the database failed, disabling startup checks: psql: FATAL: the database system is starting up We followed the log-shipping replication process described at http://www.themagicnumber.es/replication-in-postgresql-i?lang=en We hope all of you can help us to solve this problem. Thank you. This email was sent by nurul (via Nabble). Discussion: http://postgresql.1045698.n5.nabble.com/Log-shipping-replication-in-one-machine-tp5823774.html
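For reference, two instances on one machine only need separate data directories and ports. A minimal 9.3-era sketch (archive path is illustrative):

```
# master (port 5435) -- postgresql.conf
port = 5435
wal_level = hot_standby
archive_mode = on
archive_command = 'cp %p /var/lib/postgresql/archive/%f'

# standby (port 5436) -- postgresql.conf
port = 5436
hot_standby = on

# standby -- recovery.conf (in the standby's data directory)
standby_mode = 'on'
restore_command = 'cp /var/lib/postgresql/archive/%f %p'
```

The "FATAL: the database system is starting up" message typically means the standby is in recovery but not accepting read-only connections, which is what happens when hot_standby = on is missing on the standby (or wal_level is below hot_standby on the master).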
Re: [GENERAL] Log-shipping replication in one machine
Thank you for your response. May I know what the difference between log shipping and streaming replication actually is? I'm sorry, I am very new to PostgreSQL and still confused by these two. This email was sent by nurul (via Nabble).
Re: [GENERAL] How to avoid base backup in automated failover
Hey Amit Kapila, Thank you for the quick reply. How can I implement this patch on Windows? I am using PostgreSQL 9.1.1 on Windows.
Re: [GENERAL] How to avoid base backup in automated failover
On Wednesday, October 17, 2012 3:09 PM chinnaobi wrote: > Hey Amitkapila, > > Thank you for the quick reply. > > How can I implement this patch on Windows? I am using PostgreSQL 9.1.1 > on Windows. > If the patch serves the feature you require, then once it gets committed (there are a few bugs yet to be resolved), the feature will be available for Windows as well. As for the version, I think it will be available in 9.3 only. If you are in urgent need of this, you could merge it into your own copy of 9.1.1; however, that has its own implications. With Regards, Amit Kapila. -- Sent via pgsql-hackers mailing list (pgsql-hack...@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [GENERAL] [HACKERS] contrib/plantuner - enable PostgreSQL planner hints
hi there ... for this work i will include you in my evening prayers for at least one week. i know there has been a lot of discussion about this, but what you just posted is excellent and, more important, USEFUL to many people. i had something else in mind recently as well: virtual indexes. it would help people decide whether an index would make sense if it actually existed. in some cases this would make sense, as many datasets are just too big to try out whether an index helps. if there was a vote whether this should be in contrib or in core: +999 from me ... many thanks, hans Oleg Bartunov wrote: Hi there, this is an announcement of our new contribution module for PostgreSQL - Plantuner - enable planner hints (http://www.sai.msu.su/~megera/wiki/plantuner). Example: =# LOAD 'plantuner'; =# create table test(id int); =# create index id_idx on test(id); =# create index id_idx2 on test(id); =# \d test Table "public.test" Column | Type | Modifiers +-+--- id | integer | Indexes: "id_idx" btree (id) "id_idx2" btree (id) =# explain select id from test where id=1; QUERY PLAN --- Bitmap Heap Scan on test (cost=4.34..15.03 rows=12 width=4) Recheck Cond: (id = 1) -> Bitmap Index Scan on id_idx2 (cost=0.00..4.34 rows=12 width=0) Index Cond: (id = 1) (4 rows) =# set enable_seqscan=off; =# set plantuner.forbid_index='id_idx2'; =# explain select id from test where id=1; QUERY PLAN -- Bitmap Heap Scan on test (cost=4.34..15.03 rows=12 width=4) Recheck Cond: (id = 1) -> Bitmap Index Scan on id_idx (cost=0.00..4.34 rows=12 width=0) Index Cond: (id = 1) (4 rows) =# set plantuner.forbid_index='id_idx2,id_idx'; =# explain select id from test where id=1; QUERY PLAN - Seq Scan on test (cost=100.00..140.00 rows=12 width=4) Filter: (id = 1) (2 rows) Regards, Oleg _ Oleg Bartunov, Research Scientist, Head of AstroNet (www.astronet.ru), Sternberg Astronomical Institute, Moscow University, Russia Internet: o...@sai.msu.su, http://www.sai.msu.su/~megera/ phone: +007(495)939-16-83,
+007(495)939-23-83 -- Cybertec Schoenig & Schoenig GmbH Reyergasse 9 / 2 A-2700 Wiener Neustadt Web: www.postgresql-support.de -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
[GENERAL] immutable stable volatile
hello, I have read the documentation a couple of times and I still cannot figure out the following aspects: if a function does INSERT/UPDATE/DELETE, does it need to be STABLE or VOLATILE? if an otherwise immutable function executes 'nextval', should it also be VOLATILE? thanks, Razvan Radu
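For what it's worth, both answers follow from the definitions: STABLE and IMMUTABLE functions may not modify the database, so anything doing INSERT/UPDATE/DELETE must be VOLATILE, and a function calling nextval() must also be VOLATILE, since nextval() returns a different value on every call. A sketch (the table, sequence, and function names are made up for illustration):

```sql
-- Has a side effect (INSERT): must be VOLATILE, which is also the
-- default when no volatility category is declared.
CREATE FUNCTION log_hit(page text) RETURNS void AS $$
    INSERT INTO hits(page, at) VALUES ($1, now());
$$ LANGUAGE sql VOLATILE;

-- Wraps nextval(), whose result changes on every call, so the wrapper
-- cannot honestly be IMMUTABLE or STABLE either.
CREATE FUNCTION next_id() RETURNS bigint AS $$
    SELECT nextval('id_seq');
$$ LANGUAGE sql VOLATILE;
```

Declaring such a function IMMUTABLE would invite the planner to pre-evaluate or cache its result, silently skipping the side effect or reusing a stale sequence value.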
[GENERAL] set-level update fails with unique constraint violation
Hello, this fails with "duplicate key value": CREATE TABLE x ( i INT NOT NULL UNIQUE ); INSERT INTO x (i) VALUES (1), (2), (3); UPDATE x SET i = i + 1; are there any plans to make this work? -- Roman Neuhauser -- Sent via pgsql-general mailing list (pgsql-general@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
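Since PostgreSQL 9.0, this can be made to work by declaring the unique constraint DEFERRABLE, which moves the uniqueness check from per-row to end-of-statement or commit. A sketch:

```sql
CREATE TABLE x (
    i INT NOT NULL UNIQUE DEFERRABLE INITIALLY IMMEDIATE
);
INSERT INTO x (i) VALUES (1), (2), (3);

-- Defer the check for this transaction, so uniqueness is verified
-- only at COMMIT, after every row has been updated:
BEGIN;
SET CONSTRAINTS ALL DEFERRED;
UPDATE x SET i = i + 1;
COMMIT;
```

Declaring the constraint INITIALLY DEFERRED instead would make the plain UPDATE succeed without the explicit SET CONSTRAINTS; either way this requires 9.0 or later, where unique constraints first became deferrable.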
[GENERAL] Update links
Hi! I think there are dead links on this page: http://www.postgresql.org/community/international/ These links cannot be accessed: Chinese (Simplified) Simplified Chinese PostgreSQL Community Chinese (Traditional) Traditional Chinese PostgreSQL Community As of 1 May 2013, I could not open the mentioned pages. I suggest removing or updating them, just to keep things organized. Thanks. I appreciate your website for easy navigation. Regards, C. H. D. PostgreSQL User